Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
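The practical size limits quoted above follow from each method's formal cost scaling with system size. A minimal Python sketch (the scaling exponents are the commonly quoted formal ones, used here for illustration, not figures from the paper):

```python
# Illustrative sketch: formal cost scaling of common electronic-structure
# methods with system size N. CCSD(T) scales roughly O(N^7), MP2 O(N^5),
# and conventional DFT O(N^3), which is why each method's practical size
# limit differs by roughly an order of magnitude.

SCALING = {"CCSD(T)": 7, "MP2": 5, "DFT": 3}

def cost_ratio(method: str, size_factor: float) -> float:
    """Relative cost increase when the system grows by size_factor."""
    return size_factor ** SCALING[method]

for method in SCALING:
    print(f"{method}: doubling the molecule costs {cost_ratio(method, 2):.0f}x more")
```

Doubling a molecule is roughly 8x more expensive in DFT but 128x in CCSD(T), which is why coupled cluster stalls near 10 atoms while DFT reaches hundreds.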
Introduction to Theoretical Modelling
NASA Astrophysics Data System (ADS)
Davis, Matthew J.; Gardiner, Simon A.; Hanna, Thomas M.; Nygaard, Nicolai; Proukakis, Nick P.; Szymańska, Marzena H.
2013-02-01
We briefly overview commonly encountered theoretical notions arising in the modelling of quantum gases, intended to provide a unified background to the `language' and diverse theoretical models presented elsewhere in this book, and aimed particularly at researchers from outside the quantum gases community.
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
Theoretical Models of Astrochemical Processes
NASA Technical Reports Server (NTRS)
Charnley, Steven
2009-01-01
Interstellar chemistry provides a natural laboratory for studying exotic species and processes at densities, temperatures, and reaction rates that are difficult or impractical to address in the laboratory. Thus, many chemical reactions considered too slow by the standards of terrestrial chemistry can be observed and modeled. Curious proposals concerning the nature and chemistry of complex interstellar organic molecules will be described. Catalytic reactions on grain surfaces can, in principle, lead to a large variety of species, and this has motivated many laboratory and theoretical studies. Gas phase processes may also build large species in molecular clouds. Future laboratory data and computational tools needed to construct accurate chemical models of various astronomical sources to be observed by Herschel and ALMA will be outlined.
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
Isodesmic reaction for accurate theoretical pKa calculations of amino acids and peptides.
Sastre, S; Casasnovas, R; Muñoz, F; Frau, J
2016-04-28
Theoretical and quantitative prediction of pKa values at low computational cost is a current challenge in computational chemistry. We report that the isodesmic reaction scheme provides semi-quantitative predictions (i.e. mean absolute errors of 0.5-1.0 pKa unit) for the pKa1 (α-carboxyl), pKa2 (α-amino) and pKa3 (sidechain groups) of a broad set of amino acids and peptides. This method fills the gaps of thermodynamic cycles for the computational pKa calculation of molecules that are unstable in the gas phase or undergo proton transfer reactions or large conformational changes from solution to the gas phase. We also report the key criteria to choose a reference species to make accurate predictions. This method is computationally inexpensive and makes use of standard density functional theory (DFT) and continuum solvent models. It is also conceptually simple and easy to use for researchers not specialized in theoretical chemistry methods. PMID:27052591
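The isodesmic (proton-exchange) scheme described above can be sketched in a few lines. The working relation is pKa(HA) = pKa(ref) + ΔG_exchange / (RT ln 10), where ΔG_exchange is the solution-phase free energy of HA + Ref⁻ → A⁻ + RefH, so no gas-phase species ever enters. All numeric inputs below are invented placeholders, not values from the paper:

```python
import math

# Sketch of the isodesmic (proton-exchange) pKa scheme:
#   HA + Ref-  ->  A- + RefH
#   pKa(HA) = pKa(ref) + dG_exchange / (R*T*ln 10)
# dG_exchange would come from, e.g., DFT with a continuum solvent model.

R = 1.98720425864083e-3  # gas constant, kcal/(mol*K)

def pka_isodesmic(pka_ref: float, dg_exchange_kcal: float, temp_k: float = 298.15) -> float:
    """pKa of the target acid from a reference acid of known pKa."""
    return pka_ref + dg_exchange_kcal / (R * temp_k * math.log(10))

# Hypothetical acid referenced against acetic acid (pKa 4.76):
print(round(pka_isodesmic(4.76, -1.36), 2))
```

A good reference species (the "key criteria" the abstract mentions) keeps ΔG_exchange small, so systematic solvation errors cancel between the two sides of the reaction.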
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
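The kind of performance predictor the abstract describes can be sketched as follows. The cost constants and simple 1-D decomposition are assumptions for illustration, not the authors' model:

```python
# Minimal sketch: predict the runtime of a 1-D grid partition as the
# bottleneck processor's cost, where each processor pays a per-cell compute
# cost plus a per-neighbor communication cost. Such a predictor lets a
# remapper compare candidate partitions before committing to one.

def predicted_time(cells_per_proc, compute_cost=1.0, comm_cost=5.0):
    """Runtime estimate: slowest processor of compute + boundary exchange."""
    n = len(cells_per_proc)
    times = []
    for i, cells in enumerate(cells_per_proc):
        boundaries = (i > 0) + (i < n - 1)  # neighbors in a 1-D decomposition
        times.append(cells * compute_cost + boundaries * comm_cost)
    return max(times)

balanced = [25, 25, 25, 25]
skewed = [40, 20, 20, 20]
print(predicted_time(balanced), predicted_time(skewed))
```

The imbalanced partition is predicted slower even though the total work is identical, which is exactly the signal a remapping schedule needs.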
Accurate spectral modeling for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Gupta, S. K.
1977-01-01
Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths. These are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The range of validity of other models and correlations are discussed.
Universality: Accurate Checks in Dyson's Hierarchical Model
NASA Astrophysics Data System (ADS)
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
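The linear-fitting step can be illustrated with synthetic data: near βc the susceptibility diverges as χ ≈ A(βc − β)^(−γ), so the slope of log χ versus log(βc − β) recovers −γ. A Python sketch with synthetic values standing in for the hierarchical-model data:

```python
import math

# Leading-exponent extraction by linear fitting: chi ~ A*(beta_c - beta)**(-gamma)
# implies log(chi) = log(A) - gamma*log(beta_c - beta), a straight line whose
# slope is -gamma. The data below are synthetic, generated with gamma = 1.2991.

def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

gamma_true, beta_c = 1.2991, 1.0
ts = [beta_c - beta for beta in (0.90, 0.95, 0.98, 0.99, 0.995)]
xs = [math.log(t) for t in ts]
ys = [math.log(t ** (-gamma_true)) for t in ts]
print(round(-fit_slope(xs, ys), 4))
```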
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions at several sites in China will significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.
Accurate method of modeling cluster scaling relations in modified gravity
NASA Astrophysics Data System (ADS)
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.
Theoretical models for supernovae
Woosley, S.E.; Weaver, T.A.
1981-09-21
The results of recent numerical simulations of supernova explosions are presented and a variety of topics discussed. Particular emphasis is given to (i) the nucleosynthesis expected from intermediate mass (10 M☉ ≤ M ≤ 100 M☉) Type II supernovae and detonating white dwarf models for Type I supernovae, (ii) a realistic estimate of the γ-line fluxes expected from this nucleosynthesis, (iii) the continued evolution, in one and two dimensions, of intermediate mass stars wherein iron core collapse does not lead to a strong, mass-ejecting shock wave, and (iv) the evolution and explosion of very massive stars (M ≥ 100 M☉) of both Population I and III. In one dimension, nuclear burning following a failed core bounce does not appear likely to lead to a supernova explosion although, in two dimensions, a combination of rotation and nuclear burning may do so. Near solar proportions of elements from neon to calcium and very brilliant optical displays may be created by hypernovae, the explosions of stars in the mass range 100 M☉ to 300 M☉. Above approximately 300 M☉ a black hole is created by stellar collapse following carbon ignition. Still more massive stars may be copious producers of ⁴He and ¹⁴N prior to their collapse on the pair instability.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
An accurate temperature correction model for thermocouple hygrometers.
Savage, M J; Cass, A; de Jager, J M
1982-02-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.
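The two-temperature calibration idea in this abstract can be sketched as follows. The slope values and units are invented placeholders, not the authors' data:

```python
# Hedged sketch of two-temperature slope correction: calibrate the hygrometer
# slope (microvolts per MPa of water potential) at two temperatures,
# interpolate linearly to the working temperature, then convert a measured
# signal to water potential with the corrected slope.

def slope_at(temp_c, t1, s1, t2, s2):
    """Linear interpolation of the calibration slope between two calibrations."""
    return s1 + (s2 - s1) * (temp_c - t1) / (t2 - t1)

def water_potential(signal_uv, temp_c, t1=15.0, s1=4.0, t2=35.0, s2=6.0):
    """Water potential (MPa) from a signal (uV) at the working temperature."""
    return signal_uv / slope_at(temp_c, t1, s1, t2, s2)

print(water_potential(10.0, 25.0))  # slope = 5.0 uV/MPa -> 2.0 MPa
```

This is the practical payoff the abstract claims: with a temperature-corrected slope, one calibration temperature (e.g. 25 degrees C) suffices.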
Theoretical Modeling for Hepatic Microwave Ablation
Prakash, Punit
2010-01-01
Thermal tissue ablation is an interventional procedure increasingly being used for treatment of diverse medical conditions. Microwave ablation is emerging as an attractive modality for thermal therapy of large soft tissue targets in short periods of time, making it particularly suitable for ablation of hepatic and other tumors. Theoretical models of the ablation process are a powerful tool for predicting the temperature profile in tissue and resultant tissue damage created by ablation devices. These models play an important role in the design and optimization of devices for microwave tissue ablation. Furthermore, they are a useful tool for exploring and planning treatment delivery strategies. This review describes the status of theoretical models developed for microwave tissue ablation. It also reviews current challenges, research trends and progress towards development of accurate models for high temperature microwave tissue ablation. PMID:20309393
Theoretical Models of Spintronic Materials
NASA Astrophysics Data System (ADS)
Damewood, Liam James
In the past three decades, spintronic devices have played an important technological role. Half-metallic alloys have drawn much attention due to their special properties and promising spintronic applications. This dissertation describes some theoretical techniques used in first-principles calculations of alloys that may be useful for spintronic device applications, with an emphasis on half-metallic ferromagnets. I consider three types of simple spintronic materials using a wide range of theoretical techniques. They are (a) transition metal based half-Heusler alloys, like CrMnSb, where the ordering of the two transition metal elements within the unit cell can cause the material to be a ferromagnetic semiconductor or a semiconductor with zero net magnetic moment, (b) half-Heusler alloys involving Li, like LiMnSi, where the Li stabilizes the structure and increases the magnetic moment of zinc blende half-metals by one Bohr magneton per formula unit, and (c) zinc blende alloys, like CrAs, where many-body techniques improve the fundamental gap by considering the physical effects of the local field. Also, I provide a survey of the theoretical models and numerical methods used to treat the above systems.
NASA Astrophysics Data System (ADS)
Grassi, Alba; Mariño, Marcos
2015-02-01
Some matrix models admit, on top of the usual 't Hooft expansion, an M-theory-like expansion, i.e. an expansion at large N but where the rest of the parameters are fixed, instead of scaling with N. These models, which we call M-theoretic matrix models, appear in the localization of Chern-Simons-matter theories, and also in two-dimensional statistical physics. Generically, their partition function receives non-perturbative corrections which are not captured by the 't Hooft expansion. In this paper, we discuss general aspects of these types of matrix integrals and we analyze in detail two different examples. The first one is the matrix model computing the partition function of supersymmetric Yang-Mills theory in three dimensions with one adjoint hypermultiplet and N_f fundamentals, which has a conjectured M-theory dual, and which we call the N_f matrix model. The second one, which we call the polymer matrix model, computes form factors of the 2d Ising model and is related to the physics of 2d polymers. In both cases we determine their exact planar limit. In the N_f matrix model, the planar free energy reproduces the expected behavior of the M-theory dual. We also study their M-theory expansion by using Fermi gas techniques, and we find non-perturbative corrections to the 't Hooft expansion.
A quick accurate model of nozzle backflow
NASA Technical Reports Server (NTRS)
Kuharski, R. A.
1991-01-01
Backflow from nozzles is a major source of contamination on spacecraft. If the craft contains any exposed high voltages, the neutral density produced by the nozzles in the vicinity of the craft needs to be known in order to assess the possibility of Paschen breakdown or the probability of sheath ionization around a region of the craft that collects electrons for the plasma. A model for backflow has been developed for incorporation into the Environment-Power System Analysis Tool (EPSAT) which quickly estimates both the magnitude of the backflow and the species makeup of the flow. By combining the backflow model with the Simons (1972) model for continuum flow it is possible to quickly estimate the density of each species from a nozzle at any position in space. The model requires only a few physical parameters of the nozzle and the gas as inputs and is therefore ideal for engineering applications.
Accurate Drawbead Modeling in Stamping Simulations
NASA Astrophysics Data System (ADS)
Sester, M.; Burchitz, I.; Saenz de Argandona, E.; Estalayo, F.; Carleer, B.
2016-08-01
An adaptive line bead model that continually updates according to the changing conditions during the forming process has been developed. In these calculations, the adaptive line bead's geometry is treated as a 3D object where relevant phenomena like the hardening curve, yield surface, through-thickness stress effects and contact description are incorporated. The effectiveness of the adaptive drawbead model will be illustrated by an industrial example.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
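In the simplest limiting case (negligible experimental uncertainties, Gaussian deviations), the maximum-likelihood estimate of such a model error reduces to the rms deviation between theory and experiment. A hedged sketch of that special case only, not the paper's full derivation:

```python
import math

# If model-vs-experiment deviations are treated as Gaussian with unknown
# spread sigma, maximizing the likelihood over sigma (with negligible
# experimental error bars) gives sigma^2 = mean squared deviation.

def model_error(theory, experiment):
    """Maximum-likelihood estimate of a model's intrinsic error (rms deviation)."""
    devs = [t - e for t, e in zip(theory, experiment)]
    return math.sqrt(sum(d * d for d in devs) / len(devs))

print(model_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```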
Przybylski, D.; Shelyag, S.; Cally, P. S.
2015-07-01
We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave.
An articulated statistical shape model for accurate hip joint segmentation.
Kainmueller, Dagmar; Lamecker, Hans; Zachow, Stefan; Hege, Hans-Christian
2009-01-01
In this paper we propose a framework for fully automatic, robust and accurate segmentation of the human pelvis and proximal femur in CT data. We propose a composite statistical shape model of femur and pelvis with a flexible hip joint, for which we extend the common definition of statistical shape models as well as the common strategy for their adaptation. We do not analyze the joint flexibility statistically, but model it explicitly by rotational parameters describing the bending in a ball-and-socket joint. A leave-one-out evaluation on 50 CT volumes shows that image driven adaptation of our composite shape model robustly produces accurate segmentations of both proximal femur and pelvis. As a second contribution, we evaluate a fine grain multi-object segmentation method based on graph optimization. It relies on accurate initializations of femur and pelvis, which our composite shape model can generate. Simultaneous optimization of both femur and pelvis yields more accurate results than separate optimizations of each structure. Shape model adaptation and graph based optimization are embedded in a fully automatic framework. PMID:19964159
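The explicit joint parametrization can be illustrated as follows: femur points are rotated about the hip-joint centre by ball-and-socket angles. The geometry and the particular angle convention below are assumptions for illustration, not the authors' implementation:

```python
import math

# Illustrative sketch of posing one shape relative to another via explicit
# rotational joint parameters: rotate a femur landmark about the joint centre
# by a yaw (about z) followed by a pitch (about y).

def rotate_about(point, centre, yaw, pitch):
    """Rotate point about centre by yaw (z-axis) then pitch (y-axis)."""
    x, y, z = (p - c for p, c in zip(point, centre))
    # yaw about z
    x, y = x * math.cos(yaw) - y * math.sin(yaw), x * math.sin(yaw) + y * math.cos(yaw)
    # pitch about y
    x, z = x * math.cos(pitch) + z * math.sin(pitch), -x * math.sin(pitch) + z * math.cos(pitch)
    return tuple(round(c + v, 6) for v, c in zip((x, y, z), centre))

print(rotate_about((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), math.pi / 2, 0.0))
```

During model adaptation these angles are free parameters, so the femur can articulate without the joint pose being baked into the statistical shape statistics.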
APPRENTICESHIP--A THEORETICAL MODEL.
ERIC Educational Resources Information Center
DUFTY, NORMAN F.
AN INQUIRY INTO RECRUITMENT OF APPRENTICES TO SKILLED TRADES IN WESTERN AUSTRALIA INDICATED LITTLE CORRELATION BETWEEN THE NUMBER OF NEW APPRENTICES AND THE LEVEL OF INDUSTRIAL EMPLOYMENT OR THE TOTAL NUMBER OF APPRENTICES. THIS ARTICLE ATTEMPTS TO OUTLINE A MATHEMATICAL MODEL OF AN APPRENTICESHIP SYSTEM AND DISCUSS ITS IMPLICATIONS. THE MODEL, A…
Theoretical Modelling of Hot Stars
NASA Astrophysics Data System (ADS)
Najarro, F.; Hillier, D. J.; Figer, D. F.; Geballe, T. R.
1999-06-01
Recent progress towards model atmospheres for hot stars is discussed. A new generation of NLTE wind blanketed models, together with high S/N spectra of the hot star population in the central parsec, which are currently being obtained, will allow metal abundance determinations (Fe, Si, Mg, Na, etc). Metallicity studies of hot stars in the IR will provide major constraints not only on the theory of evolution of massive stars but also on our efforts to solve the puzzle of the central parsecs of the Galaxy. Preliminary results suggest that the metallicity of the Pistol Star is 3 times solar, thus indicating strong chemical enrichment of the gas in the Galactic Center.
Methods for accurate homology modeling by global optimization.
Joo, Keehyoung; Lee, Jinwoo; Lee, Jooyoung
2012-01-01
High accuracy protein modeling from its sequence information is an important step toward revealing the sequence-structure-function relationship of proteins and nowadays it becomes increasingly more useful for practical purposes such as in drug discovery and in protein design. We have developed a protocol for protein structure prediction that can generate highly accurate protein models in terms of backbone structure, side-chain orientation, hydrogen bonding, and binding sites of ligands. To obtain accurate protein models, we have combined a powerful global optimization method with traditional homology modeling procedures such as multiple sequence alignment, chain building, and side-chain remodeling. We have built a series of specific score functions for these steps, and optimized them by utilizing conformational space annealing, which is one of the most successful combinatorial optimization algorithms currently available.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Theoretical Modeling of Interstellar Chemistry
NASA Technical Reports Server (NTRS)
Charnley, Steven
2009-01-01
The chemistry of complex interstellar organic molecules will be described. Gas phase processes that may build large carbon-chain species in cold molecular clouds will be summarized. Catalytic reactions on grain surfaces can lead to a large variety of organic species, and models of molecule formation by atom additions to multiply-bonded molecules will be presented. The subsequent desorption of these mixed molecular ices can initiate a distinctive organic chemistry in hot molecular cores. The general ion-molecule pathways leading to even larger organics will be outlined. The predictions of this theory will be compared with observations to show how possible organic formation pathways in the interstellar medium may be constrained. In particular, the success of the theory in explaining trends in the known interstellar organics, in predicting recently-detected interstellar molecules, and, just as importantly, non-detections, will be discussed.
Theoretical models of helicopter rotor noise
NASA Technical Reports Server (NTRS)
Hawkings, D. L.
1978-01-01
For low speed rotors, it is shown that unsteady load models are only partially successful in predicting experimental levels. A theoretical model is presented which leads to the concept of unsteady thickness noise. This gives better agreement with test results. For high speed rotors, it is argued that present models are incomplete and that other mechanisms are at work. Some possibilities are briefly discussed.
NASA Astrophysics Data System (ADS)
Feller, David; Peterson, Kirk A.; Dixon, David A.
2008-11-01
High level electronic structure predictions of thermochemical properties and molecular structure are capable of accuracy rivaling the very best experimental measurements as a result of rapid advances in hardware, software, and methodology. Despite the progress, real world limitations require practical approaches designed for handling general chemical systems that rely on composite strategies in which a single, intractable calculation is replaced by a series of smaller calculations. As typically implemented, these approaches produce a final, or "best," estimate that is constructed from one major component, fine-tuned by multiple corrections that are assumed to be additive. Though individually much smaller than the original, unmanageable computational problem, these corrections are nonetheless extremely costly. This study presents a survey of the widely varying magnitude of the most important components contributing to the atomization energies and structures of 106 small molecules. It combines large Gaussian basis sets and coupled cluster theory up to quadruple excitations for all systems. In selected cases, the effects of quintuple excitations and/or full configuration interaction were also considered. The availability of reliable experimental data for most of the molecules permits an expanded statistical analysis of the accuracy of the approach. In cases where reliable experimental information is currently unavailable, the present results are expected to provide some of the most accurate benchmark values available.
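The composite strategy this abstract describes amounts to summing one large base value with small corrections assumed to be additive. A Python sketch with invented placeholder numbers (the component names are typical of such schemes, the values are not from the study):

```python
# Sketch of a composite "best estimate": one major component (e.g. a
# basis-set-extrapolated CCSD(T) atomization energy) fine-tuned by smaller
# corrections assumed additive. All numbers are hypothetical, in kcal/mol.

components = {
    "CCSD(T)/CBS base": 225.10,
    "core-valence correlation": 0.85,
    "scalar relativity": -0.21,
    "higher excitations (T -> Q)": 0.30,
    "zero-point energy": -10.50,
}

best_estimate = sum(components.values())
print(round(best_estimate, 2))
```

The point of the survey is precisely to quantify how large each such term is across 106 molecules, since every correction, though small, is costly to compute.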
Theoretical Models and Processes of Reading.
ERIC Educational Resources Information Center
Singer, Harry, Ed.; Ruddell, Robert B., Ed.
The first section of this two-part collection of articles contains six papers and their discussions read at a symposium on Theoretical Models and Processes of Reading. The papers cover the linguistic, perceptual, and cognitive components involved in reading. The models attempt to integrate the variables that influence the perception, recognition,…
A Theoretical Model of Intrapersonal Agenda.
ERIC Educational Resources Information Center
Yang, Jian
Prior research has shown that the media play an agenda-setting role in political campaigns. A theoretical model was developed to investigate intrapersonal agenda's relationship with certain contingent factors. To test the model a study of the intrapersonal agenda (personally perceived salience of public issues) was then conducted as part of the…
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
On the importance of having accurate data for astrophysical modelling
NASA Astrophysics Data System (ADS)
Lique, Francois
2016-06-01
The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.
Theoretical Frameworks for Multiscale Modeling and Simulation
Zhou, Huan-Xiang
2014-01-01
Biomolecular systems have been modeled at a variety of scales, ranging from explicit treatment of electrons and nuclei to continuum description of bulk deformation or velocity. Many challenges of interfacing between scales have been overcome. Multiple models at different scales have been used to study the same system or calculate the same property (e.g., channel conductance). Accurate modeling of biochemical processes under in vivo conditions and the bridging of molecular and subcellular scales will likely soon become reality. PMID:24492203
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
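The two-step strategy described in this abstract, in which a cheap coarse model prunes the candidate space and an expensive fine model re-ranks the survivors, can be sketched generically. This is a minimal illustration under stated assumptions: the candidate encoding, scoring functions, and pruning fraction are hypothetical, not the paper's actual circuit models.

```python
def two_step_search(candidates, coarse_score, fine_score, keep_frac=0.1):
    """Hierarchical two-step search (lower score is better).

    Step 1: rank all candidates with a cheap, low-complexity model
    and keep only a fraction of them (the reduced solution space).
    Step 2: re-rank the survivors with the expensive, fine-grained
    model and return the best candidate found.
    """
    ranked = sorted(candidates, key=coarse_score)
    n_keep = max(1, int(len(ranked) * keep_frac))
    survivors = ranked[:n_keep]  # pruned solution space
    return min(survivors, key=fine_score)
```

The speed-up comes from calling `fine_score` only on the small survivor set; the approach is sound as long as the coarse model ranks well enough that the true optimum survives the pruning step.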
Confronting theoretical models with CANDELS observations
NASA Astrophysics Data System (ADS)
Lu, Yu; CANDELS Collaboration
2014-01-01
Current galaxy formation models contain large uncertainties in modeling gas accretion, star formation, and feedback processes. These uncertainties can only be constrained by comprehensive and careful model-data comparisons. Three independently developed semi-analytic galaxy formation models are adopted to make predictions for CANDELS observations. A comparison study involving the three models reveals both common features shared by the models and discrepancies between them. The similarities in the predicted stellar mass functions indicate strong degeneracies between the models, which can only be broken by accurate measurements of the stellar mass functions at multiple redshifts. On the other hand, the models show large discrepancies in their predicted star formation histories and metallicity-stellar mass relations. These discrepancies stem from the uncertainties in modeling gas accretion and galactic outflows powered by feedback. The model comparisons suggest that, beyond directly constraining inflow and outflow observationally, more accurate measurements of the stellar mass, star formation rate, and metallicity of galaxies across a broad range of cosmic epochs will discriminate between models. Our study involving multiple models and exploration of the high-dimensional parameter space demonstrates that analysis of the full CANDELS dataset, including a self-consistent treatment of star formation rates, stellar masses, galaxy sizes, metallicity relations, and their evolution across a broad redshift range, is likely to significantly tighten the data constraints and shed light on the physics governing galaxy formation.
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
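As background (these are the standard hydrostatic-model relations, not the authors' specific discretization), the difficulty arises because the horizontal pressure gradient force on a sloping coordinate surface is the small residual of two large terms:

```latex
% Hydrostatic balance (geopotential \Phi, pressure p, gas constant R):
\frac{\partial \Phi}{\partial p} = -\frac{RT}{p}
% Horizontal pressure gradient force evaluated on a model coordinate
% surface s; the second term is the slope correction, which nearly
% cancels the first where coordinate surfaces are steeply inclined:
-\left(\frac{\partial \Phi}{\partial x}\right)_{p}
  = -\left(\frac{\partial \Phi}{\partial x}\right)_{s}
    + \frac{RT}{p}\left(\frac{\partial p}{\partial x}\right)_{s}
```

Assuming a constant potential temperature lapse rate between the pressure integration limits, as stated in the abstract, is what makes the vertical integration of the hydrostatic relation exact in this method.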
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Hybrid quantum teleportation: A theoretical model
Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira
2014-12-04
Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.
Accurate Low-mass Stellar Models of KOI-126
NASA Astrophysics Data System (ADS)
Feiden, Gregory A.; Chaboyer, Brian; Dotter, Aaron
2011-10-01
The recent discovery by Carter et al. of an eclipsing hierarchical triple system with two low-mass stars in a close orbit (KOI-126) appeared to reinforce the evidence that theoretical stellar evolution models cannot reproduce the observed mass-radius relation for low-mass stars. We present a set of stellar models for the three stars in the KOI-126 system that show excellent agreement with the observed radii. This agreement appears to be due to the equation of state implemented by our code. A significant dispersion in the observed mass-radius relation for fully convective stars is demonstrated, indicative of the influence of physics not currently incorporated in standard stellar evolution models. We also predict apsidal motion constants for the two M dwarf companions. These values should be observationally determined to within 1% by the end of the Kepler mission.
Theoretical models of neural circuit development.
Simpson, Hugh D; Mortimer, Duncan; Goodhill, Geoffrey J
2009-01-01
Proper wiring up of the nervous system is critical to the development of organisms capable of complex and adaptable behaviors. Besides the many experimental advances in determining the cellular and molecular machinery that carries out this remarkable task precisely and robustly, theoretical approaches have also proven to be useful tools in analyzing this machinery. A quantitative understanding of these processes can allow us to make predictions, test hypotheses, and appraise established concepts in a new light. Three areas that have been fruitful in this regard are axon guidance, retinotectal mapping, and activity-dependent development. This chapter reviews some of the contributions made by mathematical modeling in these areas, illustrated by important examples of models in each section. For axon guidance, we discuss models of how growth cones respond to their environment, and how this environment can place constraints on growth cone behavior. Retinotectal mapping looks at computational models for how topography can be generated in populations of neurons based on molecular gradients and other mechanisms such as competition. In activity-dependent development, we discuss theoretical approaches largely based on Hebbian synaptic plasticity rules, and how they can generate maps in the visual cortex very similar to those seen in vivo. We show how theoretical approaches have substantially contributed to the advancement of developmental neuroscience, and discuss future directions for mathematical modeling in the field. PMID:19427515
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of the skin, the subcutaneous layer, and the superficial musculoaponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscle and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, between the eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model, and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.
Simple theoretical models for composite rotor blades
NASA Technical Reports Server (NTRS)
Valisetty, R. R.; Rehfield, L. W.
1984-01-01
The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model representative of the size of a main rotor blade is analyzed in order to assess the importance of various influences. The findings of this model study suggest that, for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical-type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.
Chewing simulation with a physically accurate deformable model.
Pascale, Andra Maria; Ruge, Sebastian; Hauth, Steffen; Kordaß, Bernd; Linsen, Lars
2015-01-01
Nowadays, CAD/CAM software is used to compute the optimal shape and position of a new tooth model intended for a patient. With this possible future application in mind, we present in this article an independent, stand-alone interactive application that simulates the human chewing process and the deformation it produces in the food substrate. Chewing motion sensors are used to produce an accurate representation of the jaw movement. The substrate is represented by a deformable elastic model based on the linear finite element method, which preserves physical accuracy. Collision detection based on spatial partitioning is used to calculate the forces acting on the deformable model. Based on the calculated information, geometry elements are added to the scene to enhance the information available to the user. The goal of the simulation is to present a complete scene to the dentist, highlighting the points where the teeth came into contact with the substrate and giving information about how much force acted at these points, which makes it possible to indicate whether a tooth is being used incorrectly in the mastication process. Real-time interactivity is desired and achieved within limits, depending on the complexity of the employed geometric models. The presented simulation is a first step towards the overall project goal of interactively optimizing tooth position and shape under the investigation of a virtual chewing process using real patient data (Fig 1). PMID:26389135
Theoretical modeling for the stereo mission
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Burlaga, L. F.; Kaiser, M. L.; Ng, C. K.; Reames, D. V.; Reiner, M. J.; Gombosi, T. I.; Lugaz, N.; Manchester, W.; Roussev, I. I.; Zurbuchen, T. H.; Farrugia, C. J.; Galvin, A. B.; Lee, M. A.; Linker, J. A.; Mikić, Z.; Riley, P.; Alexander, D.; Sandman, A. W.; Cook, J. W.; Howard, R. A.; Odstrčil, D.; Pizzo, V. J.; Kóta, J.; Liewer, P. C.; Luhmann, J. G.; Inhester, B.; Schwenn, R. W.; Solanki, S. K.; Vasyliunas, V. M.; Wiegelmann, T.; Blush, L.; Bochsler, P.; Cairns, I. H.; Robinson, P. A.; Bothmer, V.; Kecskemety, K.; Llebaria, A.; Maksimovic, M.; Scholer, M.; Wimmer-Schweingruber, R. F.
2008-04-01
We summarize the theory and modeling efforts for the STEREO mission, which will be used to interpret the data of both the remote-sensing (SECCHI, SWAVES) and in-situ instruments (IMPACT, PLASTIC). The modeling includes the coronal plasma, in both open and closed magnetic structures, and the solar wind and its expansion outwards from the Sun, which defines the heliosphere. Particular emphasis is given to modeling of dynamic phenomena associated with the initiation and propagation of coronal mass ejections (CMEs). The modeling of the CME initiation includes magnetic shearing, kink instability, filament eruption, and magnetic reconnection in the flaring lower corona. The modeling of CME propagation entails interplanetary shocks, interplanetary particle beams, solar energetic particles (SEPs), geoeffective connections, and space weather. This review describes mostly existing models of groups that have committed their work to the STEREO mission, but is by no means exhaustive or comprehensive regarding alternative theoretical approaches.
Accurate, low-cost 3D-models of gullies
NASA Astrophysics Data System (ADS)
Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine
2015-04-01
Soil erosion is a widespread problem in arid and semi-arid areas, and its most severe form is gully erosion. Gullies often cut into agricultural farmland and can render an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in southern Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume, and activity. Using a Canon HF G30 camcorder, we recorded several series of Full HD videos at 25 fps. We then used Structure from Motion (SfM) to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, with an overlap of at least 80% between neighboring images. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Neighboring pixels of a blurry image tend to have similar color values, so we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper an image of similar objects. MATLAB subdivides the video into image intervals, and from each interval the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single images; the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and matches between all images and produced a point cloud. MeshLab was then used to build a surface from it using the Poisson surface reconstruction approach. Afterwards, we are able to calculate the size and volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
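The interval-based sharpest-frame selection described in this abstract can be sketched in Python rather than MATLAB. This is a sketch under stated assumptions: frames are grayscale NumPy arrays, and the function names are illustrative, not the authors' script.

```python
import numpy as np

def sharpness(frame):
    """Sharpness score: sum of absolute intensity derivatives.

    Neighboring pixels of a blurry frame have similar values, so its
    derivative sum is low; sharp frames of similar objects score high.
    """
    f = frame.astype(float)
    return np.abs(np.diff(f, axis=1)).sum() + np.abs(np.diff(f, axis=0)).sum()

def select_sharpest(frames, n_keep):
    """Split the frame sequence into n_keep equal intervals and keep
    the sharpest frame of each interval (e.g. 30,000 video frames in
    intervals of 20 yield 1500 selected images)."""
    interval = max(1, len(frames) // n_keep)
    selected = []
    for start in range(0, len(frames), interval):
        chunk = frames[start:start + interval]
        selected.append(max(chunk, key=sharpness))
    return selected[:n_keep]
```

Selecting one frame per interval, rather than the globally sharpest frames, keeps the chosen images spread along the camera path, which preserves the overlap between neighboring views that SfM requires.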
Towards Accurate Molecular Modeling of Plastic Bonded Explosives
NASA Astrophysics Data System (ADS)
Chantawansri, T. L.; Andzelm, J.; Taylor, D.; Byrd, E.; Rice, B.
2010-03-01
There is substantial interest in identifying the controlling factors that influence the susceptibility of polymer bonded explosives (PBXs) to accidental initiation. Numerous molecular dynamics (MD) simulations of PBXs using the COMPASS force field have been reported in recent years, where the validity of the force field in modeling the solid energetic-material (EM) fill has been judged solely on its ability to reproduce lattice parameters, which is an insufficient metric. Performance of the COMPASS force field in modeling EMs and the polymeric binder has been assessed by calculating structural, thermal, and mechanical properties, where only fair agreement with experimental data is obtained. We performed MD simulations using the COMPASS force field for the polymer binder hydroxyl-terminated polybutadiene and five EMs: cyclotrimethylenetrinitramine, 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane, 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane, 2,4,6-trinitro-1,3,5-benzenetriamine, and pentaerythritol tetranitrate. Predicted EM crystallographic and molecular structural parameters, as well as calculated properties for the binder, will be compared with experimental results for different simulation conditions. We also present novel simulation protocols that improve agreement between experimental and computational results, thus leading to accurate modeling of PBXs.
Towards accurate observation and modelling of Antarctic glacial isostatic adjustment
NASA Astrophysics Data System (ADS)
King, M.
2012-04-01
The response of the solid Earth to glacial mass changes, known as glacial isostatic adjustment (GIA), has received renewed attention in the recent decade thanks to the Gravity Recovery and Climate Experiment (GRACE) satellite mission. GRACE measures Earth's gravity field every 30 days, but cannot partition surface mass changes, such as present-day cryospheric or hydrological change, from changes within the solid Earth, notably due to GIA. If GIA cannot be accurately modelled in a particular region the accuracy of GRACE estimates of ice mass balance for that region is compromised. This lecture will focus on Antarctica, where models of GIA are hugely uncertain due to weak constraints on ice loading history and Earth structure. Over the last years, however, there has been a step-change in our ability to measure GIA uplift with the Global Positioning System (GPS), including widespread deployments of permanent GPS receivers as part of the International Polar Year (IPY) POLENET project. I will particularly focus on the Antarctic GPS velocity field and the confounding effect of elastic rebound due to present-day ice mass changes, and then describe the construction and calibration of a new Antarctic GIA model for application to GRACE data, as well as highlighting areas where further critical developments are required.
Theoretical models for polarimetric radar clutter
NASA Technical Reports Server (NTRS)
Borgeaud, M.; Shin, R. T.; Kong, J. A.
1987-01-01
The Mueller matrix and polarization covariance matrix are described for polarimetric radar systems. The clutter is modeled by a layer of random permittivity described by a three-dimensional correlation function with a variance and horizontal and vertical correlation lengths. This model is applied, using wave theory with Born approximations carried to second order, to find the backscattering elements of the polarimetric matrices. It is found that 8 of the 16 elements of the Mueller matrix are identically zero, corresponding to a covariance matrix with four zero elements. Theoretical predictions are matched with experimental data for vegetation fields.
Theoretical model of Saturn's kilometric radiation spectrum
NASA Astrophysics Data System (ADS)
Galopeau, P.; Zarka, P.; Le Queau, D.
1989-07-01
A model was developed, which allowed the theoretical derivation of an envelope for the average spectrum of the Saturnian kilometric radiation (SKR), assuming that the SKR is generated by the cyclotron maser instability. The theoretical SKR spectrum derived was found to exhibit the same spectral features as the observed mean spectra. Namely, the overall shape of both calculated and measured spectra are similar, with the fluxes peaking at frequencies of 100,000 Hz and decreasing abruptly at high frequencies, and more slowly at lower frequencies. The calculated spectral intensity levels exceed the most intense observed intensities by up to 1 order of magnitude, suggesting that the SKR emission is only marginally saturated by nonlinear processes.
An accurate and simple quantum model for liquid water.
Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A
2006-11-14
The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics
Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.
Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong
2015-09-01
Accuracy, validity, and the lack of information relating dental root and jaw are key problems in tooth arrangement technology. This paper describes a newly developed virtual, personalized, and accurate tooth arrangement system based on complete information about the dental root and skull. Firstly, a feature-constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, reference planes and lines are defined from anatomical reference points; the mathematical model for matching tooth patterns and the principles of rigid-body pose transformation are fully utilized, and the positional relation between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system can arrange abnormal teeth very well and is sufficiently flexible. The positional relation between root and jaw is favorable. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.
Theoretical Modeling of Prion Disease Incubation
Kulkarni, R. V.; Slepoy, A.; Singh, R. R. P.; Cox, D. L.; Pázmándi, F.
2003-01-01
We apply a theoretical aggregation model to laboratory and epidemiological prion disease incubation time data. In our model, slow growth of misfolded protein aggregates from small initial seeds controls the latent or lag phase; aggregate fissioning and subsequent spreading leads to an exponential growth phase. Our model accounts for the striking reproducibility of incubation times for high dose inoculation of lab animals. In particular, low dose yields broad incubation time distributions, and increasing dose narrows distributions and yields sharply defined onset times. We also explore how incubation time statistics depend upon aggregate morphology. We apply our model to fit the experimental dose-incubation curves for distinct strains of scrapie, and explain logarithmic variation at high dose and deviations from logarithmic behavior at low dose. We use this to make testable predictions for infectivity time-course experiments. PMID:12885622
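A toy discretization in the spirit of this aggregation model (not the authors' exact equations; the growth rate, fission size, and seed counts are illustrative) shows how slow growth from small seeds produces a lag phase, fissioning produces exponential growth in aggregate number, and a higher dose of seeds amplifies the aggregate count:

```python
def simulate_aggregates(n_seeds, growth_rate, fission_size, steps):
    """Toy seeded-aggregation dynamics.

    Each aggregate grows by growth_rate monomers per step; once it
    reaches fission_size it splits into two halves. Growth from small
    seeds gives a latent (lag) phase with constant aggregate count;
    repeated fissioning then gives exponential growth in the count.
    Returns the aggregate count after each step.
    """
    sizes = [1.0] * n_seeds
    history = []
    for _ in range(steps):
        grown = [s + growth_rate for s in sizes]
        sizes = []
        for s in grown:
            if s >= fission_size:
                sizes.extend([s / 2.0, s / 2.0])  # fission event
            else:
                sizes.append(s)
        history.append(len(sizes))
    return history
```

Defining the incubation time as the step at which the aggregate count crosses a fixed threshold reproduces the qualitative dose dependence described above: a larger `n_seeds` (higher dose) reaches the threshold sooner and with less variability.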
Theoretical models for supercritical fluid extraction.
Huang, Zhen; Shi, Xiao-Han; Jiang, Wei-Juan
2012-08-10
For the proper design of supercritical fluid extraction processes, it is essential to have a sound knowledge of the mass transfer mechanism of the extraction process and its appropriate mathematical representation. In this paper, the advances and applications of kinetic models for describing supercritical fluid extraction from various solid matrices are presented. The theoretical models reviewed here include the hot-ball diffusion, broken and intact cell, shrinking core, and some relatively simple models. Mathematical representations of these models are interpreted in detail, together with their assumptions, parameter identification, and application examples. Extraction of the analyte solute from the solid matrix by a supercritical fluid involves dissolution of the analyte from the solid, diffusion of the analyte in the matrix, and its transport into the bulk supercritical fluid. Mechanisms involved in a mass transfer model are discussed in terms of external mass transfer resistance, internal mass transfer resistance, solute-solid interactions, and axial dispersion. Correlations of the external mass transfer coefficient and the axial dispersion coefficient with certain dimensionless numbers are also discussed. Among these models, the broken and intact cell model appears to be the most relevant, as it provides a realistic description of the plant material structure for better understanding the mass-transfer kinetics, and it has therefore been widely employed for modeling supercritical fluid extraction of natural materials. PMID:22560346
A Theoretical Model of Water and Trade
NASA Astrophysics Data System (ADS)
Dang, Q.; Konar, M.; Reimer, J.; Di Baldassarre, G.; Lin, X.; Zeng, R.
2015-12-01
Water is an essential factor of agricultural production. Agriculture, in turn, is globalized through the trade of food commodities. In this paper, we develop a theoretical model of a small open economy that explicitly incorporates water resources. The model emphasizes three tradeoffs involving water decision-making that are important yet not always considered within the existing literature. One tradeoff focuses on competition for water among different sectors when there is a shock to one of the sectors only, such as trade liberalization and consequent higher demand for the product. A second tradeoff concerns the possibility that there may or may not be substitutes for water, such as increased use of sophisticated irrigation technology as a means to increase crop output in the absence of higher water availability. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using products. A number of propositions are proven. For example, while trade liberalization tends to increase water use, increased pressure on water supplies can be moderated by way of a tax that is derivable from observable economic phenomena. Another example is that increased riskiness of water availability tends to cause water users to use less water than would be the case under profit maximization. These theoretical model results generate hypotheses that can be tested empirically in future work.
A theoretical model of water and trade
NASA Astrophysics Data System (ADS)
Dang, Qian; Konar, Megan; Reimer, Jeffrey J.; Di Baldassarre, Giuliano; Lin, Xiaowen; Zeng, Ruijie
2016-03-01
Water is an essential input for agricultural production. Agriculture, in turn, is globalized through the trade of agricultural commodities. In this paper, we develop a theoretical model that emphasizes four tradeoffs involving water-use decision-making that are important yet not always considered in a consistent framework. One tradeoff focuses on competition for water among different economic sectors. A second tradeoff examines the possibility that certain types of agricultural investments can offset water use. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using commodities. The fourth tradeoff concerns how variability in water supplies influences farmer decision-making. We show conditions under which trade liberalization affects water use. Two policy scenarios to reduce water use are evaluated. First, we derive a target tax that reduces water use without offsetting the gains from trade liberalization, although important tradeoffs exist between economic performance and resource use. Second, we show how subsidization of water-saving technologies can allow producers to use less water without reducing agricultural production, making such subsidization an indirect means of influencing water use decision-making. Finally, we outline conditions under which riskiness of water availability affects water use. These theoretical model results generate hypotheses that can be tested empirically in future work.
Theoretical Models of the Galactic Bulge
NASA Astrophysics Data System (ADS)
Shen, Juntai; Li, Zhao-Yu
Near-infrared images from the COBE satellite presented the first clear evidence that our Milky Way galaxy contains a boxy-shaped bulge. Recent years have witnessed a gradual paradigm shift in our understanding of the formation and evolution of the Galactic bulge. Bulges were commonly believed to form in the dynamical violence of galaxy mergers. However, it has become increasingly clear that the main body of the Milky Way bulge is not a classical bulge made by previous major mergers; instead, it appears to be a bar seen somewhat end-on. The Milky Way bar can form naturally from a precursor disc and thicken vertically through the internal firehose/buckling instability, giving rise to the boxy appearance. This picture is supported by many lines of evidence, including the asymmetric parallelogram shape, the strong cylindrical rotation (i.e., nearly constant rotation regardless of height above the disc plane), the existence of an intriguing X-shaped structure in the bulge, and perhaps the metallicity gradients. We review the major theoretical models and techniques used to understand the Milky Way bulge. Despite recent theoretical progress, a complete bulge formation model that explains the full kinematics and metallicity distribution is still lacking. Upcoming large surveys are expected to shed new light on the formation history of the Galactic bulge.
Electron microscopy and theoretical modeling of cochleates.
Nagarsekar, Kalpa; Ashtikar, Mukul; Thamm, Jana; Steiniger, Frank; Schacher, Felix; Fahr, Alfred; May, Sylvio
2014-11-11
Cochleates are self-assembled cylindrical condensates that consist of large rolled-up lipid bilayer sheets and represent a novel platform for oral and systemic delivery of therapeutically active medicinal agents. With few preceding investigations, the physical basis of cochleate formation has remained largely unexplored. We address the structure and stability of cochleates in a combined experimental/theoretical approach. Employing different electron microscopy methods, we provide evidence for cochleates consisting of phosphatidylserine and calcium to be hollow tubelike structures with a well-defined constant lamellar repeat distance and statistically varying inner and outer radii. To rationalize the relation between inner and outer radii, we propose a theoretical model. Based on the minimization of a phenomenological free energy expression containing a bending, adhesion, and frustration contribution, we predict the optimal tube dimensions of a cochleate and estimate ratios of material constants for cochleates consisting of phosphatidylserines with varied hydrocarbon chain structures. Knowing and understanding these ratios will ultimately benefit the successful formulation of cochleates for drug delivery applications.
Explaining Facial Imitation: A Theoretical Model
Meltzoff, Andrew N.; Moore, M. Keith
2013-01-01
A long-standing puzzle in developmental psychology is how infants imitate gestures they cannot see themselves perform (facial gestures). Two critical issues are: (a) the metric infants use to detect cross-modal equivalences in human acts and (b) the process by which they correct their imitative errors. We address these issues in a detailed model of the mechanisms underlying facial imitation. The model can be extended to encompass other types of imitation. The model capitalizes on three new theoretical concepts. First, organ identification is the means by which infants relate parts of their own bodies to corresponding ones of the adult’s. Second, body babbling (infants’ movement practice gained through self-generated activity) provides experience mapping movements to the resulting body configurations. Third, organ relations provide the metric by which infant and adult acts are perceived in commensurate terms. In imitating, infants attempt to match the organ relations they see exhibited by the adults with those they feel themselves make. We show how development restructures the meaning and function of early imitation. We argue that important aspects of later social cognition are rooted in the initial cross-modal equivalence between self and other found in newborns. PMID:24634574
A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em
2010-05-19
Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330
3ARM: A Fast, Accurate Radiative Transfer Model for Use in Climate Models
NASA Technical Reports Server (NTRS)
Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.
1996-01-01
A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.
Theoretical description of phase coexistence in model C60.
Costa, D; Pellicane, G; Caccamo, C; Schöll-Paschinger, E; Kahl, G
2003-08-01
We have investigated the phase diagram of a pair interaction model of C60 fullerene [L. A. Girifalco, J. Phys. Chem. 96, 858 (1992)], in the framework provided by two integral equation theories of the liquid state, namely, the modified hypernetted chain (MHNC) implemented under a global thermodynamic consistency constraint, and the self-consistent Ornstein-Zernike approximation (SCOZA), and by a perturbation theory (PT) with various degrees of refinement, for the free energy of the solid phase. We present an extended assessment of such theories as set against a recent Monte Carlo study of the same model [D. Costa, G. Pellicane, C. Caccamo, and M. C. Abramo, J. Chem. Phys. 118, 304 (2003)]. We have compared the theoretical predictions with the corresponding simulation results for several thermodynamic properties such as the free energy, the pressure, and the internal energy. Then we have determined the phase diagram of the model, by using either the SCOZA, the MHNC, or the PT predictions for one of the coexisting phases, and the simulation data for the other phase, in order to separately ascertain the accuracy of each theory. It turns out that the overall appearance of the phase portrait is reproduced fairly well by all theories, with remarkable accuracy as for the melting line and the solid-vapor equilibrium. All theories show a more or less pronounced discrepancy with the simulated fluid-solid coexistence pressure, above the triple point. The MHNC and SCOZA results for the liquid-vapor coexistence, as well as for the corresponding critical points, are quite accurate; the SCOZA tends to underestimate the density corresponding to the freezing line. All results are discussed in terms of the basic assumptions underlying each theory. We have then selected the MHNC for the fluid and the first-order PT for the solid phase, as the most accurate tools to investigate the phase behavior of the model in terms of purely theoretical approaches. It emerges that the use of
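The Girifalco (1992) pair potential underlying the model above has a simple closed form. The sketch below reproduces that functional form, but with round stand-in constants for alpha and beta (not the published values), only to illustrate the short-range repulsion and attractive well:

```python
import numpy as np

def girifalco(r, sigma=7.1, alpha=0.075, beta=1.0e-4):
    """Girifalco-form pair potential between two C60 molecules.

    r and sigma in Angstrom; sigma is the cage diameter (2a ~ 7.1 A),
    s = r/sigma. The functional form follows Girifalco (1992); alpha and
    beta here are illustrative stand-ins, not the published constants.
    """
    s = r / sigma
    attractive = (1.0 / (s * (s - 1.0) ** 3) + 1.0 / (s * (s + 1.0) ** 3)
                  - 2.0 / s ** 4)
    repulsive = (1.0 / (s * (s - 1.0) ** 9) + 1.0 / (s * (s + 1.0) ** 9)
                 - 2.0 / s ** 10)
    return -alpha * attractive + beta * repulsive

r = np.linspace(9.0, 16.0, 1401)   # centre-to-centre separations (A)
u = girifalco(r)
i_min = int(np.argmin(u))          # location of the attractive well
```

The short, deep well relative to the molecular diameter is what makes the C60 phase diagram delicate (a narrow or absent liquid range), which is why the abstract's comparison of MHNC, SCOZA, and perturbation theory against simulation is a demanding test.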
Information-Theoretic Perspectives on Geophysical Models
NASA Astrophysics Data System (ADS)
Nearing, Grey
2016-04-01
practice of science (except by Gong et al., 2013, whose fundamental insight is the basis for this talk), and here I offer two examples of practical methods that scientists might use to approximately measure ontological information. I place this practical discussion in the context of several recent and high-profile experiments that have found that simple out-of-sample statistical models typically (vastly) outperform our most sophisticated terrestrial hydrology models. I offer some perspective on several open questions about how to use these findings to improve our models and understanding of these systems. Cartwright, N. (1983) How the Laws of Physics Lie. New York, NY: Cambridge Univ Press. Clark, M. P., Kavetski, D. and Fenicia, F. (2011) 'Pursuing the method of multiple working hypotheses for hydrological modeling', Water Resources Research, 47(9). Cover, T. M. and Thomas, J. A. (1991) Elements of Information Theory. New York, NY: Wiley-Interscience. Cox, R. T. (1946) 'Probability, frequency and reasonable expectation', American Journal of Physics, 14, pp. 1-13. Csiszár, I. (1972) 'A Class of Measures of Informativity of Observation Channels', Periodica Mathematica Hungarica, 2(1), pp. 191-213. Davies, P. C. W. (1990) 'Why is the physical world so comprehensible', Complexity, entropy and the physics of information, pp. 61-70. Gong, W., Gupta, H. V., Yang, D., Sricharan, K. and Hero, A. O. (2013) 'Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach', Water Resources Research, 49(4), pp. 2253-2273. Jaynes, E. T. (2003) Probability Theory: The Logic of Science. New York, NY: Cambridge University Press. Nearing, G. S. and Gupta, H. V. (2015) 'The quantity and quality of information in hydrologic models', Water Resources Research, 51(1), pp. 524-538. Popper, K. R. (2002) The Logic of Scientific Discovery. New York: Routledge. Van Horn, K. S. 
(2003) 'Constructing a logic of plausible inference: a guide to Cox's theorem'.
Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces
NASA Astrophysics Data System (ADS)
Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.
2016-06-01
We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 10 h Mpc-1, and we present theoretically motivated extensions to cover non-minimally coupled scalar fields, massive neutrinos and Vainshtein screened modified gravity models that result in few per cent accurate power spectra for k < 10 h Mpc-1. For chameleon screened models, we achieve only 10 per cent accuracy for the same range of scales. Finally, we use our halo model to investigate degeneracies between different extensions to the standard cosmological model, finding that the impact of baryonic feedback on the non-linear matter power spectrum can be considered independently of modified gravity or massive neutrino extensions. In contrast, considering the impact of modified gravity and massive neutrinos independently results in biased estimates of power at the level of 5 per cent at scales k > 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high-frequency electromechanical impedance (EMI) technique employing smart piezoelectric materials has proved versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique, and various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. To date, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of the materials involved in the lead zirconate titanate (PZT)-structure interaction of the EMI technique, to investigate their effect on the acquired admittance signatures. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty of determining the damping-related coefficients and the stiffness of the bonding layer. In this study, using a hysteretic damping model in place of the Rayleigh damping used by most researchers in this field, together with an updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future research and application in this subject area, such as modeling, design, and optimization.
Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.
Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit
2015-05-01
A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m(2)) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m(2)) and 4 temperatures (10-30 °C), yielding an accuracy of ± 15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full-scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model herein validated also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of only requiring one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step for refining feasibility assessments of algae biotechnologies. PMID:25502920
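The coupled light-temperature dependence described above can be sketched generically as a saturating light response multiplied by a cardinal temperature (CTMI, Rosso-type) factor. This is not the authors' validated parameterization; every constant below is hypothetical:

```python
def productivity(I, T, P_max=1.0, K_I=100.0,
                 T_min=2.0, T_opt=27.0, T_max=42.0):
    """Illustrative light x temperature productivity model (arbitrary units).

    Monod-type saturating light response multiplied by the cardinal
    temperature model (CTMI). A generic sketch with hypothetical
    constants, not the parameterization validated in the paper.
    """
    light = I / (I + K_I)          # saturates at high irradiance
    if not (T_min < T < T_max):
        return 0.0                 # no growth outside cardinal temperatures
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                             - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
    temp = num / den               # equals 1 exactly at T = T_opt
    return P_max * light * temp

# Productivity rises with irradiance and peaks near T_opt:
p_cold = productivity(200.0, 10.0)
p_opt = productivity(200.0, 27.0)
p_hot = productivity(200.0, 40.0)
```

A multiplicative structure like this is the simplest way to capture the paper's central point: ignoring the temperature factor can badly overestimate productivity away from the optimum, even at favorable irradiance.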
SMARTIES: user-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics: calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field and near-field calculations, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements, which may be calculated more accurately than with competing codes.
Models in biology: 'accurate descriptions of our pathetic thinking'.
Gunawardena, Jeremy
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as 'predictive', in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
Clarifying types of uncertainty: when are models accurate, and uncertainties small?
Cox, Louis Anthony Tony
2011-10-01
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.
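The first claim above, that increasing uncertainty in model inputs may reduce uncertainty in model outputs, is easy to demonstrate when inputs are correlated and partially cancel. A Monte Carlo sketch with hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def output_variance(sigma2, sigma1=1.0, rho=-0.9, n=200_000):
    """Monte Carlo estimate of Var(X1 + X2) for jointly normal inputs.

    Analytically Var = sigma1^2 + sigma2^2 + 2*rho*sigma1*sigma2, which
    *decreases* as sigma2 grows whenever sigma2 < -rho*sigma1 (here 0.9).
    """
    cov = [[sigma1 ** 2, rho * sigma1 * sigma2],
           [rho * sigma1 * sigma2, sigma2 ** 2]]
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return x.sum(axis=1).var()

v_small = output_variance(0.1)   # analytic value: 0.83
v_large = output_variance(0.8)   # analytic value: 1 + 0.64 - 1.44 = 0.20
```

Widening the second input's distribution from 0.1 to 0.8 shrinks the output variance by a factor of about four, because the enlarged input uncertainty cancels more of the first input's variability.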
Accurate Model Selection of Relaxed Molecular Clocks in Bayesian Phylogenetics
Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J.; Suchard, Marc A.; Lemey, Philippe
2013-01-01
Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike’s information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets. PMID:23090976
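Stepping-stone estimation of a marginal likelihood can be illustrated on a toy conjugate-normal model, where each power posterior can be sampled exactly and the true value is analytic. This sketch mirrors the SS estimator itself, not the phylogenetic software compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1).
y = rng.normal(0.5, 1.0, size=20)
n, sy, syy = len(y), y.sum(), (y ** 2).sum()

# Analytic log marginal likelihood: y ~ N(0, I + 11^T), det(I+J) = 1+n.
log_z_true = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1.0 + n)
              - 0.5 * (syy - sy ** 2 / (1.0 + n)))

def log_lik(theta):
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * (syy - 2 * theta * sy
                                                 + n * theta ** 2)

# Stepping stone: the power posterior p_b ∝ L^b * prior is conjugate here,
# N(b*sy/(1 + b*n), 1/(1 + b*n)), so each rung can be sampled exactly.
# log Z = sum_k log E_{b_k}[ L^(b_{k+1} - b_k) ].
betas = np.linspace(0.0, 1.0, 33)
log_z_ss = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    var = 1.0 / (1.0 + b0 * n)
    theta = rng.normal(b0 * sy * var, np.sqrt(var), size=5000)
    ll = (b1 - b0) * log_lik(theta)
    log_z_ss += np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()  # log-sum-exp
```

In real phylogenetic applications the power posteriors must themselves be explored by MCMC, which is where the computational cost of PS/SS relative to the HME and AICM arises.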
Assessing a Theoretical Model on EFL College Students
ERIC Educational Resources Information Center
Chang, Yu-Ping
2011-01-01
This study aimed to (1) integrate relevant language learning models and theories, (2) construct a theoretical model of college students' English learning performance, and (3) assess the model fit between empirically observed data and the theoretical model proposed by the researchers of this study. Subjects of this study were 1,129 Taiwanese EFL…
How prayer heals: a theoretical model.
Levin, J S
1996-01-01
This article presents a theoretical model that outlines various possible explanations for the healing effects of prayer. Four classes of mechanisms are defined on the basis of whether healing has naturalistic or supernatural origins and whether it operates locally or nonlocally. Through this framework, most of the currently proposed hypotheses for understanding absent healing and other related phenomena-hypotheses that invoke such concepts as subtle energy, psi, consciousness, morphic fields, and extended mind-are shown to be no less naturalistic than the Newtonian, mechanistic forces of allopathic biomedicine so often derided for their materialism. In proposing that prayer may heal through nonlocal means according to mechanisms and theories proposed by the new physics, Dossey is almost alone among medical scholars in suggesting the possible limitations and inadequacies of hypotheses based on energies, forces, and fields. Yet even such nonlocal effects can be conceived of as naturalistic; that is, they are explained by physical laws that may be unbelievable or unfamiliar to most physicians but that are nonetheless becoming recognized as operant laws of the natural universe. The concept of the supernatural, however, is something altogether different, and is, by definition, outside of or beyond nature. Herein may reside an either wholly or partly transcendent Creator-God who is believed by many to heal through means that transcend the laws of the created universe, both its local and nonlocal elements, and that are thus inherently inaccessible to and unknowable by science. Such an explanation for the effects of prayer merits consideration and, despite its unprovability by medical science, should not be dismissed out of hand. PMID:8795874
Theoretical and numerical study of axisymmetric lattice Boltzmann models
NASA Astrophysics Data System (ADS)
Huang, Haibo; Lu, Xi-Yun
2009-07-01
The forcing term in the lattice Boltzmann equation (LBE) is usually used to mimic the Navier-Stokes equations with a body force. To derive an axisymmetric model, forcing terms are incorporated into the two-dimensional (2D) LBE to mimic the additional axisymmetric contributions to the 2D Navier-Stokes equations in cylindrical coordinates. Many axisymmetric lattice Boltzmann D2Q9 models have been obtained through the Chapman-Enskog expansion to recover the 2D Navier-Stokes equations in cylindrical coordinates [I. Halliday et al., Phys. Rev. E 64, 011208 (2001); K. N. Premnath and J. Abraham, Phys. Rev. E 71, 056706 (2005); T. S. Lee, H. Huang, and C. Shu, Int. J. Mod. Phys. C 17, 645 (2006); T. Reis and T. N. Phillips, Phys. Rev. E 75, 056703 (2007); J. G. Zhou, Phys. Rev. E 78, 036701 (2008)]. The theoretical differences between them are discussed in detail. Numerical studies were also carried out by simulating two different flows to compare the models' accuracy and τ sensitivity. It is found that all these models obtain accurate results with second-order spatial accuracy. However, model C [J. G. Zhou, Phys. Rev. E 78, 036701 (2008)] is the most stable in terms of τ sensitivity. It is also found that if the fluid density is defined in its usual way and is not directly involved in the source terms, the lattice Boltzmann model appears more stable.
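For reference, the standard (non-axisymmetric) D2Q9 BGK scheme that these axisymmetric models extend with forcing terms looks as follows; on a periodic domain, collision and streaming conserve mass to machine precision. A minimal sketch, not any of the specific models compared above:

```python
import numpy as np

# D2Q9 lattice: discrete velocities e_i and weights w_i.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    """Standard second-order D2Q9 equilibrium distribution."""
    eu = e[:, 0, None, None] * ux + e[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * usq)

nx = ny = 32
tau = 0.8                                   # BGK relaxation time
rho = 1.0 + 0.01 * np.sin(2 * np.pi * np.arange(nx) / nx)[None, :] \
      * np.ones((ny, 1))                    # small density perturbation
f = equilibrium(rho, np.zeros((ny, nx)), np.zeros((ny, nx)))
mass0 = f.sum()

for _ in range(100):
    rho = f.sum(axis=0)                     # macroscopic moments
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
    for i in range(9):                              # periodic streaming
        f[i] = np.roll(f[i], shift=(e[i, 1], e[i, 0]), axis=(0, 1))
```

The axisymmetric models in the abstract add position-dependent source terms to this update so that the 2D scheme recovers the cylindrical-coordinate Navier-Stokes equations; how those sources involve the density is precisely the stability issue noted above.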
Theoretical and computer models of detonation in solid explosives
Tarver, C.M.; Urtiew, P.A.
1997-10-01
Recent experimental and theoretical advances in understanding energy transfer and chemical kinetics have led to improved models of detonation waves in solid explosives. The Nonequilibrium Zeldovich-von Neumann-Döring (NEZND) model is supported by picosecond laser experiments and molecular dynamics simulations of the multiphonon up-pumping and internal vibrational energy redistribution (IVR) processes by which the unreacted explosive molecules are excited to the transition state(s) preceding reaction behind the leading shock front(s). High temperature, high density transition state theory calculates the induction times measured by laser interferometric techniques. Exothermic chain reactions form product gases in highly excited vibrational states, which have been demonstrated to rapidly equilibrate via supercollisions. Embedded gauge and Fabry-Perot techniques measure the rates of reaction product expansion as thermal and chemical equilibrium is approached. Detonation reaction zone lengths in carbon-rich condensed phase explosives depend on the relatively slow formation of solid graphite or diamond. The Ignition and Growth reactive flow model based on pressure dependent reaction rates and Jones-Wilkins-Lee (JWL) equations of state has reproduced this nanosecond time resolved experimental data and thus has yielded accurate average reaction zone descriptions in one-, two- and three-dimensional hydrodynamic code calculations. The next generation reactive flow model requires improved equations of state and temperature dependent chemical kinetics. Such a model is being developed for the ALE3D hydrodynamic code, in which heat transfer and Arrhenius kinetics are intimately linked to the hydrodynamics.
Towards an Accurate Performance Modeling of Parallel Sparse Factorization
Grigori, Laura; Li, Xiaoye S.
2006-05-26
We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
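Models of this family typically compose per-phase costs from a few machine parameters. The sketch below is a generic latency-bandwidth (alpha-beta) cost function of the kind described; the function name and all parameter values are illustrative, not taken from the paper.

```python
def panel_factor_time(flops, mem_words, n_msgs, msg_words,
                      flop_rate, mem_bw, alpha, beta):
    """Estimate the time of one step of a distributed factorization as
    compute time + memory-traffic time + network time (per-message
    latency alpha, per-word transfer cost beta). Illustrative sketch."""
    t_comp = flops / flop_rate        # floating-point work
    t_mem  = mem_words / mem_bw       # words moved through the memory system
    t_net  = n_msgs * alpha + msg_words * beta   # interconnect cost
    return t_comp + t_mem + t_net

# Example with invented machine parameters (1 Gflop/s, 100 Mword/s memory,
# 10 us latency, 10 ns/word network):
t = panel_factor_time(1e9, 1e8, 100, 1e6, 1e9, 1e8, 1e-5, 1e-8)
```

Fitting such a model to measured runs is what lets it separate compute-bound from communication-bound regimes.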
Inflation model building with an accurate measure of e-folding
NASA Astrophysics Data System (ADS)
Chongchitnan, Sirichai
2016-08-01
It has become standard practice to take the logarithmic growth of the scale factor as a measure of the amount of inflation, despite the well-known fact that this is only an approximation for the true amount of inflation required to solve the horizon and flatness problems. The aim of this work is to show how this approximation can be avoided entirely using an alternative framework for inflation model building. We show that, using the inverse comoving Hubble radius, ℋ = aH, as the key dynamical parameter, the correct number of e-foldings arises naturally as a measure of inflation. As an application, we present an interesting model in which the entire inflationary dynamics can be solved analytically and exactly and which, in special cases, reduces to the familiar class of power-law models.
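The distinction can be made concrete for power-law inflation, a ∝ t^p with H = p/t: the scale-factor count gives p ln(t2/t1), while the shrinkage of the comoving Hubble radius gives (p − 1) ln(t2/t1). A small numerical sketch (the power-law example is ours, chosen because both measures are closed-form):

```python
import math

def efolds_scale_factor(p, t1, t2):
    """Standard measure: N = ln[a(t2)/a(t1)] for power-law a = t^p."""
    return p * math.log(t2 / t1)

def efolds_hubble_radius(p, t1, t2):
    """Shrinkage of the comoving Hubble radius: for a = t^p, aH = p t^(p-1),
    so ln[(aH)_2/(aH)_1] = (p - 1) ln(t2/t1)."""
    return (p - 1) * math.log(t2 / t1)

# For p >> 1 the two measures agree; for mildly accelerated expansion
# (p only slightly above 1) the scale-factor count overstates the
# inflation achieved in the horizon-problem sense.
```

For p = 2 the scale-factor measure is exactly twice the Hubble-radius measure, which is the kind of discrepancy the paper's framework removes.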
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction for highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Magnetic field models of nine CP stars from "accurate" measurements
NASA Astrophysics Data System (ADS)
Glagolevskij, Yu. V.
2013-01-01
The dipole models of the magnetic fields of nine CP stars are constructed based on measurements of metal lines taken from the literature, performed by the LSD method with an accuracy of 10-80 G. The model parameters are compared with the parameters obtained for the same stars from hydrogen line measurements. For six out of nine stars the same type of structure was obtained. Some parameters, such as the field strength at the poles B_p and the average surface magnetic field B_s, differ considerably in some stars due to differences in the amplitudes of the phase dependences B_e(Φ) and B_s(Φ) obtained by different authors. It is noted that a significant increase in measurement accuracy has little effect on the modelling of the large-scale structures of the field. By contrast, it is more important to construct the shape of the phase dependence from a fairly large number of field measurements, evenly distributed over the rotation period phases. It is concluded that the Zeeman component measurement methods have a strong effect on the shape of the phase dependence, and that magnetic field measurements based on hydrogen lines are preferable for modelling the large-scale structures of the field.
Accurate first principles model potentials for intermolecular interactions.
Gordon, Mark S; Smith, Quentin A; Xu, Peng; Slipchenko, Lyudmila V
2013-01-01
The general effective fragment potential (EFP) method provides model potentials, derived from first principles with no empirically fitted parameters, for any molecule. The EFP method has been interfaced with most currently used ab initio single-reference and multireference quantum mechanics (QM) methods, ranging from Hartree-Fock and coupled cluster theory to multireference perturbation theory. The most recent innovations in the EFP model have been to make the computationally expensive charge transfer term much more efficient and to interface the general EFP dispersion and exchange repulsion interactions with QM methods. Following a summary of the method and its implementation in generally available computer programs, these most recent developments are discussed.
Simulation model accurately estimates total dietary iodine intake.
Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C
2009-07-01
One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children (<5%) were at risk of intakes that were too low. In the scenario of a potential future situation using lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods come to contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
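The "deterministic plus probabilistic" combination described above can be sketched with a toy Monte Carlo: a fixed base intake from the food diary plus a random discretionary iodized-salt contribution. Every distribution, parameter name and number below is invented for illustration; none comes from the Dutch survey or the paper's model.

```python
import random

def simulate_iodine_intake(n, base_intake_ug, p_uses_iodized_salt,
                           salt_g_mean, salt_g_sd, iodine_ug_per_g_salt,
                           seed=0):
    """Toy Monte Carlo: deterministic base intake plus a probabilistic
    discretionary iodized-salt term. Illustrative parameters only."""
    rng = random.Random(seed)
    intakes = []
    for _ in range(n):
        intake = base_intake_ug
        if rng.random() < p_uses_iodized_salt:   # household uses iodized salt?
            grams = max(0.0, rng.gauss(salt_g_mean, salt_g_sd))
            intake += grams * iodine_ug_per_g_salt
        intakes.append(intake)
    return intakes

intakes = simulate_iodine_intake(10_000, base_intake_ug=120,
                                 p_uses_iodized_salt=0.6,
                                 salt_g_mean=2.0, salt_g_sd=0.5,
                                 iodine_ug_per_g_salt=25)
# fraction below an example reference intake of 150 ug/day
frac_low = sum(i < 150 for i in intakes) / len(intakes)
```

Running scenarios then amounts to re-running the simulation with different salt-iodine levels and comparing the low-intake fractions.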
An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion
NASA Astrophysics Data System (ADS)
Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.
2014-11-01
Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations, which is essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.
Accurate numerical solutions for elastic-plastic models [LMFBR]
Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.
1980-03-01
The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
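Both algorithm families compared in the report share the elastic-predictor step; they differ in how the return to the yield surface is performed. As a hedged illustration of the basic return-mapping idea only, the sketch below implements the textbook one-dimensional case with linear isotropic hardening; it is not the plane-stress von Mises algorithm analyzed in the report.

```python
def radial_return_1d(eps_new, eps_p, sigma_y, E, H):
    """Elastic predictor / return mapping for 1D plasticity with linear
    isotropic hardening. Returns (stress, plastic strain, yield stress).
    Textbook sketch, not the report's plane-stress algorithm."""
    sigma_trial = E * (eps_new - eps_p)      # elastic predictor
    f = abs(sigma_trial) - sigma_y           # trial yield function
    if f <= 0.0:
        return sigma_trial, eps_p, sigma_y   # step stays elastic
    dgamma = f / (E + H)                     # consistency condition
    sign = 1.0 if sigma_trial > 0 else -1.0
    sigma = sigma_trial - E * dgamma * sign  # return to the yield surface
    return sigma, eps_p + dgamma * sign, sigma_y + H * dgamma
```

The consistency solve guarantees the returned stress sits exactly on the updated yield surface, which is the property whose multi-axial analogue the contour-plot error measures in the report quantify.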
Expanding Panjabi's stability model to express movement: a theoretical model.
Hoffman, J; Gabel, P
2013-06-01
Novel theoretical models of movement have historically inspired the creation of new methods for the application of human movement. The landmark theoretical model of spinal stability by Panjabi in 1992 led to the creation of an exercise approach to spinal stability. This approach, however, was later challenged, most significantly due to a lack of favourable clinical effect. The concepts explored in this paper address and consider the deficiencies of Panjabi's model, then propose an evolution and expansion from a special model of stability to a general one of movement. It is proposed that two body-wide symbiotic elements are present within all movement systems: stability and mobility. The justification for this is derived from the observable clinical environment. It is clinically recognised that these two elements are present and identifiable throughout the body in different joints and muscles, and the neural conduction system. In order to generalise the Panjabi model of stability to include and illustrate movement, a matching parallel mobility system with the same subsystems was conceptually created. In this expanded theoretical model, the new mobility system is placed beside the existing stability system and subsystems. The ability of both stability and mobility systems to work in harmony will subsequently determine the quality of movement. Conversely, malfunction of either system, or their subsystems, will deleteriously affect all other subsystems and consequently overall movement quality. For this reason, in the rehabilitation exercise environment, focus should be placed on the simultaneous involvement of both the stability and mobility systems. It is suggested that the individual's relevant functional harmonious movements should be challenged at the highest possible level without pain or discomfort. It is anticipated that this conceptual expansion of the theoretical model of stability to one with the symbiotic inclusion of mobility will provide new understandings.
A theoretical model of atmospheric ozone depletion
NASA Astrophysics Data System (ADS)
Midya, S. K.; Jana, P. K.; Lahiri, T.
1994-01-01
A critical study on different ozone depletion and formation processes has been made and following important results are obtained: (i) From analysis it is shown that O3 concentration will decrease very minutely with time for normal atmosphere when [O], [O2] and UV-radiation remain constant. (ii) An empirical equation is established theoretically between the variation of ozone concentration and time. (iii) Special ozone depletion processes are responsible for the dramatic decrease of O3-concentration at Antarctica.
NASA Astrophysics Data System (ADS)
Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.
2015-12-01
We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃5 per cent for k ≤ 10 h Mpc^(-1) and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
NASA Astrophysics Data System (ADS)
Nielsen, Jens; d'Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-01
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
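To illustrate the role lateral interactions play in such simulations, here is a deliberately minimal lattice KMC sketch (Gillespie-type event selection on a one-dimensional ring) in which a single pairwise first-nearest-neighbour parameter modifies the desorption rate. It illustrates the generic algorithm only; the 1D geometry, names and rate laws are invented, and a production code such as Zacros handles full cluster-expansion Hamiltonians and graph-theoretical event detection.

```python
import math
import random

def kmc_adsorption(L, k_ads, k_des0, eps_nn, beta, t_end, seed=0):
    """Minimal Gillespie-style KMC for adsorption/desorption on an L-site
    ring; eps_nn is a pairwise first-nearest-neighbour interaction that
    scales the desorption rate of crowded sites. Illustrative only."""
    rng = random.Random(seed)
    occ = [0] * L                     # site occupations
    t = 0.0
    while t < t_end:
        rates, events = [], []
        for i in range(L):            # enumerate elementary events
            if occ[i] == 0:
                rates.append(k_ads)
                events.append(("ads", i))
            else:
                nn = occ[(i - 1) % L] + occ[(i + 1) % L]
                rates.append(k_des0 * math.exp(beta * eps_nn * nn))
                events.append(("des", i))
        total = sum(rates)
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for rate, (kind, i) in zip(rates, events):   # pick event with prob ∝ rate
            acc += rate
            if acc >= r:
                occ[i] = 1 if kind == "ads" else 0
                break
    return sum(occ) / L               # final coverage
```

Truncating the interaction at first nearest neighbours corresponds to the simplification the benchmark above shows can mispredict rates; longer-range cluster terms would enter through the rate expression.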
Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J
2014-01-01
The drying kinetics of thyme was analyzed under different conditions: air temperatures of between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10^(-5) and 2.12 × 10^(-4) s^(-1)). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. Additionally, the dependence of the parameters of each model on the drying temperature was determined, obtaining equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
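The Arrhenius temperature dependence reported above is easy to check numerically. The sketch below takes the abstract's 40°C diffusivity as a reference together with the 49.42 kJ/mol activation energy; the resulting 70°C estimate comes out at the same order of magnitude as, and within roughly 10% of, the abstract's upper value of 2.12 × 10^(-4) s^(-1).

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_diffusivity(D_ref, T_ref_K, Ea_J_mol, T_K):
    """Effective diffusivity at temperature T from a reference value via
    the Arrhenius relationship D = D0 * exp(-Ea / (R T))."""
    return D_ref * math.exp(-(Ea_J_mol / R) * (1.0 / T_K - 1.0 / T_ref_K))

D40 = 3.68e-5                                  # s^-1, from the abstract (40 C)
D70 = arrhenius_diffusivity(D40, 313.15, 49.42e3, 343.15)
```

The slight shortfall relative to the fitted 70°C value is expected, since the paper estimates diffusivities independently at each temperature before fitting the Arrhenius law.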
Empathy and Child Neglect: A Theoretical Model
ERIC Educational Resources Information Center
De Paul, Joaquin; Guibert, Maria
2008-01-01
Objective: To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model parental behavior aimed to satisfy a child's need is considered a helping behavior and, as a consequence, child neglect…
A theoretical model to study melting of metals under pressure
NASA Astrophysics Data System (ADS)
Kholiya, Kuldeep; Chandra, Jeewan
2015-10-01
On the basis of the thermal equation of state, a simple theoretical model is developed to study the pressure dependence of the melting temperature. The model is then applied to compute the high-pressure melting curves of 10 metals (Cu, Mg, Pb, Al, In, Cd, Zn, Au, Ag and Mn). It is found that the melting temperature is not linear in pressure and that the slope dTm/dP of the melting curve decreases continuously with increasing pressure. The results obtained with the present model are also compared with previous theoretical and experimental data. The good agreement between theoretical and experimental results supports the validity of the present model.
Information-Theoretic Perspectives on Geophysical Models
NASA Astrophysics Data System (ADS)
Nearing, Grey
2016-04-01
To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar
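One concrete way to "measure the amount and quality of information provided by models", in the spirit of this abstract, is a proper scoring rule such as the ignorance (logarithmic) score of a probabilistic prediction against observations. A minimal sketch, with invented observation values:

```python
import math

def log_score(pdf, observations):
    """Mean ignorance score (negative log density, in bits) of a predictive
    density against observations: lower means more informative."""
    return -sum(math.log2(pdf(y)) for y in observations) / len(observations)

def gaussian_pdf(mu, sigma):
    """Return a Gaussian density function with mean mu and std sigma."""
    return lambda y: math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (
        sigma * math.sqrt(2 * math.pi))

obs = [0.1, -0.2, 0.05, 0.15, -0.1]            # invented observations
sharp = log_score(gaussian_pdf(0.0, 0.2), obs)  # calibrated, sharp model
vague = log_score(gaussian_pdf(0.0, 2.0), obs)  # over-dispersed model
```

The sharp, well-calibrated density earns a lower (better) score than the vague one: the score rewards exactly the epistemological honesty about model quality that the abstract argues every model must carry.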
Improvements to Nuclear Data and Its Uncertainties by Theoretical Modeling
Danon, Yaron; Nazarewicz, Witold; Talou, Patrick
2013-02-18
This project addresses three important gaps in existing evaluated nuclear data libraries that represent a significant hindrance against highly advanced modeling and simulation capabilities for the Advanced Fuel Cycle Initiative (AFCI). This project will: Develop advanced theoretical tools to compute prompt fission neutrons and gamma-ray characteristics well beyond average spectra and multiplicity, and produce new evaluated files of U and Pu isotopes, along with some minor actinides; Perform state-of-the-art fission cross-section modeling and calculations using global and microscopic model input parameters, leading to truly predictive fission cross-section capabilities. Consistent calculations for a suite of Pu isotopes will be performed; Implement innovative data assimilation tools, which will reflect the nuclear data evaluation process much more accurately, and lead to a new generation of uncertainty quantification files. New covariance matrices will be obtained for Pu isotopes and compared to existing ones. The deployment of a fleet of safe and efficient advanced reactors that minimize radiotoxic waste and are proliferation-resistant is a clear and ambitious goal of AFCI. While in the past the design, construction and operation of a reactor were supported through empirical trials, this new phase in nuclear energy production is expected to rely heavily on advanced modeling and simulation capabilities. To be truly successful, a program for advanced simulations of innovative reactors will have to develop advanced multi-physics capabilities, to be run on massively parallel supercomputers, and to incorporate adequate and precise underlying physics. And all these areas have to be developed simultaneously to achieve those ambitious goals. Of particular interest are reliable fission cross-section uncertainty estimates (including important correlations) and evaluations of prompt fission neutrons and gamma-ray spectra and uncertainties.
Toward a Theoretical Model of Evaluation Utilization.
ERIC Educational Resources Information Center
Johnson, R. Burke
1998-01-01
A metamodel of evaluation utilization was developed from implicit and explicit process models and ideas developed in recent research. The model depicts evaluation use as occurring in an internal environment situated in an external environment. Background variables, international or social psychological variables, and evaluation use variables are…
A Detection-Theoretic Model of Echo Inhibition
ERIC Educational Resources Information Center
Saberi, Kourosh; Petrosyan, Agavni
2004-01-01
A detection-theoretic analysis of the auditory localization of dual-impulse stimuli is described, and a model for the processing of spatial cues in the echo pulse is developed. Although for over 50 years "echo suppression" has been the topic of intense theoretical and empirical study within the hearing sciences, only a rudimentary understanding of…
A Theoretical Framework for Physics Education Research: Modeling Student Thinking
ERIC Educational Resources Information Center
Redish, Edward F.
2004-01-01
Education is a goal-oriented field. But if we want to treat education scientifically so we can accumulate, evaluate, and refine what we learn, then we must develop a theoretical framework that is strongly rooted in objective observations and through which different theoretical models of student thinking can be compared. Much that is known in the…
Theoretical analysis and modeling for nanoelectronics
NASA Astrophysics Data System (ADS)
Baccarani, Giorgio; Gnani, Elena; Gnudi, Antonio; Reggiani, Susanna
2016-11-01
In this paper we review the evolution of Microelectronics and its transformation into Nanoelectronics, following the predictions of Moore's law, and some of the issues related with this evolution. Next, we discuss the requirements of device modeling and the solutions proposed throughout the years to address the physical effects related with an extreme device miniaturization, such as hot-electron effects, band splitting into multiple sub-bands, quasi-ballistic transport and electron tunneling. The most important physical models are shortly highlighted, and a few simulation results of heterojunction TFETs are reported and discussed.
Theoretical Model for Nanoporous Carbon Supercapacitors
Sumpter, Bobby G; Meunier, Vincent; Huang, Jingsong
2008-01-01
The unprecedented anomalous increase in capacitance of nanoporous carbon supercapacitors at pore sizes smaller than 1 nm [Science 2006, 313, 1760] challenges the long-held presumption that pores smaller than the size of solvated electrolyte ions do not contribute to energy storage. We propose a heuristic model to replace the commonly used model for an electric double-layer capacitor (EDLC) on the basis of an electric double-cylinder capacitor (EDCC) for mesopores (2-50 nm pore size), which becomes an electric wire-in-cylinder capacitor (EWCC) for micropores (<2 nm pore size). Our analysis of the available experimental data in the micropore regime is confirmed by first-principles density functional theory calculations and reveals significant curvature effects for carbon capacitance. The EDCC (and/or EWCC) model allows the supercapacitor properties to be correlated with pore size, specific surface area, Debye length, electrolyte concentration and dielectric constant, and solute ion size. The new model not only explains the experimental data, but also offers a practical direction for the optimization of the properties of carbon supercapacitors through experiments.
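The geometric content of the EDLC/EDCC/EWCC picture can be written down compactly. The per-area capacitances below are the forms commonly quoted for this heuristic model (b: pore radius, d: double-layer thickness, a0: effective ion radius); treat them as a sketch of the scaling, not a substitute for the paper's fitted expressions.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def c_edlc(eps_r, d):
    """Planar double-layer capacitor: C/A = eps_r * eps0 / d."""
    return eps_r * EPS0 / d

def c_edcc(eps_r, b, d):
    """Electric double-cylinder capacitor (mesopores): counter-ion layer
    at radius b - d inside a pore of radius b."""
    return eps_r * EPS0 / (b * math.log(b / (b - d)))

def c_ewcc(eps_r, b, a0):
    """Electric wire-in-cylinder capacitor (micropores): an ion 'wire' of
    radius a0 along the axis of a pore of radius b."""
    return eps_r * EPS0 / (b * math.log(b / a0))
```

In the large-pore limit b >> d, the EDCC expression reduces to the planar EDLC formula, while at small b the logarithmic factors produce the curvature effects the abstract describes.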
Theoretical Tinnitus Framework: A Neurofunctional Model.
Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C B; Sani, Siamak S; Ekhtiari, Hamed; Sanchez, Tanit G
2016-01-01
Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the "sourceless" sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
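The 1D construction described above can be sketched in a few lines. This minimal example covers only the baseline case of uniformly random points, where each Voronoi cell is bounded by the midpoints between neighbouring points; it does not implement the paper's fragmentation kernel or its control parameters.

```python
import random

def voronoi_1d_sizes(n_points, length=1.0, seed=0):
    """Sizes of the 1D Voronoi cells generated by uniformly random points
    on a segment. Interior cell boundaries are midpoints between
    neighbouring points; the segment ends close the two outer cells."""
    rng = random.Random(seed)
    pts = sorted(rng.uniform(0, length) for _ in range(n_points))
    bounds = [0.0] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [length]
    return [hi - lo for lo, hi in zip(bounds, bounds[1:])]

sizes = voronoi_1d_sizes(1000)
```

By construction the cells tile the segment exactly, so the sizes sum to the segment length; a histogram of `sizes` approximates the cell-size distribution the model studies.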
Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle
2016-04-01
In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap up to 10 nm difference (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.
Accurate mask model implementation in OPC model for 14nm nodes and beyond
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle
2015-10-01
In a previous work [1] we demonstrated that the current OPC model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. Indeed, as depicted in figure 1, an extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason, an accurate mask model for a 14-nm logic gate level has been calibrated. A model with a total RMS of 1.38 nm at mask level was obtained. 2D structures such as line-end shortening and corner rounding were well predicted using SEM pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular, as depicted in figure 2.
Theoretical Modelling of Synthetic Molecular Motors
NASA Astrophysics Data System (ADS)
Barbu, Corina; Sofo, Jorge; Crespi, Vincent
2004-03-01
Synthetic molecular motors with sizes of few nanometers offer prospects to control molecular-scale mechanical motion. Motors with electric dipoles designed into their structure can undergo conformational changes in response to an external electric field and thereby, in principle, perform mechanical work. The synthetic rotary motor of our interest consists of a molecular caltrop with a three-legged base for attachment to a substrate and a molecular shaft functionalized with a molecular rotor at the upper end. Both the static dipole and the electric field-induced dipole of the molecular rotor are relevant to producing rotation. Also, the combination of external electrostatic torque and the internal thermal fluctuations must be sufficient to overcome any rotational barriers on experimentally relevant timescales. Density functional theory calculations at the B3LYP/TZV level coupled to analytical modelling reveal the dynamical response of the motor.
Theoretical model for plasma opening switch
Baker, L.
1980-07-01
The theory of an explosive plasma switch is developed and compared with the experimental results of Pavlovskii and work at Sandia. A simple analytic model is developed, which predicts that such switches may achieve opening times of approximately 100 ns. When the switching time is limited by channel mixing it scales as t = C (m d₀)^(1/2) P₀² Pₑ^(−5/2), where m is the foil mass per unit area, d₀ the channel thickness, P₀ the channel pressure (at explosive breakout), Pₑ the explosive pressure, and C a constant of order 10 for c.g.s. units. Thus faster switching times may be achieved by minimizing foil mass and channel pressure, or by increasing explosive product pressure, with the scaling exponents as shown suggesting that changes in pressures would be more effective.
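The quoted scaling law is easy to sanity-check numerically. The helper below simply encodes the formula; the argument values in the usage note are arbitrary (c.g.s. units assumed), not measurements from the paper.

```python
import math

def switch_time(m, d0, p0, pe, C=10.0):
    """Channel-mixing-limited opening time,
    t = C * (m * d0)**(1/2) * P0**2 * Pe**(-5/2),
    in c.g.s. units with C of order 10 (per the abstract)."""
    return C * math.sqrt(m * d0) * p0**2 * pe**-2.5
```

Consistent with the exponents, halving the foil mass m shortens t by a factor of √2, while doubling the explosive pressure Pₑ shortens it by 2^(5/2) ≈ 5.7, which is why the pressure terms are the more effective lever.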
Dynamics in Higher Education Politics: A Theoretical Model
ERIC Educational Resources Information Center
Kauko, Jaakko
2013-01-01
This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…
MONA: An accurate two-phase well flow model based on phase slippage
Asheim, H.
1984-10-01
In two phase flow, holdup and pressure loss are related to interfacial slippage. A model based on the slippage concept has been developed and tested using production well data from Forties, the Ekofisk area, and flowline data from Prudhoe Bay. The model developed turned out considerably more accurate than the standard models used for comparison.
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
Testing a Theoretical Model of Immigration Transition and Physical Activity.
Chang, Sun Ju; Im, Eun-Ok
2015-01-01
The purposes of the study were to develop a theoretical model to explain the relationships between immigration transition and midlife women's physical activity and test the relationships among the major variables of the model. A theoretical model, which was developed based on transitions theory and the midlife women's attitudes toward physical activity theory, consists of 4 major variables, including length of stay in the United States, country of birth, level of acculturation, and midlife women's physical activity. To test the theoretical model, a secondary analysis with data from 127 Hispanic women and 123 non-Hispanic (NH) Asian women in a national Internet study was used. Among the major variables of the model, length of stay in the United States was negatively associated with physical activity in Hispanic women. Level of acculturation in NH Asian women was positively correlated with women's physical activity. Country of birth and level of acculturation were significant factors that influenced physical activity in both Hispanic and NH Asian women. The findings support the theoretical model that was developed to examine relationships between immigration transition and physical activity; it shows that immigration transition can play an essential role in influencing health behaviors of immigrant populations in the United States. The NH theoretical model can be widely used in nursing practice and research that focus on immigrant women and their health behaviors. Health care providers need to consider the influences of immigration transition to promote immigrant women's physical activity. PMID:26502554
Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean
NASA Astrophysics Data System (ADS)
Phalippou, L.; Demeestere, F.
2011-12-01
The SAR mode of SIRAL-2 on board Cryosat-2 has been designed to measure primarily sea ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry over the open ocean. KR's results were mostly based on 'rule of thumb' considerations on speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over ocean with a focus on the forward modelling of the power waveforms. The accuracies of geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. NFM of the power waveform avoids analytical approximation, a guarantee of minimising geophysically dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique was not used in the field of conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over open ocean. Since PE 2007, improvements have been brought to the forward model and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern range impulse response, azimuth impulse response
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-01
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
Culture and Developmental Trajectories: A Discussion on Contemporary Theoretical Models
ERIC Educational Resources Information Center
de Carvalho, Rafael Vera Cruz; Seidl-de-Moura, Maria Lucia; Martins, Gabriela Dal Forno; Vieira, Mauro Luís
2014-01-01
This paper aims to describe, compare and discuss the theoretical models proposed by Patricia Greenfield, Çigdem Kagitçibasi and Heidi Keller. Their models have the common goal of understanding the developmental trajectories of self based on dimensions of autonomy and relatedness that are structured according to specific cultural and environmental…
Getting a Picture that Is Both Accurate and Stable: Situation Models and Epistemic Validation
ERIC Educational Resources Information Center
Schroeder, Sascha; Richter, Tobias; Hoever, Inga
2008-01-01
Text comprehension entails the construction of a situation model that prepares individuals for situated action. In order to meet this function, situation model representations are required to be both accurate and stable. We propose a framework according to which comprehenders rely on epistemic validation to prevent inaccurate information from…
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young
2015-07-01
This article presents an accurate finite-difference time domain (FDTD) dispersive modelling suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersive relations. To obtain accurate coefficients of QCRF, in this work, we use an analytical approach and a particle swarm optimisation (PSO) simultaneously. In specific, an analytical approach is used to obtain the QCRF matrix-solving equation and PSO is applied to adjust a weighting function of this equation. Numerical examples are used to illustrate the validity of the proposed FDTD dispersion model.
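The abstract pairs an analytical matrix-solving step with PSO for tuning a weighting function. As a rough illustration of the optimisation component only, here is a bare-bones one-dimensional PSO; the inertia and acceleration constants are generic textbook values, not those used in the paper.

```python
import random

def pso_minimize(f, lo, hi, n=20, iters=100, seed=0):
    """Minimal 1D particle swarm optimisation (illustrative sketch).
    Each particle tracks its personal best; all are pulled toward the
    swarm's global best, with inertia 0.7 and acceleration 1.5/1.5."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    pval = [f(x) for x in xs]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # clamp to bounds
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i], val
                if val < f(gbest):
                    gbest = xs[i]
    return gbest
```

In the paper's setting, `f` would score how well the QCRF fit reproduces the target dispersion relation; here any scalar objective works, e.g. `pso_minimize(lambda x: (x - 2.0)**2, -10, 10)` converges near 2.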
Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers
NASA Astrophysics Data System (ADS)
Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas
2016-10-01
A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired by using the developed model and experimental results prove to be in good agreement.
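Jointly solving the rate equations with the generalized nonlinear Schrödinger equation is involved, but the propagation core of such models is typically a split-step Fourier scheme. The sketch below (assuming NumPy is available) implements only the lossless dispersion-plus-Kerr part, with no gain term, so pulse energy is conserved; the `beta2` and `gamma` values are illustrative, not fitted to any real fiber.

```python
import numpy as np

def split_step_nlse(a0, dt, dz, steps, beta2=-0.02, gamma=1.3):
    """Symmetric split-step Fourier sketch of the lossless NLSE:
    half a dispersion step in the frequency domain, a full Kerr
    nonlinear phase rotation in time, then the second half step."""
    a = a0.astype(complex).copy()
    w = 2 * np.pi * np.fft.fftfreq(len(a), d=dt)
    half_disp = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))
    for _ in range(steps):
        a = np.fft.ifft(half_disp * np.fft.fft(a))   # half dispersion step
        a *= np.exp(1j * gamma * np.abs(a)**2 * dz)  # full nonlinear step
        a = np.fft.ifft(half_disp * np.fft.fft(a))   # half dispersion step
    return a

t = np.linspace(-20, 20, 256)
pulse = 1.0 / np.cosh(t)  # sech input pulse
out = split_step_nlse(pulse, dt=t[1] - t[0], dz=0.05, steps=100)
```

Because both sub-steps are pure phase factors, the scheme conserves pulse energy exactly up to floating-point error, which is a useful correctness check before adding the gain and loss terms a real amplifier model needs.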
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870
Organizational Learning and Product Design Management: Towards a Theoretical Model.
ERIC Educational Resources Information Center
Chiva-Gomez, Ricardo; Camison-Zornoza, Cesar; Lapiedra-Alcami, Rafael
2003-01-01
Case studies of four Spanish ceramics companies were used to construct a theoretical model of 14 factors essential to organizational learning. One set of factors is related to the conceptual-analytical phase of the product design process and the other to the creative-technical phase. All factors contributed to efficient product design management…
Healing from Childhood Sexual Abuse: A Theoretical Model
ERIC Educational Resources Information Center
Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner
2011-01-01
Childhood sexual abuse is a prevalent social and health care problem. The processes by which individuals heal from childhood sexual abuse are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from childhood sexual abuse. Community recruitment for an ongoing broader project on sexual…
The Theoretical Basis of the Effective School Improvement Model (ESI)
ERIC Educational Resources Information Center
Scheerens, Jaap; Demeuse, Marc
2005-01-01
This article describes the process of theoretical reflection that preceded the development and empirical verification of a model of "effective school improvement". The focus is on basic mechanisms that could be seen as underlying "getting things in motion" and change in education systems. Four mechanisms are distinguished: synoptic rational…
Built-in templates speed up process for making accurate models
NASA Technical Reports Server (NTRS)
1964-01-01
From accurate scale drawings of a model, photographic negatives of the cross sections are printed on thin sheets of aluminum. These cross-section images are cut out and mounted, and mahogany blocks placed between them. The wood can be worked down using the aluminum as a built-in template.
A Generalized Information Theoretical Model for Quantum Secret Sharing
NASA Astrophysics Data System (ADS)
Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming
2016-07-01
An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80 2005), which was analyzed by quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. By the analysis we propose a generalized model definition for the quantum secret sharing schemes. In our model, there are more quantum access structures which can be realized by our generalized quantum secret sharing schemes than those of the previous one. In addition, we also analyse two kinds of important quantum access structures to illustrate the existence and rationality for the generalized quantum secret sharing schemes and consider the security of the scheme by simple examples.
A theoretical model for smoking prevention studies in preteen children.
McGahee, T W; Kemp, V; Tingen, M
2000-01-01
The age of the onset of smoking is on a continual decline, with the prime age of tobacco use initiation being 12-14 years. A weakness of the limited research conducted on smoking prevention programs designed for preteen children (ages 10-12) is the lack of a well-defined theoretical basis. A theoretical perspective is needed in order to make a meaningful transition from empirical analysis to the application of knowledge. Bandura's Social Cognitive Theory (1977, 1986), the Theory of Reasoned Action (Ajzen & Fishbein, 1980), and other literature linking various concepts to smoking behaviors in preteens were used to develop a model that may be useful for smoking prevention studies in preteen children.
Theoretical modelling of the feedback stabilization of external MHD modes in toroidal geometry
NASA Astrophysics Data System (ADS)
Chance, M. S.; Chu, M. S.; Okabayashi, M.; Turnbull, A. D.
2002-03-01
A theoretical framework for understanding the feedback mechanism for stabilization of external MHD modes has been formulated. Efficient computational tools - the GATO stability code coupled with a substantially modified VACUUM code - have been developed to effectively design viable feedback systems against these modes. The analysis assumed a thin resistive shell and a feedback coil structure accurately modelled in θ and phi, albeit with only a single harmonic variation in phi. Time constants and induced currents in the enclosing resistive shell are calculated. An optimized configuration based on an idealized model has been computed for the DIII-D device. Up to 90% of the effectiveness of an ideal wall can be achieved.
Accurate characterization and modeling of transmission lines for GaAs MMIC's
NASA Astrophysics Data System (ADS)
Finlay, Hugh J.; Jansen, Rolf H.; Jenkins, John A.; Eddison, Ian G.
1988-06-01
The authors discuss computer-aided design (CAD) tools together with high-accuracy microwave measurements to realize improved design data for GaAs monolithic microwave integrated circuits (MMICs). In particular, a combined theoretical and experimental approach to the generation of an accurate design database for transmission lines on GaAs MMICs is presented. The theoretical approach is based on an improved transmission-line theory which is part of the spectral-domain hybrid-mode computer program MCLINE. The benefit of this approach in the design of multidielectric-media transmission lines is described. The program was designed to include loss mechanisms in all dielectric layers and to include conductor and surface roughness loss contributions. As an example, using GaAs ring resonator techniques covering 2 to 24 GHz, accuracies in effective dielectric constant and loss of 1 percent and 15 percent respectively, are presented. By combining theoretical and experimental techniques, a generalized MMIC microstrip design database is outlined.
Electromechanical properties of smart aggregate: theoretical modeling and experimental validation
NASA Astrophysics Data System (ADS)
Wang, Jianjun; Kong, Qingzhao; Shi, Zhifei; Song, Gangbing
2016-09-01
Smart aggregate (SA), as a piezoceramic-based multi-functional device, is formed by sandwiching two lead zirconate titanate (PZT) patches with copper shielding between a pair of solid-machined cylindrical marble blocks with epoxy. Previous research has successfully demonstrated the capability and reliability of versatile SAs to monitor the structural health of concrete structures. However, previous work concentrated mainly on the applications of SAs in structural health monitoring; no rigorous theoretical model of SAs had been proposed. In this paper, electromechanical properties of SAs were investigated using a proposed theoretical model. Based on the one-dimensional linear theory of piezo-elasticity, the dynamic solutions of a SA subjected to an external harmonic voltage were solved. Further, the electric impedance of the SA was computed, and the resonance and anti-resonance frequencies were calculated based on the derived equations. Numerical analysis was conducted to discuss the effects of the thickness of the epoxy layer and the dimension of the PZT patch on the fundamental resonance and anti-resonance frequencies as well as the corresponding electromechanical coupling factor. The dynamic solutions based on the proposed theoretical model were further experimentally verified with two SA samples. The fundamental resonance and anti-resonance frequencies of SAs show good agreement between theoretical and experimental results. The presented analysis and results contribute to the overall understanding of SA properties and help to optimize the working frequencies of SAs in structural health monitoring of civil structures.
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
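The iterative improvement heuristic at the heart of the traplining model can be sketched generically: a bee repeats foraging bouts and retains any reordering of flower visits that shortens the route. The sketch below uses a simple 2-opt segment reversal as the trial move; this is an illustrative stand-in, not the authors' exact learning rule:

```python
import math, random

def route_length(order, sites):
    """Total tour length through sites in the given order, returning to start."""
    pts = [sites[i] for i in order] + [sites[order[0]]]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def iterative_improvement(sites, n_bouts=200, seed=0):
    """Repeat foraging bouts; keep a reordering whenever it shortens the route."""
    rng = random.Random(seed)
    order = list(range(len(sites)))
    best = route_length(order, sites)
    for _ in range(n_bouts):
        i, j = sorted(rng.sample(range(len(sites)), 2))
        trial = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt reversal
        if (d := route_length(trial, sites)) < best:
            order, best = trial, d
    return order, best
```

For four flowers at the corners of a unit square, the heuristic converges from a crossing route (length 2 + 2√2) to the perimeter tour (length 4), the travelling-salesman optimum for this array.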
NASA Astrophysics Data System (ADS)
Pretzsch, Gunter
A theoretical model to determine the neutron detection efficiency of organic solid state nuclear track detectors without external radiator is described. The model involves the following calculation steps: production of heavy charged particles within the detector volume, characterization of the charged particles by appropriate physical quantities, application of suitable registration criteria, formation of etch pits. The etch pits formed are described by means of a distribution function which is doubly differential in both diameter and depth of the etch pits. The distribution function serves as the input value for the calculation of the detection efficiency. The detection efficiency is defined as the measured effect per neutron fluence. Hence it depends on the evaluation technique considered. The calculation of the distribution function is carried out for cellulose triacetate. The determination of the concrete detection efficiency using the light microscope and light transmission measurements as the evaluation technique will be described in further publications.
Theoretical model of infrared radiation of dressed human body indoors
NASA Astrophysics Data System (ADS)
Xiong, Zonglong; Yang, Kuntao
2008-02-01
Detecting the human body by infrared thermography plays an important role in medical treatment, reconnaissance, and rescue work after disasters. A theoretical model of the infrared image is a foundation for human body detection because it can improve detection capability and efficiency. The essence and significance of the information in the temperature field of the human body in an indoor environment are systematically discussed on the basis of its physical structure and thermoregulation system. The various factors that influence body temperature are analyzed, and a method for calculating the surface temperature distribution is then introduced. On the basis of infrared radiation theory, a theoretical model is proposed to calculate the radiant flux intensity of the human body. This model can be applied in many fields.
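Given a surface temperature distribution, the radiant flux calculation ultimately rests on graybody emission. A minimal sketch using the Stefan-Boltzmann law, with an assumed skin emissivity of about 0.98 (typical for human skin in the thermal infrared); this is an illustration of the underlying physics, not the paper's full model:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(t_surface_k, emissivity=0.98):
    """Graybody radiant exitance (W/m^2) of a surface at temperature t_surface_k."""
    return emissivity * SIGMA * t_surface_k ** 4

def net_radiative_loss(t_skin_k, t_walls_k, emissivity=0.98):
    """Net radiative exchange (W/m^2) with surrounding walls, treating the room
    as a large enclosure at a uniform wall temperature."""
    return emissivity * SIGMA * (t_skin_k ** 4 - t_walls_k ** 4)
```

For skin at 306 K facing walls at 293 K this gives a net radiative loss of roughly 80 W/m^2, the right order of magnitude for a resting person indoors.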
Theoretical modeling of critical temperature increase in metamaterial superconductors
NASA Astrophysics Data System (ADS)
Smolyaninov, Igor I.; Smolyaninova, Vera N.
2016-05-01
Recent experiments have demonstrated that the metamaterial approach is capable of a drastic increase of the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modeling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modeling and experimental results in both aluminum- and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium-, MgB2-, and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach ~250 K.
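The Maxwell-Garnett approximation invoked above gives the effective permittivity of spherical inclusions dispersed in a host matrix. A minimal sketch of the classic mixing rule in its general dielectric form (the paper applies it to the superconductors' dielectric response function; the formula also accepts complex permittivities):

```python
def maxwell_garnett(eps_incl, eps_matrix, fill):
    """Effective permittivity of spherical inclusions (volume fraction `fill`)
    in a host matrix, in the Maxwell-Garnett approximation."""
    num = eps_incl + 2 * eps_matrix + 2 * fill * (eps_incl - eps_matrix)
    den = eps_incl + 2 * eps_matrix - fill * (eps_incl - eps_matrix)
    return eps_matrix * num / den
```

The limits behave as expected: a zero fill fraction returns the matrix permittivity and a fill fraction of one returns the inclusion permittivity, with a smooth interpolation in between.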
A Knowledge Based Expert System to Aid Theoretical Ultrasonic Flaw Modelling
NASA Astrophysics Data System (ADS)
Robinson, Robert J.; McNab, Alistair
2005-04-01
This paper describes the culmination of three years' work at the University of Strathclyde in developing an Expert System to aid theoretical flaw modelling. The Expert System utilises four validated models to simulate flaw modelling scenarios. Under certain conditions the models may break down and produce flaw responses which cannot be considered accurate. Previously, a suitably qualified NDT engineer would have to interpret these results and update the original flaw model simulation in order to produce valid results. This was a laborious process and was restricted to those persons who had an in-depth knowledge of the operation of the validated models. The Expert System is capable of interpreting these warning flags and updating the original simulation to produce a valid modelling scenario. This paper gives a brief outline of how the Expert System operates before comparing the response of the system to that of a suitably qualified NDT engineer for a number of defect scenarios.
Development of modified cable models to simulate accurate neuronal active behaviors.
Elbasiouny, Sherif M
2014-12-01
In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted.
Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments
Kuruganti, Phani Teja
2007-01-01
As network-centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban-area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.
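As a point of reference for what site-unaware analytic models provide, the free-space path loss from the Friis relation is the usual baseline; clutter, multipath, and diffraction in built-up environments add loss on top of it, which is precisely what site-specific simulators like the one described must capture. A minimal sketch:

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss in dB from the Friis relation:
    FSPL = 20*log10(4*pi*d*f/c)."""
    c = 299792458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)
```

A familiar sanity check: doubling the distance adds 20*log10(2), about 6 dB, regardless of frequency.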
NASA Astrophysics Data System (ADS)
Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent
2013-11-01
The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.
Bai, O; Nakamura, M; Kanda, M; Nagamine, T; Shibasaki, H
2001-11-01
This study introduces a method for accurate identification of the waveform of evoked potentials by decomposing the component responses. The decomposition was achieved by zero-pole modeling of the evoked potentials in the discrete cosine transform (DCT) domain. It was found that the DCT coefficients of a component response in the evoked potentials could be modeled sufficiently well by a second-order transfer function in the DCT domain. The decomposition of the component responses was carried out by partial-fraction expansion of the estimated model for the evoked potentials, and the effectiveness of the decomposition method was evaluated both qualitatively and quantitatively. Because it separates the overlapping component responses, the proposed method enables accurate identification of the evoked potentials, which is useful for clinical and neurophysiological investigations.
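The transform domain the method works in can be illustrated with a self-contained orthonormal DCT-II/DCT-III pair; the second-order zero-pole fitting and partial-fraction expansion themselves are not reproduced here, only the forward and inverse transforms between which that modeling takes place:

```python
import math

def dct2(x):
    """Type-II DCT of a sequence, with orthonormal scaling."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n)) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct2(c):
    """Inverse of dct2 (type-III DCT with matching orthonormal scaling)."""
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] / math.sqrt(n)
        s += sum(math.sqrt(2 / n) * c[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                 for k in range(1, n))
        out.append(s)
    return out
```

With orthonormal scaling the pair is an exact round trip, so any model fitted to the DCT coefficients can be mapped back to a time-domain component response by the inverse transform.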
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other. PMID:19229307
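The error accumulation the paper quantifies has a simple generic signature: with independent additive noise on each velocity-integration step, positional error grows diffusively, like the square root of elapsed time. A random-walk illustration of that scaling (not a simulation of the attractor network itself):

```python
import math, random

def integration_error(n_steps, noise_sd, n_trials=2000, seed=1):
    """RMS positional error after integrating velocity with additive Gaussian
    noise per step; diffusive accumulation grows like sqrt(n_steps)."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(n_trials):
        err = sum(rng.gauss(0.0, noise_sd) for _ in range(n_steps))
        sq += err * err
    return math.sqrt(sq / n_trials)
```

Quadrupling the number of steps roughly doubles the RMS error, which is why a bounded tolerance on position translates into the finite integration ranges (tens of meters, minutes) reported for realistic noise levels.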
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days per decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to predict tree budburst and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of budburst and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
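A one-phase (thermal-time) model of the first family can be sketched in a few lines: forcing units accumulate on days warmer than a base temperature, and budburst is predicted when the accumulated sum reaches a critical value. The base temperature and threshold below are assumed illustrative values, not fitted parameters from any of the models discussed:

```python
def predict_budburst(daily_mean_temp, t_base=5.0, f_crit=150.0):
    """One-phase thermal-time model: return the (0-based) index of the day on
    which accumulated degree-days above t_base first reach f_crit, else None."""
    forcing = 0.0
    for day, temp in enumerate(daily_mean_temp):
        forcing += max(0.0, temp - t_base)  # daily forcing unit
        if forcing >= f_crit:
            return day
    return None
```

Warmer springs accumulate forcing faster and so predict earlier budburst, which is exactly why such models track historical warming well; what they omit is the chilling requirement of the preceding endodormancy phase.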
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator and, further, to provide accurate models of heavy-ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z
2016-09-01
The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models.
Ustinov, E A
2014-10-01
Commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method for predicting adsorption isotherms is presented, accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.
Accurate and efficient halo-based galaxy clustering modelling with simulations
NASA Astrophysics Data System (ADS)
Zheng, Zheng; Guo, Hong
2016-06-01
Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
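Underneath any 2PCF model prediction is a pair-counting estimator. A minimal sketch of binned pair counts and the natural estimator, xi(r) = (DD/RR) * (N_R(N_R-1))/(N_D(N_D-1)) - 1 — an illustration of the measurement the tabulated-halo method reproduces, not the method itself:

```python
import math

def pair_counts(points, edges):
    """Histogram of pairwise separations into the radial bins given by `edges`."""
    counts = [0] * (len(edges) - 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            r = math.dist(points[i], points[j])
            for b in range(len(counts)):
                if edges[b] <= r < edges[b + 1]:
                    counts[b] += 1
                    break
    return counts

def natural_estimator(data, randoms, edges):
    """xi(r) per bin from data-data and random-random pair counts,
    normalized by the number of distinct pairs in each catalog."""
    dd, rr = pair_counts(data, edges), pair_counts(randoms, edges)
    norm = (len(randoms) * (len(randoms) - 1)) / (len(data) * (len(data) - 1))
    return [dd[b] / rr[b] * norm - 1 if rr[b] else float("nan") for b in range(len(dd))]
```

In practice survey analyses use refined estimators (e.g. Landy-Szalay) and tree-based pair counting for speed, but the quantity being modeled is the same.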
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
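For context, the well-established beam-theory calibration that the plate-theory analysis improves upon gives the normal spring constant of a rectangular cantilever under an end load as k = E w t^3 / (4 L^3); the plate model corrects this for the Poisson and three-dimensional effects the abstract highlights. A sketch with assumed, typical silicon-lever dimensions (not values from the paper):

```python
def beam_spring_constant(width, thickness, length, youngs_modulus):
    """Beam-theory normal spring constant of a rectangular cantilever,
    k = E*w*t^3 / (4*L^3), for a point load at the free end. SI units."""
    return youngs_modulus * width * thickness ** 3 / (4 * length ** 3)

# Illustrative silicon lever: E = 169 GPa, w = 30 um, t = 2 um, L = 225 um
k = beam_spring_constant(30e-6, 2e-6, 225e-6, 169e9)  # ~1 N/m
```

The cubic dependence on thickness (and inverse cubic on length) is why small dimensional uncertainties dominate calibration error, motivating the more accurate plate-theory treatment.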
Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.
Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M
2014-12-01
Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed the Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as the comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with the discrepancies lower than 4% for all the tested parameters. This shows that an accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first instance determination of new calibration factors for non-standard radionuclides, for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regards to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the containers configuration.
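The role of a relative calibration factor can be sketched with invented numbers: the chamber's ionization current per unit activity for a nuclide, compared against a reference nuclide, converts a measured current back to an activity. All sensitivities below are hypothetical illustration values, not data from the paper:

```python
def relative_calibration_factor(current_per_bq, current_per_bq_ref):
    """Calibration factor of a nuclide relative to a reference nuclide, from
    the chamber's current per unit activity (A/Bq) for each."""
    return current_per_bq_ref / current_per_bq

def activity_from_current(current_a, current_per_bq_ref, rel_factor):
    """Recover activity (Bq) from a measured current using the reference
    sensitivity and the nuclide's relative calibration factor."""
    return current_a * rel_factor / current_per_bq_ref
```

This is the quantity a validated Monte Carlo model can supply for non-standard radionuclides or custom containers when no traceable reference source is available.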
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2016-10-01
The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in recent decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even an inability to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because this information is very scarce. Here, we evaluated the ability of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization yields a much more accurate prediction of the latter, although with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates under climate scenarios compared with models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results highlight the urgent need for large-scale measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707
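The chilling/forcing logic underlying such process-based budbreak models can be sketched as a minimal sequential two-phase model. The thresholds, units, and temperature responses below are illustrative placeholders, not the calibrated models evaluated in the study:

```python
def predict_phenology(daily_temps, t_chill=7.0, chill_req=60.0,
                      t_base=5.0, forcing_req=150.0):
    """Sequential two-phase phenology model (sketch).

    Phase 1: one chill unit accumulates per day with mean temperature
    below t_chill; endodormancy breaks when chill_req is reached.
    Phase 2: growing degree-days above t_base then accumulate until
    forcing_req is reached (budbreak). All thresholds are illustrative.
    """
    chill, forcing = 0.0, 0.0
    dormancy_break = None
    for day, temp in enumerate(daily_temps, start=1):
        if dormancy_break is None:
            if temp < t_chill:
                chill += 1.0
            if chill >= chill_req:
                dormancy_break = day
        else:
            forcing += max(0.0, temp - t_base)
            if forcing >= forcing_req:
                return dormancy_break, day  # (endodormancy break, budbreak)
    return dormancy_break, None

# Synthetic winter-to-spring series: 80 cold days, then 60 warm days.
temps = [2.0] * 80 + [12.0] * 60
print(predict_phenology(temps))
```

Calibrating such a model against the budbreak date alone leaves the chilling parameters weakly constrained, which is exactly the identifiability problem the study points to.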
5D model for accurate representation and visualization of dynamic cardiac structures
NASA Astrophysics Data System (ADS)
Lin, Wei-te; Robb, Richard A.
2000-05-01
Accurate cardiac modeling is challenging due to the intricate structure and complex contraction patterns of myocardial tissues. Fast imaging techniques can provide 4D structural information acquired as a sequence of 3D images throughout the cardiac cycle. To model the beating heart, we created a physics-based surface model that deforms between successive time points in the cardiac cycle. 3D images of canine hearts were acquired during one complete cardiac cycle using the DSR and the EBCT. The left ventricle at the first time point is reconstructed as a triangular mesh. A mass-spring physics-based deformable model, which can expand and shrink with local contraction and stretching forces distributed in an anatomically accurate simulation of cardiac motion, is applied to the initial mesh and allows it to deform to fit the left ventricle in successive time increments of the sequence. The resulting 4D model can be interactively transformed and displayed with associated regional electrical activity mapped onto anatomic surfaces, producing a 5D model that faithfully exhibits regional cardiac contraction and relaxation patterns over the entire heart. The model faithfully represents structural changes throughout the cardiac cycle. Such models provide the framework for minimizing the number of time points required to usefully depict regional motion of the myocardium and allow quantitative assessment of regional myocardial motion. The electrical activation mapping provides spatial and temporal correlation within the cardiac cycle. In procedures such as intra-cardiac catheter ablation, visualization of the dynamic model can be used to accurately localize the foci of myocardial arrhythmias and guide positioning of catheters for optimal ablation.
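A single update step of a mass-spring deformable surface of the kind described above can be sketched as follows. The spring constant, damping, and time step are illustrative, and `ext_force` stands in for the paper's anatomically distributed contraction/stretching forces:

```python
import numpy as np

def mass_spring_step(pos, vel, springs, rest_len, k=10.0, damping=0.5,
                     ext_force=None, dt=0.01, mass=1.0):
    """One explicit-Euler step of a mass-spring surface model (sketch).

    pos, vel: (N, 3) vertex positions and velocities; springs: (i, j)
    index pairs with rest lengths rest_len. ext_force stands in for
    locally distributed contraction/stretching forces.
    """
    force = np.zeros_like(pos) if ext_force is None else ext_force.copy()
    for (i, j), l0 in zip(springs, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)  # assumed nonzero for a valid mesh
        f = k * (length - l0) * d / length  # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = (vel + dt * force / mass) * (1.0 - damping * dt)  # damped update
    return pos + dt * vel, vel

# Two vertices joined by one spring, stretched past its rest length:
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vel = np.zeros((2, 3))
pos, vel = mass_spring_step(pos, vel, springs=[(0, 1)], rest_len=[1.0])
print(pos[:, 0])  # the vertices move toward each other
```

Iterating such steps until equilibrium at each time point, with the image data supplying the target surface, is the essence of fitting the mesh through the cardiac cycle.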
Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid
2016-01-01
A novel, straightforward, accessible, and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry-specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain under investigation is developed, as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is widely used in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have derived analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied to the case of small genome length N, as well as to cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
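For small genomes, the steady state that such analytic results approximate can be computed directly as the principal eigenvector of the quasispecies matrix W = Q·diag(f). A minimal sketch with a single-peak fitness landscape; all parameter values are illustrative:

```python
import numpy as np
from itertools import product

def eigen_steady_state(n_sites, fitness, mu=0.05):
    """Steady state of the Eigen (quasispecies) model by direct
    diagonalization, for comparison with analytic approximations.

    Genomes are binary strings of length n_sites; fitness maps a
    genome tuple to its replication rate; mu is the per-site mutation
    probability. Feasible only for small genomes (2**n_sites types).
    """
    genomes = list(product((0, 1), repeat=n_sites))
    n = len(genomes)
    # Mutation matrix: Q[i, j] = P(genome j replicates into genome i).
    Q = np.empty((n, n))
    for i, gi in enumerate(genomes):
        for j, gj in enumerate(genomes):
            d = sum(a != b for a, b in zip(gi, gj))  # Hamming distance
            Q[i, j] = mu**d * (1 - mu)**(n_sites - d)
    W = Q * np.array([fitness(g) for g in genomes])  # W = Q diag(f)
    vals, vecs = np.linalg.eig(W)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()  # stationary relative concentrations

# Single-peak landscape: master sequence (0,...,0) has fitness 2, rest 1.
p = eigen_steady_state(4, lambda g: 2.0 if sum(g) == 0 else 1.0)
print(p[0])  # master-sequence frequency
```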
A theoretical model for smoking prevention studies in preteen children.
McGahee, T W; Kemp, V; Tingen, M
2000-01-01
The age of onset of smoking is in continual decline, with the prime age of tobacco use initiation being 12-14 years. A weakness of the limited research on smoking prevention programs designed for preteen children (ages 10-12) is the lack of a well-defined theoretical basis. A theoretical perspective is needed in order to make a meaningful transition from empirical analysis to application of knowledge. Bandura's Social Cognitive Theory (1977, 1986), the Theory of Reasoned Action (Ajzen & Fishbein, 1980), and other literature linking various concepts to smoking behaviors in preteens were used to develop a model that may be useful for smoking prevention studies in preteen children. PMID:12026266
Theoretical model for plasma expansion generated by hypervelocity impact
Ju, Yuanyuan; Zhang, Qingming; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei; Gong, Zizheng
2014-09-15
Hypervelocity impact experiments of a spherical LY12 aluminum projectile (6.4 mm diameter) on an LY12 aluminum target (23 mm thick) were conducted using a two-stage light gas gun, at impact velocities of 5.2, 5.7, and 6.3 km/s. The experimental results show that the plasma phase transition appears under the current experimental conditions, and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and are proportional to the third power of the impact velocity, i.e., (T_e, n_e) ∝ v_p^3. Based on the experimental results, a theoretical model of the plasma expansion is developed, and the theoretical results are consistent with the experimental data.
A Modified Theoretical Model of Intrinsic Hardness of Crystalline Solids
Dai, Fu-Zhi; Zhou, Yanchun
2016-01-01
Super-hard materials have been extensively investigated due to their practical importance in numerous industrial applications. To stimulate the design and exploration of new super-hard materials, microscopic models that elucidate the fundamental factors controlling hardness are desirable. The present work modified the theoretical model of intrinsic hardness proposed by Gao. In the modification, we emphasize the critical role of appropriately decomposing a crystal to pseudo-binary crystals, which should be carried out based on the valence electron population of each bond. After modification, the model becomes self-consistent and predicts well the hardness values of many crystals, including crystals composed of complex chemical bonds. The modified model provides fundamental insights into the nature of hardness, which can facilitate the quest for intrinsic super-hard materials. PMID:27604165
NASA Astrophysics Data System (ADS)
Jun, Xu; Bo, You; Xin, Li; Juan, Cui
2007-12-01
To measure temperature accurately, a novel temperature sensor based on a quartz tuning fork resonator has been designed. Its principle is that the resonant frequency of the quartz resonator changes with temperature. The tuning fork resonator was designed with a new doubly rotated cut and operates in the flexural vibration mode as a temperature sensor. The characteristics of the sensor were evaluated, and the results met the development targets for the temperature sensor. A theoretical model for the temperature sensing has been developed, and the sensor structure was analysed and optimized by the finite element method (FEM), including the tuning fork geometry, the tine electrode pattern, and the size of the sensor elements. The performance curve of output versus measured temperature is given. Results from theoretical analysis and experiments indicate that the sensor's sensitivity can reach 60 ppm °C-1 over a measured temperature range of 0 to 100 °C.
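The sensing principle (resonant frequency varying roughly linearly with temperature at ~60 ppm °C-1) can be inverted with a one-line calculation. The nominal frequency and calibration point below are illustrative assumptions, not values from the paper:

```python
def temperature_from_frequency(f_hz, f0_hz, sensitivity_ppm_per_c=60.0,
                               t0_c=0.0):
    """Invert a linear frequency-temperature characteristic (sketch).

    Assumes f(T) = f0 * (1 + s * (T - T0)) with s in ppm/°C, matching
    the ~60 ppm/°C sensitivity quoted above; f0_hz and t0_c form an
    illustrative calibration point.
    """
    s = sensitivity_ppm_per_c * 1e-6
    return t0_c + (f_hz - f0_hz) / (f0_hz * s)

f0 = 32768.0  # a common tuning-fork nominal frequency (illustrative)
print(temperature_from_frequency(f0 * (1 + 60e-6 * 50), f0))
```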
Healing from Childhood Sexual Abuse: A Theoretical Model
Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner
2014-01-01
Childhood sexual abuse (CSA) is a prevalent social and healthcare problem. The processes by which individuals heal from CSA are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from CSA. Community recruitment for an on-going, broader project on sexual violence throughout the lifespan, referred to as the Sexual Violence Study, yielded a subsample of 48 women and 47 men who had experienced CSA. During semi-structured, open-ended interviews, they were asked to describe their experiences with healing from CSA and other victimization throughout their lives. Constructivist grounded theory methods were used with these data to develop constructs and hypotheses about healing. For the Sexual Violence Study, frameworks were developed to describe the participants' life patterns, parenting experiences, disclosures about sexual violence, spirituality, and altruism. Several analytic techniques were used to synthesize the findings of these frameworks to develop an overarching theoretical model that describes healing from CSA. The model includes four stages of healing, five domains of functioning, and six enabling factors that facilitate movement from one stage to the next. The findings indicate that healing is a complex and dynamic trajectory. The model can be used to alert clinicians to a variety of processes and enabling factors that facilitate healing in several domains and to guide discussions on important issues related to healing from CSA. PMID:21812546
Theoretical consideration of a microcontinuum model of graphene
NASA Astrophysics Data System (ADS)
Yang, Gang; Huang, Zaixing; Gao, Cun-Fa; Zhang, Bin
2016-05-01
A microcontinuum model of graphene is proposed based on micromorphic theory, in which the planar Bravais cell of graphene crystal is taken as the basal element of finite size. Governing equations including the macro-displacements and the micro-deformations of the basal element are modified and derived in global coordinates. Since independent freedom degrees of the basal element are closely related to the modes of phonon dispersions, the secular equations in micromorphic form are obtained by substituting the assumed harmonic wave equations into the governing equations, and simplified further according to the properties of phonon dispersion relations of two-dimensional (2D) crystals. Thus, the constitutive equations of the microcontinuum model are confirmed, in which the constitutive constants are determined by fitting the data of experimental and theoretical phonon dispersion relations in literature respectively. By employing the 2D microcontinuum model, we obtained sound velocities, Rayleigh velocity and elastic moduli of graphene, which show good agreements with available experimental or theoretical values, indicating that the current model would be another efficient and reliable methodology to study the mechanical behaviors of graphene.
NASA Astrophysics Data System (ADS)
Qiuyang, He; Yue, Xu; Feifei, Zhao
2013-10-01
An accurate and complete circuit simulation model for single-photon avalanche diodes (SPADs) is presented. The derived model can simulate not only the static DC and dynamic AC behaviors of a SPAD operating in Geiger mode, but also the second-breakdown and forward-bias behaviors. In particular, it accounts for important statistical effects such as the dark-counting and after-pulsing phenomena. The model is implemented in the Verilog-A description language and can be run directly in commercial simulators such as Cadence Spectre. The Spectre simulation results agree very well with experimental results reported in the open literature. The model shows high simulation accuracy and a very fast simulation rate.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man
2014-04-15
According to the strong nonlinear electromagnetic characteristics of switched reluctance machine (SRM), a novel accurate modeling method is proposed based on hybrid trained wavelet neural network (WNN) which combines improved genetic algorithm (GA) with gradient descent (GD) method to train the network. In the novel method, WNN is trained by GD method based on the initial weights obtained per improved GA optimization, and the global parallel searching capability of stochastic algorithm and local convergence speed of deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions meet well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.
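The hybrid idea of combining a stochastic global search with deterministic gradient descent can be illustrated on a toy multimodal loss (a stand-in for the WNN training problem, not the authors' implementation): a GA-like population search picks a good starting point, and gradient descent then refines it.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy multimodal loss standing in for a network training error.
    return np.sin(3.0 * w) + 0.1 * w**2

def grad(w, eps=1e-6):
    # Central-difference gradient, standing in for backpropagation.
    return (loss(w + eps) - loss(w - eps)) / (2.0 * eps)

# Stage 1 (GA-like global search): evaluate a population, keep the fittest.
population = rng.uniform(-5.0, 5.0, size=50)
w = population[np.argmin([loss(p) for p in population])]

# Stage 2 (gradient descent): refine the best individual locally.
for _ in range(200):
    w -= 0.05 * grad(w)

print(w, loss(w))
```

A plain gradient-descent run from a random start can stall in a poor local minimum; seeding it with the best member of a sampled population is the essence of the hybrid scheme.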
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL
NASA Astrophysics Data System (ADS)
Ciambur, B. C.
2015-09-01
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit,” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.
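The core idea above, parametrizing quasi-elliptical isophotes by the eccentric anomaly and adding Fourier harmonics, can be sketched as follows. This is an illustrative parametrization, not the Isofit implementation:

```python
import numpy as np

def isophote(a=1.0, b=0.6, harmonics=None, n=360):
    """Sample a quasi-elliptical isophote parametrized by the
    eccentric anomaly psi (a sketch of the idea, not the Isofit code).

    With no harmonics this is the exact ellipse x = a*cos(psi),
    y = b*sin(psi). Fourier perturbations {order: amplitude} of the
    radius, expressed in psi, model deviations from ellipticity such
    as disky or boxy (order 4) or X-shaped/peanut isophotes.
    """
    psi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ex, ey = a * np.cos(psi), b * np.sin(psi)
    r = np.hypot(ex, ey)
    for order, amp in (harmonics or {}).items():
        r = r + amp * np.cos(order * psi)
    theta = np.arctan2(ey, ex)  # polar angle of each sampled point
    return r * np.cos(theta), r * np.sin(theta)

x, y = isophote(harmonics={4: 0.05})  # a mildly disky isophote
```

Fitting then amounts to adjusting a, b, and the harmonic amplitudes so that the image intensity sampled along the curve is as constant as possible.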
Game-Theoretic Models of Information Overload in Social Networks
NASA Astrophysics Data System (ADS)
Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin
We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.
Information-Theoretic Benchmarking of Land Surface Models
NASA Astrophysics Data System (ADS)
Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong
2016-04-01
Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, the last of which describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
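The kind of information-use-efficiency comparison described above can be sketched with a histogram estimate of mutual information on synthetic data. All variables and the deliberately lossy "model" below are illustrative, not the NLDAS setup:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram estimate of the mutual information I(X; Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
forcing = rng.normal(size=5000)                # boundary conditions
obs = forcing + 0.3 * rng.normal(size=5000)    # observed state/flux
model = (forcing > 0).astype(float)            # a deliberately lossy model

available = mutual_information(obs, forcing)   # info in the forcing
used = mutual_information(obs, model)          # info the model extracts
print(f"information-use efficiency ~ {used / available:.2f}")
```

The gap between `used` and `available` is the model-structure information loss that this style of benchmarking quantifies.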
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.
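The rank-correlation check used for validation can be reproduced with a few lines. The field strengths below are synthetic placeholders, not the Amsterdam measurements:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling; fine for continuous field strengths)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(1)
# Hypothetical field strengths (V/m) at 100 shared outdoor locations.
measured = rng.lognormal(mean=-2.0, sigma=1.0, size=100)
modelled = measured * rng.lognormal(mean=0.0, sigma=0.5, size=100)

rho = spearman(measured, modelled)
print(f"Spearman rho = {rho:.2f}")  # the study reports rho > 0.6
```

Rank correlation is the natural metric here because the stated goal is *ranking* exposure levels for epidemiology, not reproducing absolute field strengths.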
Theoretical models for coronary vascular biomechanics: Progress & challenges
Waters, Sarah L.; Alastruey, Jordi; Beard, Daniel A.; Bovendeerd, Peter H.M.; Davies, Peter F.; Jayaraman, Girija; Jensen, Oliver E.; Lee, Jack; Parker, Kim H.; Popel, Aleksander S.; Secomb, Timothy W.; Siebes, Maria; Sherwin, Spencer J.; Shipley, Rebecca J.; Smith, Nicolas P.; van de Vosse, Frans N.
2013-01-01
A key aim of the cardiac Physiome Project is to develop theoretical models to simulate the functional behaviour of the heart under physiological and pathophysiological conditions. Heart function is critically dependent on the delivery of an adequate blood supply to the myocardium via the coronary vasculature. Key to this critical function of the coronary vasculature is system dynamics that emerge via the interactions of the numerous constituent components at a range of spatial and temporal scales. Here, we focus on several components for which theoretical approaches can be applied, including vascular structure and mechanics, blood flow and mass transport, flow regulation, angiogenesis and vascular remodelling, and vascular cellular mechanics. For each component, we summarise the current state of the art in model development, and discuss areas requiring further research. We highlight the major challenges associated with integrating the component models to develop a computational tool that can ultimately be used to simulate the responses of the coronary vascular system to changing demands and to diseases and therapies. PMID:21040741
Self-Assembled Magnetic Surface Swimmers: Theoretical Model
NASA Astrophysics Data System (ADS)
Aranson, Igor; Belkin, Maxim; Snezhko, Alexey
2009-03-01
The mechanisms of self-propulsion of living microorganisms are a fascinating phenomenon attracting enormous attention in the physics community. A new type of self-assembled micro-swimmer, the magnetic snake, is an excellent tool for modeling locomotion in a simple table-top experiment. The snakes self-assemble from a dispersion of magnetic microparticles suspended at the liquid-air interface and subjected to an alternating magnetic field. The formation and dynamics of these swimmers are captured in the framework of a theoretical model coupling a paradigm equation for the amplitude of the surface waves, a conservation law for the density of particles, and the Navier-Stokes equation for the hydrodynamic flows. The results of the continuum modeling are supported by hybrid molecular dynamics simulations of magnetic particles floating on the surface of the fluid.
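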
Theoretical Models and Operational Frameworks in Public Health Ethics
Petrini, Carlo
2010-01-01
The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided. PMID:20195441
Theoretical Modeling of Mechanical-Electrical Coupling of Carbon Nanotubes
Lu, Jun-Qiang; Jiang, Hanqiang
2008-01-01
Carbon nanotubes have been studied extensively due to their unique electrical, mechanical, optical, and thermal properties. The coupling between the electrical and mechanical properties of carbon nanotubes has emerged as a new field that raises both interesting fundamental problems and huge application potential. In this article, we review our recent work on the theoretical modeling of the mechanical-electrical coupling of carbon nanotubes subjected to various loading conditions, including tension/compression, torsion, and squashing. Some related work by other groups is also mentioned.
Accuracy Analysis of a Box-wing Theoretical SRP Model
NASA Astrophysics Data System (ADS)
Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui
2016-07-01
For the BeiDou satellite navigation system (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for high-precision applications, especially with the establishment of the global BDS constellation in the future, and the accuracy of the broadcast ephemeris needs to be improved. We therefore established a box-wing theoretical SRP model with fine structural detail that includes the conical shadow factors of the Earth and Moon. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model achieves higher accuracy than the Bern empirical model for precise orbit determination (POD) and prediction of the GPS IIF satellites, with a 3D orbit RMS of about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions; the longer the prediction arc, the more significant the improvement. The orbit prediction errors with the physical SRP model for 1-day, 3-day, and 7-day arcs are 0.4 m, 2.0 m, and 10.0 m respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We then applied this approach to BDS, deriving an SRP model for the BeiDou satellites, and tested it with one month of BeiDou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which estimates only the along-track and cross-track forces and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement, and the remaining empirical force is reduced significantly for the present BeiDou constellation.
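For orientation, the order of magnitude of SRP acceleration can be sketched with the simple cannonball model, the baseline that box-wing models refine by treating bus faces and solar panels separately. The area, mass, and radiation pressure coefficient below are illustrative, not values for any particular satellite.

```python
# Cannonball SRP: a = (S/c) * Cr * A / m, with S the solar constant at 1 AU.

S = 1361.0        # W/m^2, solar constant at 1 AU
c = 299792458.0   # m/s, speed of light
area = 10.0       # m^2, effective cross-section (illustrative)
mass = 1000.0     # kg (illustrative)
cr = 1.3          # radiation pressure coefficient (illustrative)

pressure = S / c                      # ~4.54e-6 N/m^2
a_srp = pressure * cr * area / mass   # m/s^2, of order 1e-8 to 1e-7
```

Even at this tiny magnitude, SRP is the dominant non-gravitational force on navigation satellites, which is why mismodelling it degrades multi-day orbit predictions as reported above.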
NASA Astrophysics Data System (ADS)
McCullagh, Nuala; Jeong, Donghui; Szalay, Alexander S.
2016-01-01
Accurate modelling of non-linearities in the galaxy bispectrum, the Fourier transform of the galaxy three-point correlation function, is essential to fully exploit it as a cosmological probe. In this paper, we present numerical and theoretical challenges in modelling the non-linear bispectrum. First, we test the robustness of the matter bispectrum measured from N-body simulations using different initial conditions generators. We run a suite of N-body simulations using the Zel'dovich approximation and second-order Lagrangian perturbation theory (2LPT) at different starting redshifts, and find that transients from initial decaying modes systematically reduce the non-linearities in the matter bispectrum. To achieve 1 per cent accuracy in the matter bispectrum at z ≤ 3 on scales k < 1 h Mpc-1, 2LPT initial conditions generator with initial redshift z ≳ 100 is required. We then compare various analytical formulas and empirical fitting functions for modelling the non-linear matter bispectrum, and discuss the regimes for which each is valid. We find that the next-to-leading order (one-loop) correction from standard perturbation theory matches with N-body results on quasi-linear scales for z ≥ 1. We find that the fitting formula in Gil-Marín et al. accurately predicts the matter bispectrum for z ≤ 1 on a wide range of scales, but at higher redshifts, the fitting formula given in Scoccimarro & Couchman gives the best agreement with measurements from N-body simulations.
Mason, Philip E; Wernersson, Erik; Jungwirth, Pavel
2012-07-19
The carbonate ion plays a central role in the biochemical formation of the shells of aquatic life, which is an important path for carbon dioxide sequestration. Given the vital role of carbonate in this and other contexts, it is imperative to develop accurate models for such a high-charge-density ion. As a divalent ion, carbonate has a strong polarizing effect on surrounding water molecules, which raises the question of whether it is possible to describe such systems accurately without including polarization. It has recently been suggested that the lack of electronic polarization in nonpolarizable water models can be effectively compensated for by introducing an electronic dielectric continuum, which, with respect to the forces between atoms, is equivalent to rescaling the ionic charges. Given how widely nonpolarizable models are used to model electrolyte solutions, establishing the experimental validity of this suggestion is imperative. Here, we examine a stringent test for such models: a comparison of the difference between the neutron scattering structure factors of K2CO3 and KNO3 solutions and that predicted by molecular dynamics simulations of the same systems with various models. We compare standard nonpolarizable simulations in SPC/E water to analogous simulations with effective ion charges, as well as simulations in explicitly polarizable POL3 water (which, however, has only about half the experimental polarizability). We find that the simulation with rescaled charges is in very good agreement with the experimental data, significantly better than the nonpolarizable simulation and even better than the explicitly polarizable POL3 model.
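The charge rescaling mentioned above (the electronic continuum correction) divides each ionic charge by the square root of the electronic (high-frequency) dielectric constant of the solvent, about 1.78 for water, giving the commonly quoted ~0.75 scale factor. A one-line sketch:

```python
# Electronic continuum correction (ECC): q_eff = q / sqrt(eps_el).
# eps_el ~ 1.78 is the electronic dielectric constant of water.

import math

def ecc_charge(q, eps_el=1.78):
    """Effective charge mimicking electronic polarization in a
    nonpolarizable force field."""
    return q / math.sqrt(eps_el)

q_carbonate = -2.0               # formal charge of the CO3(2-) ion
q_eff = ecc_charge(q_carbonate)  # ~ -1.50 e
```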
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
NASA Astrophysics Data System (ADS)
Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.
2012-11-01
A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.
Double cluster heads model for secure and accurate data fusion in wireless sensor networks.
Fu, Jun-Song; Liu, Yun
2015-01-19
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Unlike traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where a dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds a threshold preset by the users, the cluster heads are added to a blacklist and must be re-elected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and remove compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performs very well in both data fusion security and accuracy.
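The base-station consistency check described above can be sketched as follows. The fusion function, the dissimilarity measure (relative difference), and all names and numbers here are illustrative assumptions, not the paper's exact definitions.

```python
# DCHM-style check: two cluster heads fuse readings independently; the base
# station compares the results and blacklists the cluster's heads when the
# dissimilarity coefficient exceeds a preset threshold.

def fuse(readings):
    """Toy data fusion: average of the cluster's sensor readings."""
    return sum(readings) / len(readings)

def dissimilarity(a, b):
    """Relative difference between the two fusion results."""
    denom = max(abs(a), abs(b), 1e-12)
    return abs(a - b) / denom

def base_station_check(r1, r2, threshold, blacklist, cluster_id):
    """Blacklist the heads and request re-election if results diverge."""
    if dissimilarity(r1, r2) > threshold:
        blacklist.add(cluster_id)
        return "reelect"
    return "accept"

blacklist = set()
status = base_station_check(fuse([20.1, 19.9, 20.0]),   # cluster head 1
                            fuse([20.2, 19.8, 20.1]),   # cluster head 2
                            threshold=0.05,
                            blacklist=blacklist,
                            cluster_id="cluster-7")
```

Here the two fusion results agree to well within 5%, so the cluster is accepted and nothing is blacklisted.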
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate, and yields better reconstruction quality, than the line integral model (LIM). However, computing the system matrix for AIM is more complex and time-consuming than for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection areas into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computing the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. This fast reconstruction speed was achieved without compromising image quality.
Applying an accurate spherical model to gamma-ray burst afterglow observations
NASA Astrophysics Data System (ADS)
Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.
2013-05-01
We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1992-01-01
The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from CAMM expansion is convergent up to R-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain more accurate description of electrostatic properties.
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
NASA Astrophysics Data System (ADS)
Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart
2013-09-01
The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
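The shortest path method referred to above computes first-arrival traveltimes by running Dijkstra's algorithm on a graph whose edge weights are traveltimes (edge length divided by local velocity). The following is a minimal 2-D Cartesian sketch of that base algorithm, not the multistage irregular spherical-earth scheme of the paper; grid spacing, connectivity, and velocities are illustrative.

```python
# First-arrival traveltime tracking with Dijkstra's algorithm on a grid.

import heapq
import math

def first_arrivals(velocity, src):
    """Dijkstra first-arrival times from src on a grid of node velocities.
    velocity: 2-D list (km/s); 8-neighbour connectivity, 1 km spacing."""
    ny, nx = len(velocity), len(velocity[0])
    t = [[math.inf] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        ti, (i, j) = heapq.heappop(heap)
        if ti > t[i][j]:
            continue  # stale heap entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    d = math.hypot(di, dj)                     # edge length, km
                    v = 0.5 * (velocity[i][j] + velocity[ni][nj])
                    tn = ti + d / v                            # edge traveltime, s
                    if tn < t[ni][nj]:
                        t[ni][nj] = tn
                        heapq.heappush(heap, (tn, (ni, nj)))
    return t

vel = [[6.0] * 5 for _ in range(5)]   # uniform 6 km/s crustal block
times = first_arrivals(vel, (0, 0))
```

The sparse connectivity is what limits angular accuracy and motivates the irregular-node refinements and the minimax extension for later phases discussed in the abstract.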
Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Gu, Lizhi
2015-09-01
The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these methods have shortcomings, such as a large amount of calculation, complex processes, and visible errors, which have to some extent considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, with coupling coalescence of the surfaces with multi-coupling point clusters under the Pro/E environment. Digitally accurate modeling of the spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter using the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and
Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges
2014-04-01
Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. Under in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE leads to compressive deformations of the insert that are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation.
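The point about a time-varying contact area can be sketched with an Archard-type wear update in pressure form, where the linear wear depth per sliding increment is dh = k·p·ds with p = load/area(t). The wear factor, load, and creep-driven area growth below are illustrative numbers, not values from the study.

```python
# Archard wear depth with a fixed contact area (elastic model) versus a
# contact area that grows over time due to creep. A larger area lowers the
# contact pressure and hence the predicted local wear depth.

def wear_depth(k, load, ds_list, areas):
    """Accumulate linear wear depth: dh = k * (load / area) * ds."""
    h = 0.0
    for ds, area in zip(ds_list, areas):
        h += k * (load / area) * ds
    return h

k = 1.0e-9       # wear factor, mm^3/(N*mm) -- illustrative
load = 1000.0    # N, applied load
ds = [1.0] * 5   # sliding increments, mm

area_rigid = [100.0] * 5                          # mm^2, area held fixed
area_creep = [100.0, 110.0, 120.0, 130.0, 140.0]  # mm^2, creep grows contact

h_rigid = wear_depth(k, load, ds, area_rigid)
h_creep = wear_depth(k, load, ds, area_creep)
```

Because the creep-capturing model spreads the same load over a growing area, it predicts a lower peak pressure and a different wear depth than the time-invariant version, which is the effect the study quantifies.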
NMR relaxation induced by iron oxide particles: testing theoretical models
NASA Astrophysics Data System (ADS)
Gossuin, Y.; Orlando, T.; Basini, M.; Henrard, D.; Lascialfari, A.; Mattea, C.; Stapf, S.; Vuong, Q. L.
2016-04-01
Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each presenting a certain validity domain that depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast, and because fitting the theory to experimental T1 data constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, was measured and compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model), with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, which constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water.
Raindrop size distribution: Fitting performance of common theoretical models
NASA Astrophysics Data System (ADS)
Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.
2016-10-01
Modelling raindrop size distribution (DSD) is a fundamental issue to connect remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask indeed are how often and how well such models fit empirical data, given that the advances in both data availability and technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the comparison of statistical fitting of three common DSD models against 2D Video Disdrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. Using the maximum likelihood method, we fit models based on lognormal, gamma and Weibull distributions to more than 42,000 1-minute drop-by-drop records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. In order to check the adequacy between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov (KS) test. Then, we apply a specific model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking, for each minute, the statistical models that pass the KS test, it can be argued that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, seem adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that frequency distributions of empirical DSDs could be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
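The fit-then-test procedure described in the abstract can be sketched with SciPy. The drop diameters below are synthetic stand-ins for a 1-minute 2DVD spectrum (the real data are not reproduced here), and note that KS p-values computed with fitted parameters are optimistic, so they are useful mainly for relative ranking:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic drop diameters in mm -- a stand-in for one minute of 2DVD data
diameters = rng.gamma(shape=3.0, scale=0.5, size=500)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull_min": stats.weibull_min}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(diameters, floc=0.0)           # maximum-likelihood fit, location pinned at 0
    d_stat, p_value = stats.kstest(diameters, name, args=params)
    fits[name] = {"params": params, "D": d_stat, "p": p_value}

for name, f in fits.items():
    print(f"{name:12s} KS D = {f['D']:.4f}  p = {f['p']:.3f}")
```

Repeating this per minute and counting rejections at a chosen significance level reproduces the rejection-rate comparison the study reports.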
Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.
Qu, Xiaohui; Persson, Kristin A
2016-09-13
A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of its solutes, potentially including both salts and redox-active molecules. PMID:27500744
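A minimal sketch of the concentration-change mechanism: an ion-pair equilibrium removes part of the free redox-active species, and the Nernst term then shifts the apparent potential. All numbers here (the association constant, concentrations) are hypothetical, and the sign of the shift depends on whether the oxidized or reduced form pairs; only the free-concentration term is tracked:

```python
import math

def free_conc(a_tot, c_tot, k_ip):
    """Free (unpaired) concentration of redox species A for A + C <-> AC
    with association constant k_ip (L/mol); concentrations in mol/L."""
    b = k_ip * (a_tot + c_tot) + 1.0
    paired = (b - math.sqrt(b * b - 4.0 * k_ip**2 * a_tot * c_tot)) / (2.0 * k_ip)
    return a_tot - paired

R, T, F = 8.314, 298.15, 96485.0
a_tot = 0.01        # total redox-active species, mol/L (illustrative)
k_ip = 50.0         # hypothetical ion-pair association constant, L/mol

shifts = []
for c_salt in (0.1, 0.5, 1.0):          # supporting-salt concentration, mol/L
    a_free = free_conc(a_tot, c_salt, k_ip)
    d_e = (R * T / F) * math.log(a_free / a_tot)    # one-electron Nernst shift, V
    shifts.append(d_e)
    print(f"[salt] = {c_salt:.1f} M -> free fraction {a_free / a_tot:.3f}, shift {1e3 * d_e:+.1f} mV")
```

More supporting salt drives the equilibrium toward pairing, so the free fraction and the apparent potential shift grow with salt concentration, which is the qualitative trend the scheme exploits.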
Vibration exercise for treatment of osteoporosis: a theoretical model.
Aleyaasin, M; Harrigan, J J
2008-10-01
Orthopaedic rehabilitation of osteoporosis by muscle vibration exercise is investigated theoretically using Wolff's theory of strain-induced bone 'remodelling'. The remodelling equation for finite amplitude vibration to be transmitted to the bone via muscle corresponds to a slowly time-varying non-linear dynamic system. This slowly time-varying system is governed by a Riccati equation with rapidly varying coefficients that oscillate with the frequency of the applied vibration. An averaging technique is used to determine the effective force transmitted to the bone. This force is expressed in terms of the stiffness and damping parameters of the connected muscle. The analytical result predicts that, in order to obtain bone reinforcement, the frequency and amplitude of vibration should not exceed specified levels. Furthermore, low-frequency vibration does not stimulate the bone sufficiently to cause significant remodelling. The theoretical model herein confirms the clinical recommendations regarding vibration exercise and its effects on rehabilitation. In a numerical example, the model predicts that a femur with reduced bone mass as a result of bed rest will be healed completely by vibration consisting of an acceleration of 2g applied at a frequency of 30 Hz over a period of 250 days.
Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling
NASA Astrophysics Data System (ADS)
Zhang, Pengjie; Zheng, Yi; Jing, Yipeng
2015-02-01
Cosmology based on large scale peculiar velocity prefers volume weighted velocity statistics. However, measuring the volume weighted velocity statistics from inhomogeneously distributed galaxies (simulation particles/halos) suffers from an inevitable and significant sampling artifact. We study this sampling artifact in the velocity power spectrum measured by the nearest particle velocity assignment method [Zheng et al., Phys. Rev. D 88, 103510 (2013)]. We derive the analytical expression of the leading and higher order terms. We find that the sampling artifact suppresses the z = 0 E-mode velocity power spectrum by ~10% at k = 0.1 h/Mpc, for samples with number density 10^-3 (Mpc/h)^-3. This suppression becomes larger for larger k and for sparser samples. We argue that this source of systematic errors in peculiar velocity cosmology, albeit severe, can be self-calibrated in the framework of our theoretical modelling. We also work out the sampling artifact in the density-velocity cross power spectrum measurement. A more robust evaluation of related statistics through simulations will be presented in a companion paper [Zheng et al., Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modelling, arXiv:1409.6809]. We also argue that similar sampling artifacts exist in other velocity assignment methods and hence must be carefully corrected to avoid systematic bias in peculiar velocity cosmology.
Toward a theoretically based measurement model of the good life.
Cheung, C K
1997-06-01
A theoretically based conceptualization of the good life should differentiate 4 dimensions: the hedonist good life, the dialectical good life, the humanist good life, and the formalist good life. These 4 dimensions incorporate previous fragmentary measures, such as life satisfaction, depression, work alienation, and marital satisfaction, to produce an integrative view. In the present study, 276 Hong Kong Chinese husbands and wives responded to a survey of 13 indicators for these 4 good life dimensions. Confirmatory hierarchical factor analysis showed that these indicators identified the 4 dimensions of the good life, which in turn converged to identify a second-order factor of the overall good life. The model demonstrates discriminant validity in that the first-order factors had high loadings on the overall good life factor despite being linked by a social desirability factor. Analysis further showed that the second-order factor model applied equally well to husbands and wives. Thus, the conceptualization appears to be theoretically and empirically adequate in incorporating previous conceptualizations of the good life. PMID:9168589
Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary, rendering the previous models inaccurate. In the course of this research it has been demonstrated that, using the simulation code MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal-coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model, including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Jenny, Patrick
2013-08-01
Different simulation methods are applicable to study turbulent mixing. When applying probability density function (PDF) methods, turbulent transport and chemical reactions appear in closed form, which is not the case in second moment closure methods (RANS). Moreover, PDF methods provide the entire joint velocity-scalar PDF instead of a limited set of moments. In PDF methods, however, a mixing model is required to account for molecular diffusion. In joint velocity-scalar PDF methods, mixing models should also account for the joint velocity-scalar statistics, which is often underappreciated in applications. The interaction by exchange with the conditional mean (IECM) model accounts for these joint statistics, but requires velocity-conditional scalar means that are expensive to compute in spatially three-dimensional settings. In this work, two alternative mixing models are presented that provide more accurate PDF predictions at reduced computational cost compared to the IECM model, since no conditional moments have to be computed. All models are tested for different mixing benchmark cases and their computational efficiencies are inspected thoroughly. The benchmark cases involve statistically homogeneous and inhomogeneous settings dealing with three streams that are characterized by two passive scalars. The inhomogeneous case clearly illustrates the importance of accounting for joint velocity-scalar statistics in the mixing model. Failure to do so leads to significant errors in the resulting scalar means, variances and other statistics.
NASA Astrophysics Data System (ADS)
Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.
2015-04-01
We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples; the Milky Way thin and thick disc, and a discy dwarf galaxy.
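The building block of such a recipe is the Miyamoto-Nagai potential; a minimal sketch of summing three components looks like the following. The parameters are made up for illustration, not the paper's fitted tables (which, for some disc thicknesses, can include a negative-mass component):

```python
import numpy as np

G = 1.0  # work in natural units

def mn_potential(R, z, M, a, b):
    """Miyamoto-Nagai disc potential at cylindrical radius R and height z."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

# Three components approximating one exponential disc (illustrative parameters)
components = [(5.0, 3.0, 0.3), (2.0, 6.0, 0.3), (1.0, 1.0, 0.3)]  # (M, a, b)

def disc_potential(R, z):
    """Sum of the three Miyamoto-Nagai components."""
    return sum(mn_potential(R, z, M, a, b) for M, a, b in components)

# Fully analytic and differentiable everywhere, e.g. for circular-velocity
# or tidal-force calculations in an N-body integrator.
R = np.array([1.0, 5.0, 20.0])
print(disc_potential(R, 0.0))
```

Because each term is a closed-form expression, gradients (and hence forces) follow by straightforward differentiation, which is what makes the recipe convenient for numerical modelling of tidal effects.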
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1984-01-01
Models and spectra of sunspots were studied, because they are important to energy balance and variability discussions. Sunspot observations in the ultraviolet region 140 to 168 nm were obtained by the NRL High Resolution Telescope and Spectrograph. Extensive photometric observations of sunspot umbrae and penumbrae in 10 channels covering the wavelength region 387 to 3800 nm were made. Cool star opacities and model atmospheres were computed. The Sun is the first test case, both to check the opacity calculations against the observed solar spectrum, and to check the purely theoretical model calculation against the observed solar energy distribution. Line lists were finally completed for all the molecules that are important in computing statistical opacities for energy balance and for radiative rate calculations in the Sun (except perhaps for sunspots). Because many of these bands are incompletely analyzed in the laboratory, the energy levels are not well enough known to predict wavelengths accurately for spectrum synthesis and for detailed comparison with the observations.
Theoretical models for Type I and Type II supernova
Woosley, S.E.; Weaver, T.A.
1985-01-01
Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented and the sensitivity to the reaction rate ¹²C(α,γ)¹⁶O explained. Stars heavier than about 20 solar masses must explode by a "delayed" mechanism not directly related to the hydrodynamical core bounce, and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs.
A Lifecourse Model of Multimorbidity Resilience: Theoretical and Research Developments.
Wister, Andrew V; Coatta, Katherine L; Schuurman, Nadine; Lear, Scott A; Rosin, Miriam; MacKey, Dawn
2016-04-01
The purpose of this article is to advance a Lifecourse Model of Multimorbidity Resilience. It focuses on the ways in which individuals face adversities associated with multimorbidity and regain a sense of wellness through a complex, dynamic phenomenon termed resilience. A comprehensive review of 112 publications (between 1995 and 2015) was conducted using several comprehensive electronic databases. Two independent researchers extracted and synthesized resilience literature with specific applications to chronic illness. The article outlines five stages of theoretical development of resilience, synthesizes these with the aging and chronic illness literature, builds a rationale for a lifecourse approach to resilience, and applies the model to multimorbidity. Cultivating and maintaining resilience is fundamental to functioning and quality of life for those with multimorbidity. We found that there are a number of gaps in both basic and applied research that need to be filled to advance knowledge and practice based on resilience approaches. PMID:27076489
Development of theoretical models of integrated millimeter wave antennas
NASA Technical Reports Server (NTRS)
Yngvesson, K. Sigfrid; Schaubert, Daniel H.
1991-01-01
Extensive radiation patterns for Linear Tapered Slot Antenna (LTSA) Single Elements are presented. The directivity of LTSA elements is predicted correctly by taking the cross polarized pattern into account. A moment method program predicts radiation patterns for air LTSAs with excellent agreement with experimental data. A moment method program was also developed for the task LTSA Array Modeling. Computations performed with this program are in excellent agreement with published results for dipole and monopole arrays, and with waveguide simulator experiments, for more complicated structures. Empirical modeling of LTSA arrays demonstrated that the maximum theoretical element gain can be obtained. Formulations were also developed for calculating the aperture efficiency of LTSA arrays used in reflector systems. It was shown that LTSA arrays used in multibeam systems have a considerable advantage in terms of higher packing density, compared with waveguide feeds. Conversion loss of 10 dB was demonstrated at 35 GHz.
NASA Technical Reports Server (NTRS)
Kopasakis, George
2014-01-01
The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development, such as flutter and inlet shock position control. The approach models atmospheric turbulence in its natural fractional-order form, which provides more accuracy compared to traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the fractional-order atmospheric turbulence modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.
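The integer- versus fractional-order distinction can be seen by comparing the textbook longitudinal gust spectra (MIL-spec forms, not necessarily the presentation's exact formulation): the Dryden PSD is a rational function of frequency, while the von Karman PSD carries a fractional 5/6 exponent that rational transfer functions can only approximate.

```python
import numpy as np

def dryden_psd_u(omega, sigma, L):
    """Dryden longitudinal gust PSD (integer-order, rational in omega)."""
    return sigma**2 * (2.0 * L / np.pi) / (1.0 + (L * omega) ** 2)

def von_karman_psd_u(omega, sigma, L):
    """von Karman longitudinal gust PSD; the 5/6 exponent makes it fractional-order."""
    return sigma**2 * (2.0 * L / np.pi) / (1.0 + (1.339 * L * omega) ** 2) ** (5.0 / 6.0)

sigma, L = 1.0, 500.0          # gust intensity (m/s) and length scale (m), illustrative
omega = np.logspace(-4, 0, 5)  # spatial frequency, rad/m
for w in omega:
    print(f"omega = {w:.1e}: Dryden = {dryden_psd_u(w, sigma, L):.3e}, "
          f"von Karman = {von_karman_psd_u(w, sigma, L):.3e}")
```

At low frequency the two spectra agree, but at high frequency Dryden rolls off as omega^-2 while von Karman rolls off as omega^-5/3, so Dryden underestimates the high-frequency gust content that matters for high-speed vehicles.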
O'Connor, James P B; Boult, Jessica K R; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff J M; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P
2016-02-15
There is a clinical need for noninvasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning, and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here, we demonstrate that OE-MRI signals are accurate, precise, and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia noninvasively and is immediately translatable to the clinic.
Fu, Q.; Sun, W.B.; Yang, P.
1998-09-01
An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (Dge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is approximately 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2016-01-01
In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the ’phase to 3D coordinates transformation’ are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553
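The "model extension plus least-squares" idea can be illustrated in miniature: rather than compensating each distortion source separately, fit a flexible phase-to-height mapping directly to calibration data, letting the extra parameters absorb the unexplained errors. The quadratic distortion and noise level below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
# Calibration: known target heights and the (distorted, noisy) unwrapped phase
# the vision system reports for each of them.
z_cal = np.linspace(0.0, 50.0, 40)                                        # mm
phase = 0.8 * z_cal + 0.004 * z_cal**2 + rng.normal(0, 0.05, z_cal.size) # toy distortion

# Extended model: height approximated by a cubic polynomial in phase,
# with coefficients estimated by linear least squares.
A = np.vander(phase, 4)                        # columns: phase^3, phase^2, phase, 1
coef, *_ = np.linalg.lstsq(A, z_cal, rcond=None)

z_fit = A @ coef
rms = np.sqrt(np.mean((z_fit - z_cal) ** 2))
print(f"RMS calibration residual: {rms:.4f} mm")
```

The design choice is the same as in the abstract: error sources that are hard to express analytically are absorbed into extra parameters of an extended mapping, estimated once from calibration data.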
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
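One common way to obtain a diffraction-limited focus from a ray-based Monte Carlo (a standard trick, not necessarily the authors' exact scheme) is to sample launch positions and angles from the Gaussian-beam phase space; the ray ensemble then reproduces the analytic beam radius w(z) = w0*sqrt(1 + (z/zR)^2) at every plane:

```python
import numpy as np

lam, w0 = 0.8e-6, 5e-6               # wavelength and 1/e^2 waist radius, metres
zR = np.pi * w0**2 / lam             # Rayleigh range
theta0 = lam / (np.pi * w0)          # far-field 1/e^2 half-divergence

rng = np.random.default_rng(1)
n = 200_000
# For a Gaussian beam the 1/e^2 radius is twice the intensity standard deviation,
# so sample positions with sigma = w0/2 and angles with sigma = theta0/2.
x0 = rng.normal(0.0, w0 / 2.0, n)
tx = rng.normal(0.0, theta0 / 2.0, n)

def beam_radius(z):
    """Empirical 1/e^2 radius of the ray bundle propagated to plane z."""
    return 2.0 * np.std(x0 + z * tx)

for z in (0.0, zR, 3 * zR):
    analytic = w0 * np.sqrt(1.0 + (z / zR) ** 2)
    print(f"z = {z:9.2e} m: MC w = {beam_radius(z):.3e} m, Gaussian w = {analytic:.3e} m")
```

This reproduces the second moments of a Gaussian beam, including the diffraction-limited waist; phase and interference are still absent, consistent with the abstract's "rules of Gaussian optics" framing.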
Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.
2016-01-01
The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-square error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761
PET-Specific Parameters and Radiotracers in Theoretical Tumour Modelling
Marcu, Loredana G.; Bezak, Eva
2015-01-01
The innovation of computational techniques serves as an important step toward optimized, patient-specific management of cancer. In particular, in silico simulation of tumour growth and treatment response may eventually yield accurate information on disease progression, enhance the quality of cancer treatment, and explain why certain therapies are effective where others are not. In silico modelling is demonstrated to considerably benefit from information obtainable with PET and PET/CT. In particular, models have successfully integrated tumour glucose metabolism, cell proliferation, and cell oxygenation from multiple tracers in order to simulate tumour behaviour. With the development of novel radiotracers to image additional tumour phenomena, such as pH and gene expression, the value of PET and PET/CT data for use in tumour models will continue to grow. In this work, the use of PET and PET/CT information in in silico tumour models is reviewed. The various parameters that can be obtained using PET and PET/CT are detailed, as well as the radiotracers that may be used for this purpose, their utility, and limitations. The biophysical measures used to quantify PET and PET/CT data are also described. Finally, a list of in silico models that incorporate PET and/or PET/CT data is provided and reviewed. PMID:25788973
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1997-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, J. A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Bornefalk, Hans; Persson, Mats; Danielsson, Mats
2015-03-01
Material basis decomposition in the sinogram domain requires accurate knowledge of the forward model in spectral computed tomography (CT). Misspecifications beyond a certain limit will result in biased estimates and make quantum-limited (statistical-noise-dominated) quantitative CT difficult. We present a method whereby users can determine the degree of allowed misspecification error in a spectral CT forward model while still keeping quantification errors limited by the inherent statistical uncertainty. For a particular silicon-detector-based spectral CT system, we conclude that threshold determination is the most critical factor and that the bin edges need to be known to within 0.15 keV in order to perform quantum-limited material basis decomposition. The method as such is general to all multibin systems.
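The sensitivity quantified above can be illustrated numerically: shifting a single bin edge of a multibin detector by 0.15 keV redistributes counts between neighbouring bins even though the incident spectrum is unchanged. The toy spectrum, bin edges, and shift location below are invented for illustration and are not the authors' detector model.

```python
import numpy as np

def bin_counts(energies, weights, edges):
    """Histogram photon energies (keV) into detector bins defined by edges."""
    counts, _ = np.histogram(energies, bins=edges, weights=weights)
    return counts

# Toy polychromatic spectrum: a smooth, bell-shaped intensity profile
# (illustrative only -- not a real x-ray tube spectrum).
energies = np.linspace(20.0, 120.0, 1001)
weights = np.exp(-((energies - 60.0) / 25.0) ** 2)

edges = np.array([20.0, 45.0, 70.0, 120.0])        # nominal bin edges
shifted = edges + np.array([0.0, 0.15, 0.0, 0.0])  # one edge off by 0.15 keV

nominal = bin_counts(energies, weights, edges)
biased = bin_counts(energies, weights, shifted)

# Relative bias per bin introduced purely by the edge misspecification
rel_bias = (biased - nominal) / nominal
```

Moving the 45 keV edge upward transfers intensity from the middle bin to the lowest bin while the total count is conserved, which is exactly the kind of systematic bias that contaminates a basis decomposition fitted against the nominal forward model.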
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979
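The reduced-order-modeling idea behind such surrogates can be sketched in a few lines: compress a training set of waveforms into a small linear basis with an SVD, then represent an unseen waveform by its projection coefficients. The damped-sinusoid family and all numbers below are toy assumptions standing in for NR waveforms; a real surrogate additionally fits the projection coefficients as smooth functions of the physical parameters.

```python
import numpy as np

# Toy waveform family: damped sinusoids h_q(t), with q standing in for the
# physical parameter (e.g. mass ratio). Purely illustrative.
t = np.linspace(0.0, 10.0, 400)

def waveform(q):
    return np.exp(-0.1 * t) * np.sin(q * t)

# Training set over the parameter range
qs_train = np.linspace(1.0, 2.0, 30)
training = np.stack([waveform(q) for q in qs_train])   # shape (30, 400)

# Reduced basis from the SVD: the leading right-singular vectors span the
# training set to high accuracy with far fewer elements than samples.
U, s, Vt = np.linalg.svd(training, full_matrices=False)
basis = Vt[:10]                                        # 10 orthonormal modes

# "Evaluate" on a parameter not in the training set by projecting the exact
# waveform onto the basis and measuring the representation error.
h_new = waveform(1.37)
h_rec = (basis @ h_new) @ basis
rel_err = np.linalg.norm(h_new - h_rec) / np.linalg.norm(h_new)
```

The point of the construction is that evaluating the compressed model is orders of magnitude cheaper than regenerating the waveform from the underlying solver, at a representation error far below the solver's own numerical error.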
Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach.
Saa, Pedro A; Nielsen, Lars K
2016-01-01
Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
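A compressed illustration of the fitting strategy the abstract names: Approximate Bayesian Computation sidesteps the intractable likelihood by accepting parameter draws whose simulated output lies within a tolerance of the data. The one-reaction Michaelis-Menten "model", priors, noise level, and tolerance are all hypothetical placeholders for the paper's full metabolic network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy kinetic model: a single Michaelis-Menten rate v = Vmax*s/(Km+s)
# observed at a few substrate concentrations with measurement noise.
s_obs = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
true_vmax, true_km = 2.0, 1.5
data = true_vmax * s_obs / (true_km + s_obs) + rng.normal(0, 0.02, s_obs.size)

def simulate(vmax, km):
    return vmax * s_obs / (km + s_obs)

def distance(sim):
    return np.sqrt(np.mean((sim - data) ** 2))

# ABC rejection: draw parameters from the priors, keep those whose simulated
# data fall within a tolerance of the observations.
n_draws, tol = 20000, 0.08
vmax_prior = rng.uniform(0.1, 5.0, n_draws)
km_prior = rng.uniform(0.1, 5.0, n_draws)
accepted = [(v, k) for v, k in zip(vmax_prior, km_prior)
            if distance(simulate(v, k)) < tol]
post = np.array(accepted)
vmax_hat, km_hat = post.mean(axis=0)
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is what makes the approach workable for large, nonlinear kinetic models; the paper's framework adds thermodynamic feasibility constraints and more efficient sequential sampling on top of this basic scheme.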
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each are presented with reference to the experimental biophysical methods that they complement. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof; Mroczka, Janusz; Wriedt, Thomas; Riefler, Norbert
2014-06-01
In many branches of science, experiments are expensive, require specialist equipment, or are very time consuming; studying light scattering by fractal aggregates is one example. Light scattering simulations can overcome these problems and provide additional theoretical data to complete such studies. For this reason a fractal-like aggregate model, as well as fast aggregation codes, are needed. Until now, various computer models that try to mimic the physics behind this phenomenon have been developed. However, their implementations are mostly based on trial-and-error procedures. Such an approach is very time consuming, and the morphological parameters of the resulting aggregates are not exact because the postconditions (e.g. the position error) cannot be very strict. In this paper we present a very fast and accurate implementation of a tunable aggregation algorithm based on the work of Filippov et al. (2000). Randomization is reduced to its necessary minimum (our technique can be more than 1000 times faster than standard algorithms) and the position of a new particle, or a cluster, is calculated with algebraic methods. Therefore, the postconditions can be extremely strict and the resulting errors negligible (e.g. the position error can be treated as non-existent). Two different methods, based on the particle-cluster (PC) and the cluster-cluster (CC) aggregation processes, are presented.
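The key algebraic shortcut can be shown in isolation. Rather than trial-and-error placement, the distance of each new monomer from the cluster's centre of mass follows in closed form from the radius-of-gyration recursion, so the fractal scaling law holds to machine precision at every step. This sketch uses point-mass Rg, invented parameter values, and omits the contact and overlap conditions of the full algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tunable particle-cluster step: the distance d of the Nth monomer from the
# current centre of mass is fixed by requiring the fractal scaling law
# N = kf * (Rg / rp)**Df to hold exactly after every addition.
rp, kf, Df = 1.0, 1.3, 1.8   # monomer radius, prefactor, fractal dimension

def target_rg(n):
    return rp * (n / kf) ** (1.0 / Df)

positions = [np.zeros(3)]
for n in range(2, 51):
    pts = np.array(positions)
    com = pts.mean(axis=0)
    rg_prev_sq = np.mean(np.sum((pts - com) ** 2, axis=1))
    # From N*Rg_N**2 = (N-1)*Rg_{N-1}**2 + d**2*(N-1)/N, solve for d:
    d = np.sqrt(n * (n * target_rg(n) ** 2 - (n - 1) * rg_prev_sq) / (n - 1))
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)          # random direction on the unit sphere
    positions.append(com + d * u)   # sticking/overlap checks omitted here

pts = np.array(positions)
rg = float(np.sqrt(np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1))))
```

Because d is computed algebraically instead of searched for, the position "error" of the scaling postcondition is limited only by floating-point round-off, which is the sense in which it can be treated as non-existent.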
Are Quasi-Steady-State Approximated Models Suitable for Quantifying Intrinsic Noise Accurately?
Sengupta, Dola; Kar, Sandip
2015-01-01
Large gene regulatory networks (GRN) are often modeled with the quasi-steady-state approximation (QSSA) to reduce the huge computational time required for intrinsic noise quantification using the Gillespie stochastic simulation algorithm (SSA). However, the question remains whether a stochastic QSSA model measures intrinsic noise as accurately as the SSA performed on a detailed mechanistic model. To address this issue, we have constructed mechanistic and QSSA models for a few frequently observed GRNs exhibiting switching behavior and performed stochastic simulations with them. Our results strongly suggest that the performance of a stochastic QSSA model relative to the SSA performed on a mechanistic model critically relies on the absolute values of the mRNA and protein half-lives involved in the corresponding GRN. The accuracy achieved by the stochastic QSSA model depends on the bursting frequency generated by the absolute half-life of the mRNA, the protein, or both. For the GRNs considered, the stochastic QSSA quantifies intrinsic noise at the protein level with greater accuracy and for larger combinations of mRNA and protein half-life values, whereas at the mRNA level a satisfactory accuracy level is reached only for limited combinations of absolute half-life values. Further, we have clearly demonstrated that the abundance levels of mRNA and protein hardly matter for such a comparison between QSSA and mechanistic models. Based on our findings, we conclude that a QSSA model can be a good choice for evaluating intrinsic noise in other GRNs as well, provided we make a rational choice based on experimental half-life values available in the literature. PMID:26327626
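The benchmark against which QSSA noise estimates are judged is the exact SSA. For orientation, here is Gillespie's direct method on the simplest gene-expression motif, constitutive mRNA production with first-order decay, chosen for this sketch because its stationary distribution is exactly Poisson (Fano factor 1); the rate constants are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gillespie SSA: mRNA made at constant rate k, degraded at rate g per
# molecule. Stationary copy number is Poisson with mean k/g.
k, g = 10.0, 1.0
t, m, t_end = 0.0, 0, 2000.0
time_in_state = {}                  # time-weighted occupancy of each count
while t < t_end:
    a1, a2 = k, g * m               # propensities: production, degradation
    a0 = a1 + a2
    dt = rng.exponential(1.0 / a0)  # time to next reaction
    time_in_state[m] = time_in_state.get(m, 0.0) + dt
    t += dt
    if rng.random() < a1 / a0:      # choose which reaction fires
        m += 1
    else:
        m -= 1

total = sum(time_in_state.values())
mean = sum(s * w for s, w in time_in_state.items()) / total
var = sum((s - mean) ** 2 * w for s, w in time_in_state.items()) / total
fano = var / mean                   # ~1 for a Poisson stationary law
```

A QSSA-reduced version of a network replaces fast sub-steps with effective propensities and is run through the same machinery; the paper's question is precisely when the noise statistics (mean, variance, Fano factor) computed this way still match the full mechanistic SSA.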
Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.
Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek
2016-02-01
Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674
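The DEE theory the algorithm builds on can be demonstrated on a toy packing problem. Goldstein's criterion provably removes a rotamer r of residue i whenever some alternative t is better in every possible context: E_i(r) - E_i(t) + sum over j of min_s [E_ij(r,s) - E_ij(t,s)] > 0. The random energies and problem size below are invented; Fitmunk layers dense conformer libraries and an electron-density-derived energy on top of this core.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

# Toy problem: n residues, m candidate rotamers each, random self energies
# and (weaker) random symmetric pair energies.
n, m = 6, 4
E_self = rng.normal(0.0, 5.0, (n, m))
E_pair = rng.normal(0.0, 0.2, (n, n, m, m))
E_pair = (E_pair + E_pair.transpose(1, 0, 3, 2)) / 2   # symmetrize
for i in range(n):
    E_pair[i, i] = 0.0

alive = [set(range(m)) for _ in range(n)]
changed = True
while changed:                       # iterate elimination to a fixed point
    changed = False
    for i in range(n):
        for r in list(alive[i]):
            for t in list(alive[i]):
                if t == r:
                    continue
                gap = E_self[i, r] - E_self[i, t] + sum(
                    min(E_pair[i, j, r, s] - E_pair[i, j, t, s]
                        for s in alive[j])
                    for j in range(n) if j != i)
                if gap > 0:          # r can never beat t: dead end
                    alive[i].discard(r)
                    changed = True
                    break

# Brute-force check that elimination preserved the global minimum
best_e, best_assign = min(
    (sum(E_self[i, a[i]] for i in range(n))
     + sum(E_pair[i, j, a[i], a[j]]
           for i in range(n) for j in range(i + 1, n)), a)
    for a in product(range(m), repeat=n))
```

Because the criterion is deterministic, the surviving search space is guaranteed to still contain the global minimum-energy assignment, which is what makes DEE attractive for exhaustive side-chain fitting.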
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.
Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish
2016-04-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
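The linear-nonlinear structure described above can be sketched with synthetic data: stimuli are amplitudes on a small electrode array, the cell's sensitivity is a fixed weight vector (its electrical receptive field, ERF), and spikes are Bernoulli draws through a sigmoid. The paper estimates the subspace with principal components analysis; for this single-filter toy the spike-triggered average recovers the same leading direction, which keeps the sketch short. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

n_elec, n_trials = 20, 5000
w_true = np.zeros(n_elec)
w_true[:3] = [1.5, 1.0, 0.5]        # cell mainly driven by 3 electrodes

X = rng.normal(0.0, 1.0, (n_trials, n_elec))   # random stimulus patterns
p = 1.0 / (1.0 + np.exp(-(X @ w_true - 1.0)))  # sigmoid nonlinearity
spikes = rng.random(n_trials) < p              # Bernoulli spike draws

# Recover the ERF direction from the spike-triggered stimuli
w_hat = X[spikes].mean(axis=0)
w_hat /= np.linalg.norm(w_hat)
alignment = float(w_hat @ (w_true / np.linalg.norm(w_true)))

# At fixed power, ERF-proportional stimulation drives the cell harder than
# equal amplitudes on the top three electrodes (Cauchy-Schwarz).
power = 1.0
x_erf = np.sqrt(power) * w_true / np.linalg.norm(w_true)
x_eq = np.zeros(n_elec)
x_eq[:3] = np.sqrt(power / 3.0)
drive_erf = float(x_erf @ w_true)
drive_eq = float(x_eq @ w_true)
```

The power comparison mirrors the abstract's efficacy claim: among all stimulation patterns of a given norm, the one parallel to the ERF maximizes the linear drive, and hence the spiking probability under any monotonic nonlinearity.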
Charge fractionalization in oxide heterostructures: A field-theoretical model
NASA Astrophysics Data System (ADS)
Karthick Selvan, M.; Panigrahi, Prasanta K.
2016-06-01
The LaAlO3/SrTiO3 heterostructure, with polar and non-polar constituents, has been shown to exhibit interface metallic conductivity due to fractional charge transfer to the interface. The interface reconstruction by electron redistribution along the (001) orientation, in which half an electron per two-dimensional unit cell is transferred to the adjacent planes, resulting in a net transfer of half a charge to both the interface and the topmost atomic plane, has been ascribed to a polar discontinuity at the interface in the polar catastrophe model. This avoids the divergence of the electrostatic potential as the number of layers is increased, producing an oscillatory electric field and a finite potential. Akin to the description of charge fractionalization in quasi-one-dimensional polyacetylene by the field-theoretic Jackiw-Rebbi model, with fermions interacting with a topologically non-trivial background field, we show an analogous connection between the polar catastrophe model and the Bell-Rajaraman model, where the charge fractionalization occurs in the soliton-free sector as an end effect.
Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model
NASA Astrophysics Data System (ADS)
Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.
2007-05-01
Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring (SHM) of thin-walled structures, because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system, and it is made more challenging by the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill posed. Due to all these complexities, a direct solution of the damage detection and identification problem in SHM is impossible; an indirect method using the solution of the "forward problem" is therefore popular for solving the "inverse problem". This requires a fast forward-problem solver. Because of the complexities involved in the forward problem of Lamb wave scattering from damage, researchers rely primarily on numerical techniques such as FEM and BEM, but these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Development of a New Model for Accurate Prediction of Cloud Water Deposition on Vegetation
NASA Astrophysics Data System (ADS)
Katata, G.; Nagai, H.; Wrzesinsky, T.; Klemm, O.; Eugster, W.; Burkard, R.
2006-12-01
Scarcity of water resources in arid and semi-arid areas is of great concern in the light of population growth and food shortages. Several experiments focusing on cloud (fog) water deposition on the land surface suggest that cloud water plays an important role in water resources in such regions. A one-dimensional vegetation model that includes the process of cloud water deposition on vegetation has been developed to better predict this deposition. New schemes to calculate the capture efficiency of leaves, the cloud droplet size distribution, and the gravitational flux of cloud water were incorporated in the model. Model calculations were compared with the data acquired at the Norway spruce forest at the Waldstein site, Germany. High performance of the model was confirmed by comparisons of calculated net radiation, sensible and latent heat, and cloud water fluxes over the forest with measurements. The present model provided a better prediction of measured turbulent and gravitational fluxes of cloud water over the canopy than the Lovett model, which is a commonly used cloud water deposition model. Detailed calculations of evapotranspiration and of turbulent exchange of heat and water vapor within the canopy, and the corresponding model modifications, are necessary for accurate prediction of cloud water deposition. Numerical experiments to examine the dependence of cloud water deposition on the vegetation species (coniferous and broad-leaved trees, flat and cylindrical grasses) and structures (Leaf Area Index (LAI) and canopy height) are performed using the presented model. The results indicate that the differences of leaf shape and size have a large impact on cloud water deposition. Cloud water deposition also varies with the growth of vegetation and seasonal change of LAI. We found that the coniferous trees whose height and LAI are 24 m and 2.0 m2m-2, respectively, produce the largest amount of cloud water deposition in all combinations of vegetation species and structures in the
Modeling of rolling element bearing mechanics. Theoretical manual
NASA Astrophysics Data System (ADS)
Merchant, David H.; Greenhill, Lyn M.
1994-10-01
This report documents the theoretical basis for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings; duplex angular contact ball bearings; and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program; and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. A companion report addresses the input instructions for and features of the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
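The block-minima idea can be sketched with synthetic telemetry: daily minima of a latency stream are extremes, so extreme value theory applies; block minima of X are block maxima of -X. This sketch fits only the Gumbel member (shape xi = 0) of the GEV family by the method of moments and uses a stationary fit, whereas the report fits a full non-stationary GEV to real data; the lognormal latencies and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic telemetry: 200 days of 1440 lognormal transaction latencies
days, per_day = 200, 1440
latency = rng.lognormal(mean=3.0, sigma=0.4, size=(days, per_day))
daily_min = latency.min(axis=1)

# Fit a Gumbel distribution to the negated daily minima by moments:
#   mean = mu + gamma * beta,   var = pi**2 * beta**2 / 6
neg = -daily_min
beta = np.sqrt(6.0 * neg.var()) / np.pi
mu = neg.mean() - 0.5772156649 * beta        # Euler-Mascheroni constant

# Lower threshold: the level a daily minimum should undershoot only 1% of
# the time under the fitted model (the 99% Gumbel quantile, negated back).
lower_threshold = -(mu - beta * np.log(-np.log(0.99)))

# Empirical fraction of days whose minimum fell below the threshold
frac_below = float((daily_min < lower_threshold).mean())
```

Daily minima that undershoot this threshold are then flagged as candidate anomalies, mirroring the report's use of the lower threshold for detecting unsuccessful transactions.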
Modeling of rolling element bearing mechanics. Theoretical manual
NASA Technical Reports Server (NTRS)
Merchant, David H.; Greenhill, Lyn M.
1994-01-01
This report documents the theoretical basis for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings; duplex angular contact ball bearings; and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program; and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. A companion report addresses the input instructions for and features of the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases makes it possible to take decisions before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter that must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782
Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?
Searcy, Christopher A; Shaffer, H Bradley
2016-04-01
Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071
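Permutation importance, the variable ranking the abstract above relies on, is easy to illustrate outside Maxent. The sketch below uses a logistic regression on synthetic "presence" data (both the data and the model are stand-ins, not Maxent): a variable's importance is the drop in model score when that variable's values are shuffled.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Two hypothetical climate variables; only the first drives "presence".
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
print(result.importances_mean)  # the first variable dominates
```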
Lito, Patrícia F; Magalhães, Ana L; Gomes, José R B; Silva, Carlos M
2013-05-17
In this work a new model is presented for the accurate calculation of binary diffusivities (D12) of solutes infinitely diluted in gas, liquid and supercritical solvents. It is based on a Lennard-Jones (LJ) model, and contains two parameters: the molecular diameter of the solvent and a diffusion activation energy. The model is universal since it is applicable to polar, weakly polar, and non-polar solutes and/or solvents, over wide ranges of temperature and density. Its validation was accomplished with the largest database ever compiled, namely 487 systems with 8293 points in total, covering polar (180 systems/2335 points) and non-polar or weakly polar (307 systems/5958 points) mixtures, for which the average errors were 2.65% and 2.97%, respectively. With regard to the physical states of the systems, the average deviations achieved were 1.56% for gaseous (73 systems/1036 points), 2.90% for supercritical (173 systems/4398 points), and 2.92% for liquid (241 systems/2859 points) mixtures. Furthermore, the model exhibited excellent prediction ability. Ten expressions from the literature were adopted for comparison, but provided worse results or were not applicable to polar systems. A spreadsheet for D12 calculation is provided online for users in the Supplementary Data.
Towards an accurate model of the redshift-space clustering of haloes in the quasi-linear regime
NASA Astrophysics Data System (ADS)
Reid, Beth A.; White, Martin
2011-11-01
Observations of redshift-space distortions in spectroscopic galaxy surveys offer an attractive method for measuring the build-up of cosmological structure, which depends both on the expansion rate of the Universe and on our theory of gravity. The statistical precision with which redshift-space distortions can now be measured demands better control of our theoretical systematic errors. While many recent studies focus on understanding dark matter clustering in redshift space, galaxies occupy special places in the universe: dark matter haloes. In our detailed study of halo clustering and velocity statistics in 67.5 h-3 Gpc3 of N-body simulations, we uncover a complex dependence of redshift-space clustering on halo bias. We identify two distinct corrections which affect the halo redshift-space correlation function on quasi-linear scales (˜30-80 h-1 Mpc): the non-linear mapping between real-space and redshift-space positions, and the non-linear suppression of power in the velocity divergence field. We model the first non-perturbatively using the scale-dependent Gaussian streaming model, which we show is accurate at the <0.5 (2) per cent level in transforming real-space clustering and velocity statistics into redshift space on scales s > 10 (s > 25) h-1 Mpc for the monopole (quadrupole) halo correlation functions. The dominant correction to the Kaiser limit in this model scales like b3. We use standard perturbation theory to predict the real-space pairwise halo velocity statistics. Our fully analytic model is accurate at the 2 per cent level only on scales s > 40 h-1 Mpc for the range of halo masses we studied (with b= 1.4-2.8). We find that recent models of halo redshift-space clustering that neglect the corrections from the bispectrum and higher order terms from the non-linear real-space to redshift-space mapping will not have the accuracy required for current and future observational analyses. Finally, we note that our simulation results confirm the essential but non
Theoretical model for forming limit diagram predictions without initial inhomogeneity
NASA Astrophysics Data System (ADS)
Gologanu, Mihai; Comsa, Dan Sorin; Banabic, Dorel
2013-05-01
We report on our attempts to build a theoretical model for determining forming limit diagrams (FLD) based on limit analysis that, contrary to the well-known Marciniak and Kuczynski (M-K) model, does not assume the initial existence of a region with material or geometrical inhomogeneity. We first give a new interpretation based on limit analysis for the onset of necking in the M-K model. Considering the initial thickness defect along a narrow band as postulated by the M-K model, we show that incipient necking is a transition in the plastic mechanism from one of plastic flow in both the sheet and the band to another one where the sheet becomes rigid and all plastic deformation is localized in the band. We then draw on some analogies between the onset of necking in a sheet and the onset of coalescence in a porous bulk body. In fact, the main advance in coalescence modeling has been based on a similar limit analysis with an important new ingredient: the evolution of the spatial distribution of voids, due to the plastic deformation, creating weaker regions with higher porosity surrounded by sound regions with no voids. The onset of coalescence is precisely the transition from a mechanism of plastic deformation in both regions to another one, where the sound regions are rigid. We apply this new ingredient to a necking model based on limit analysis, for the first quadrant of the FLD and a porous sheet. We use Gurson's model with some recent extensions to model the porous material. We follow both the evolution of a homogeneous sheet and the evolution of the distribution of voids. At each moment we test for a potential change of plastic mechanism, by comparing the stresses in the uniform region to those in a virtual band with a larger porosity. The main difference with the coalescence of voids in a bulk solid is that the plastic mechanism for a sheet admits a supplementary degree of freedom, namely the change in the thickness of the virtual band. For strain ratios close to
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
NASA Astrophysics Data System (ADS)
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared with earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude compared with the IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
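The variational TPS idea (smoothing traded against fit) can be sketched with SciPy's `RBFInterpolator`, whose `thin_plate_spline` kernel solves exactly this problem. The TEC samples below are synthetic and the smoothing weight is arbitrary; the UWM-rt1 processing chain is, of course, far more involved.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
# Synthetic TEC samples (TECU) at scattered lon/lat points (degrees).
pts = rng.uniform([10.0, 45.0], [30.0, 60.0], size=(200, 2))
tec = 10.0 + 0.3 * (pts[:, 0] - 10.0) + 0.1 * rng.normal(size=200)

# smoothing > 0 trades data fidelity against second-derivative roughness,
# mirroring the variational TPS formulation described above.
tps = RBFInterpolator(pts, tec, kernel="thin_plate_spline", smoothing=1.0)
print(float(tps(np.array([[20.0, 52.5]]))[0]))  # close to the trend value 13
```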
Accurate calculation of conductive conductances in complex geometries for spacecrafts thermal models
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel
2016-02-01
The thermal subsystem of spacecraft and payloads is always designed with the help of Thermal Mathematical Models. In the case of the Thermal Lumped Parameter (TLP) method, the non-linear system of equations that is created is solved to calculate the temperature distribution and the heat power that flows between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods for the determination of conductive conductances exist, but they present some limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of the two new methods.
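For orientation, the quantity being approximated is the linear conductive coupling between two isothermal nodes; in the simplest one-dimensional case it reduces to the textbook formula G = kA/L (the paper's FEM-based methods target geometries where this formula breaks down). A minimal sketch with assumed material values:

```python
def conductance(k, area, length):
    """One-dimensional conductive conductance G = k * A / L, in W/K."""
    return k * area / length

# Assumed values: aluminium strut, k ~ 167 W/(m K), 4 cm^2 section, 0.5 m long.
G = conductance(167.0, 4e-4, 0.5)
Q = G * (300.0 - 280.0)   # heat flow (W) between nodes at 300 K and 280 K
print(round(G, 4), round(Q, 3))
```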
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region
NASA Astrophysics Data System (ADS)
Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.
2016-04-01
Accurate radiative transfer (RT) calculations are necessary for many earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order of magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we have extended the PCA method for RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that RT runtimes are shorter by factors of 10 to 100, while root mean square errors remain of order 0.01%.
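The redundancy that the PCA binning exploits can be sketched in a few lines: optical-property profiles within a spectral bin lie close to a low-dimensional subspace, so a handful of empirical orthogonal functions capture nearly all the variance, and costly multiple-scattering calculations are only needed for those few components. The profiles below are synthetic stand-ins, not real atmospheric data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Synthetic optical-property profiles for 1000 spectral points in one bin:
# each row concatenates 40 layer properties, built from 3 latent patterns.
base = rng.normal(size=(3, 40))
weights = rng.normal(size=(1000, 3))
profiles = weights @ base + 0.01 * rng.normal(size=(1000, 40))

pca = PCA(n_components=3).fit(profiles)
print(pca.explained_variance_ratio_.sum())  # a few EOFs capture ~all variance
```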
NASA Technical Reports Server (NTRS)
Livne, Eli
1989-01-01
A method is presented for generating mode shapes for model order reduction in a way that leads to accurate calculation of eigenvalue derivatives and eigenvalues for a class of control augmented structures. The method is based on treating degrees of freedom where control forces act or masses are changed in a manner analogous to that used for boundary degrees of freedom in component mode synthesis. It is especially suited for structures controlled by a small number of actuators and/or tuned by a small number of concentrated masses whose positions are predetermined. A control augmented multispan beam with closely spaced natural frequencies is used for numerical experimentation. A comparison with reduced-order eigenvalue sensitivity calculations based on the normal modes of the structure shows that the method presented produces significant improvements in accuracy.
An Accurately Stable Thermo-Hydro-Mechanical Model for Geo-Environmental Simulations
NASA Astrophysics Data System (ADS)
Gambolati, G.; Castelletto, N.; Ferronato, M.
2011-12-01
In real-world applications involving complex 3D heterogeneous domains the use of advanced numerical algorithms is of paramount importance to stably, accurately and efficiently solve the coupled system of partial differential equations governing the mass and the energy balance in deformable porous media. The present communication discusses a novel coupled 3-D numerical model based on a suitable combination of Finite Elements (FEs), Mixed FEs (MFEs), and Finite Volumes (FVs) developed with the aim of stabilizing the numerical solution. Elemental pressures and temperatures, nodal displacements and face normal Darcy and Fourier fluxes are the selected primary variables. Such an approach provides an element-wise conservative velocity field, with both pore pressure and stress having the same order of approximation, and allows for the accurate prediction of sharp temperature convective fronts. In particular, the flow-deformation problem is addressed jointly by FEs and MFEs and is coupled to the heat transfer equation using an ad hoc time splitting technique that separates the time temperature evolution into two partial differential equations, accounting for the convective and the diffusive contribution, respectively. The convective part is addressed by a FV scheme which proves effective in treating sharp convective fronts, while the diffusive part is solved by a MFE formulation. A staggered technique is then implemented for the global solution of the coupled thermo-hydro-mechanical problem, solving iteratively the flow-deformation and the heat transport at each time step. Finally, the model is successfully tested in realistic applications dealing with geothermal energy extraction and injection.
Theoretical models of adaptive energy management in small wintering birds.
Brodin, Anders
2007-10-29
Many small passerines are resident in forests with very cold winters. Considering their size and the adverse conditions, this is a remarkable feat that requires optimal energy management in several respects, for example regulation of body fat reserves, food hoarding and night-time hypothermia. Besides their beneficial effect on survival, these behaviours also entail various costs. The scenario is complex with many potentially important factors, and this has made 'the little bird in winter' a popular topic for theoretical modellers. Many predictions could have been made intuitively, but models have been especially important when many factors interact. Predictions that hardly could have been made without models include: (i) the minimum mortality occurs at the fat level where the marginal values of starvation risk and predation risk are equal; (ii) starvation risk may also decrease when food requirement increases; (iii) mortality from starvation may correlate positively with fat reserves; (iv) the existence of food stores can increase fitness substantially even if the food is not eaten; (v) environmental changes may induce increases or decreases in the level of reserves depending on whether changes are temporary or permanent; and (vi) hoarding can also evolve under seemingly group-selectionistic conditions.
Computational Graph Theoretical Model of the Zebrafish Sensorimotor Pathway
NASA Astrophysics Data System (ADS)
Peterson, Joshua M.; Stobb, Michael; Mazzag, Bori; Gahtan, Ethan
2011-11-01
Mapping the detailed connectivity patterns of neural circuits is a central goal of neuroscience and has been the focus of extensive current research [4, 3]. The best quantitative approach to analyze the acquired data is still unclear but graph theory has been used with success [3, 1]. We present a graph theoretical model with vertices and edges representing neurons and synaptic connections, respectively. Our system is the zebrafish posterior lateral line sensorimotor pathway. The goal of our analysis is to elucidate mechanisms of information processing in this neural pathway by comparing the mathematical properties of its graph to those of other, previously described graphs. We create a zebrafish model based on currently known anatomical data. The degree distributions and small-world measures of this model are compared to those of small-world, random and 3-compartment random graphs of the same size (with over 2500 nodes and 160,000 connections). We find that the zebrafish graph shows small-worldness similar to other neural networks and does not have a scale-free distribution of connections.
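The small-world comparison can be sketched with NetworkX: a graph is "small-world" when its clustering coefficient sits far above that of a matched random graph while its characteristic path length stays comparable. The graph below is a synthetic Watts-Strogatz model, not the zebrafish connectivity data.

```python
import math
import networkx as nx

# Synthetic small-world graph standing in for a neural connectivity graph.
G = nx.connected_watts_strogatz_graph(n=500, k=10, p=0.05, seed=0)
C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# Erdos-Renyi reference values for the same size and mean degree:
C_rand = 10 / 500                       # ~k/n
L_rand = math.log(500) / math.log(10)   # ~ln(n)/ln(k)
# Small-world: clustering far above random, path length comparable.
print(C > 5 * C_rand, L < 3 * L_rand)
```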
The S-model: A highly accurate MOST model for CAD
NASA Astrophysics Data System (ADS)
Satter, J. H.
1986-09-01
A new MOST model which combines simplicity and a logical structure with errors of only 0.5-4.5% is presented. The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation as well as the influence of the intrinsic source and drain series resistance. The decrease of the drain current due to substrate bias is incorporated too. The model is primarily intended for digital applications. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described and a new cluster parameter is introduced, which is responsible for the high accuracy of the model. The total number of parameters is 7. A still simpler β expression is derived, which is suitable for only one value of the substrate bias and contains only three parameters, while maintaining the accuracy. The way in which the parameters are determined is readily suited for automatic measurement. A simple linear regression procedure programmed in the computer, which controls the measurements, produces the parameter values.
Khalilian, Morteza; Navidbakhsh, Mahdi; Valojerdi, Mojtaba Rezazadeh; Chizari, Mahmoud; Yazdi, Poopak Eftekhari
2010-04-01
The zona pellucida (ZP) is the spherical layer that surrounds the mammalian oocyte. The physical hardness of this layer plays a crucial role in fertilization and is largely unknown because of the lack of appropriate measuring and modelling methods. The aim of this study is to measure the biomechanical properties of the ZP of human/mouse ova and to test the hypothesis that Young's modulus of the ZP varies with fertilization. Young's moduli of the ZP are determined before and after fertilization by using the micropipette aspiration technique, coupled with theoretical models of the oocyte as an elastic incompressible half-space (half-space model), an elastic compressible bilayer (layered model) or an elastic compressible shell (shell model). Comparison of the models shows that incorporation of the layered geometry of the ovum and the compressibility of the ZP in the layered and shell models may provide a means of more accurately characterizing ZP elasticity. Although the models give different results, all confirm that the ZP hardens following fertilization. As can be seen, different choices of models and experimental parameters can affect the interpretation of experimental data and lead to differing mechanical properties.
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
Franck, Christopher T; Koffarnus, Mikhail N; House, Leanna L; Bickel, Warren K
2015-01-01
The study of delay discounting, or valuation of future rewards as a function of delay, has contributed to understanding the behavioral economics of addiction. Accurate characterization of discounting can be furthered by statistical model selection given that many functions have been proposed to measure future valuation of rewards. The present study provides a convenient Bayesian model selection algorithm that selects the most probable discounting model among a set of candidate models chosen by the researcher. The approach assigns the most probable model for each individual subject. Importantly, effective delay 50 (ED50) functions as a suitable unifying measure that is computable for and comparable between a number of popular functions, including both one- and two-parameter models. The combined model selection/ED50 approach is illustrated using empirical discounting data collected from a sample of 111 undergraduate students with models proposed by Laibson (1997); Mazur (1987); Myerson & Green (1995); Rachlin (2006); and Samuelson (1937). Computer simulation suggests that the proposed Bayesian model selection approach outperforms the single model approach when data truly arise from multiple models. When a single model underlies all participant data, the simulation suggests that the proposed approach fares no worse than the single model approach.
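ED50 as a unifying measure is easy to sketch: for Mazur's hyperbolic model V = A/(1 + kD) it equals 1/k, and for Samuelson's exponential model V = A·exp(-kD) it equals ln(2)/k. The indifference points below are hypothetical, and simple least squares stands in for the Bayesian model-selection machinery.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical indifference points (fraction of the full reward) at delays (days).
delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0])
values = np.array([0.95, 0.85, 0.60, 0.40, 0.30, 0.20])

def hyperbolic(D, k):    # Mazur (1987): V = A / (1 + kD), A normalized to 1
    return 1.0 / (1.0 + k * D)

def exponential(D, k):   # Samuelson (1937): V = A * exp(-kD)
    return np.exp(-k * D)

(k_h,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
(k_e,), _ = curve_fit(exponential, delays, values, p0=[0.01])

ed50_h = 1.0 / k_h            # hyperbolic: value halves at D = 1/k
ed50_e = np.log(2.0) / k_e    # exponential: value halves at D = ln(2)/k
print(round(ed50_h, 1), round(ed50_e, 1))
```

Because both models report a delay at which subjective value halves, ED50 lets discounting rates be compared across participants even when different models fit them best.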
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana
2016-04-01
Lake morphometry refers to physical factors (shape, size, structure, etc.) that characterize the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys have been carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created with a 10*10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes as well as the advantages of digital models over traditional methods.
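The grid-based morphometric statistics can be sketched with NumPy: given a digital depth model, lake area, volume, and mean depth follow from cell counting and summation. The basin below is an idealized paraboloid, not survey data.

```python
import numpy as np

# Hypothetical digital depth model on a 10 m x 10 m grid (depths in m).
# The basin is an idealized paraboloid, max depth 12 m; NaN marks land.
x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
depth = 12.0 * (1.0 - x**2 - y**2)
depth[depth < 0] = np.nan

cell_area = 10.0 * 10.0                       # m^2 per grid cell
area = np.count_nonzero(~np.isnan(depth)) * cell_area
volume = np.nansum(depth) * cell_area
mean_depth = volume / area                    # half the max depth for a paraboloid
print(round(area / 1e6, 3), round(mean_depth, 2))  # km^2 and m
```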
A theoretical model of speed-dependent steering torque for rolling tyres
NASA Astrophysics Data System (ADS)
Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing
2016-04-01
It is well known that the tyre steering torque is highly dependent on the tyre rolling speed. In the limiting case, i.e. the parking manoeuvre, the steering torque approaches its maximum. With increasing tyre speed, the steering torque decreases rapidly. Accurate modelling of the speed-dependent behaviour of the tyre steering torque is a key factor in calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamical force and moment generation; and (3) the mixed Lagrange-Euler method for contact deformation solving. A nonlinear finite-element code has been developed to implement the proposed approach. We find that the main mechanism for the speed-dependent steering torque is turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of tyre steering torque generation, which helps to understand the speed-dependent tyre steering torque, tyre road feeling and EPS calibration.
A theoretical model for the Lorentz force particle analyzer
NASA Astrophysics Data System (ADS)
Moreau, René; Tao, Zhen; Wang, Xiaodong
2016-07-01
In a previous paper [X. Wang et al., J. Appl. Phys. 120, 014903 (2016)], several experimental devices were presented that demonstrate the efficiency of electromagnetic techniques for detecting and sizing electrically insulating particles entrained in the flow of a molten metal. In each case, a non-uniform magnetic field is applied across the flow of the electrically conducting liquid, thereby generating a braking Lorentz force on this moving medium and a reaction force on the magnet, which tends to be entrained in the flow direction. The purpose of this letter is to derive scaling laws for this Lorentz force from an elementary theoretical model. For simplicity, as in the experiments, the flowing liquid is modeled as a solid body moving with a uniform velocity U. The eddy currents in the moving domain are derived from the classic induction equation and Ohm's law, and expressions for the Lorentz force density j × B and for its integral over the entire moving domain follow. Insulating particles that may be present and entrained with this body are then treated as small disturbances in a classic perturbation analysis, leading to scaling laws for the pulses they generate in the Lorentz force. The letter thus both illustrates the eddy currents with and without insulating particles in the electrically conducting liquid and derives a key relation between the pulses in the Lorentz force and the main parameters (particle volume and dimensions of the region subjected to the magnetic field).
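The type of scaling law derived in the letter can be caricatured dimensionally: Ohm's law gives eddy currents j ~ σUB0, the force density is j × B ~ σUB0², and an insulating particle suppresses the eddy currents over roughly its own volume. A hedged back-of-the-envelope sketch (all names and numbers are illustrative, not the paper's exact results):

```python
def lorentz_braking_force(sigma, u, b0, volume):
    """Order-of-magnitude braking force on a conductor of conductivity sigma
    moving at speed u through a field b0 over a region of given volume:
    eddy currents j ~ sigma*u*b0, force density j*b0, so F ~ sigma*u*b0**2*V."""
    return sigma * u * b0**2 * volume

def relative_pulse(particle_volume, active_volume):
    """An insulating particle suppresses eddy currents over roughly its own
    volume, so the force pulse scales as dF/F ~ V_particle / V_active."""
    return particle_volume / active_volume
```

With values typical of a liquid-metal loop (σ ≈ 3.3 MS/m, U = 0.1 m/s, B0 = 0.5 T, V = 1 cm³), the braking force comes out in the 0.1 N range, consistent with forces measurable on a suspended magnet.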
Collective behavior in animal groups: theoretical models and empirical studies
Giardina, Irene
2008-01-01
Collective phenomena in animal groups have attracted much attention in recent years, becoming one of the hottest topics in ethology. There are various reasons for this. On the one hand, animal grouping provides a paradigmatic example of self-organization, where collective behavior emerges in the absence of centralized control. The mechanism of group formation, where local rules for the individuals lead to a coherent global state, is very general and transcends the detailed nature of its components. In this respect, collective animal behavior is a subject of great interdisciplinary interest. On the other hand, there are several important issues related to the biological function of grouping and its evolutionary success. Research in this field boasts a number of theoretical models, but far fewer empirical results to compare with them. For this reason, even if the general mechanisms through which self-organization is achieved are qualitatively well understood, a quantitative test of the models' assumptions is still lacking. New analyses of large groups, which require sophisticated technological procedures, can provide the necessary empirical data. PMID:19404431
A game theoretic model of drug launch in India.
Bhaduri, Saradindu; Ray, Amit Shovon
2006-01-01
There is a popular belief that drug launch is delayed in developing countries like India because of delayed transfer of technology due to a 'post-launch' imitation threat through weak intellectual property rights (IPR). In fact, this belief has been a major reason for the imposition of the Trade Related Intellectual Property Rights regime under the WTO. This argument overlooks the fact that in countries like India, with strong reverse-engineering capabilities, imitation can occur even before formal technology transfer, and it fails to recognize the first-mover advantage in pharmaceutical markets. This paper argues that the first-mover advantage is important and will vary across therapeutic areas, especially in developing countries with diverse levels of patient enlightenment and quality awareness. We construct a game-theoretic model of incomplete information to examine the delay in drug launch in terms of the costs and benefits of the first move, assumed to be primarily a function of the therapeutic area of the new drug. Our model shows that drug launch will be delayed only for external (infective/communicable) diseases, while drugs for internal, non-communicable diseases (accounting for the overwhelming majority of new drug discovery) will be launched without delay. PMID:18634701
Posttraumatic Stress Disorder: A Theoretical Model of the Hyperarousal Subtype
Weston, Charles Stewart E.
2014-01-01
Posttraumatic stress disorder (PTSD) is a frequent and distressing mental disorder, about which much remains to be learned. It is a heterogeneous disorder; the hyperarousal subtype (about 70% of occurrences and simply termed PTSD in this paper) is the topic of this article, but the dissociative subtype (about 30% of occurrences and likely involving quite different brain mechanisms) is outside its scope. A theoretical model is presented that integrates neuroscience data on diverse brain regions known to be involved in PTSD, and extensive psychiatric findings on the disorder. Specifically, the amygdala is a multifunctional brain region that is crucial to PTSD, and processes peritraumatic hyperarousal on grounded cognition principles to produce hyperarousal symptoms. Amygdala activity also modulates hippocampal function, which is supported by a large body of evidence, and likewise amygdala activity modulates several brainstem regions, visual cortex, rostral anterior cingulate cortex (rACC), and medial orbitofrontal cortex (mOFC), to produce diverse startle, visual, memory, numbing, anger, and recklessness symptoms. Additional brain regions process other aspects of peritraumatic responses to produce further symptoms. These contentions are supported by neuroimaging, neuropsychological, neuroanatomical, physiological, cognitive, and behavioral evidence. Collectively, the model offers an account of how responses at the time of trauma are transformed into an extensive array of the 20 PTSD symptoms that are specified in the Diagnostic and Statistical Manual of Mental Disorders, Fifth edition. It elucidates the neural mechanisms of a specific form of psychopathology, and accords with the Research Domain Criteria framework. PMID:24772094
Theoretical model of prion propagation: a misfolded protein induces misfolding.
Małolepsza, Edyta; Boniecki, Michal; Kolinski, Andrzej; Piela, Lucjan
2005-05-31
There is a hypothesis that dangerous diseases such as bovine spongiform encephalopathy, Creutzfeldt-Jakob, Alzheimer's, fatal familial insomnia, and several others are induced by propagation of wrong or misfolded conformations of some vital proteins. If for some reason the misfolded conformations were acquired by many such protein molecules it might lead to a "conformational" disease of the organism. Here, a theoretical model of the molecular mechanism of such a conformational disease is proposed, in which a metastable (or misfolded) form of a protein induces a similar misfolding of another protein molecule (conformational autocatalysis). First, a number of amino acid sequences composed of 32 aa have been designed that fold rapidly into a well defined native-like alpha-helical conformation. From a large number of such sequences a subset of 14 had a specific feature of their energy landscape, a well defined local energy minimum (higher than the global minimum for the alpha-helical fold) corresponding to beta-type structure. Only one of these 14 sequences exhibited a strong autocatalytic tendency to form a beta-sheet dimer capable of further propagation of protofibril-like structure. Simulations were done by using a reduced, although of high resolution, protein model and the replica exchange Monte Carlo sampling procedure. PMID:15911770
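The replica exchange Monte Carlo procedure named in the abstract can be illustrated on a much simpler bistable landscape. A toy sketch, not the authors' reduced protein model: replicas at different inverse temperatures sample a double well E(x) = (x² - 1)², with occasional neighbour swaps accepted with the standard exchange criterion (all parameters are stand-ins):

```python
import math
import random

def remc_double_well(betas, steps=2000, seed=1):
    """Toy replica-exchange Metropolis sampler on a two-minimum landscape,
    a stand-in for an alpha-vs-beta bistable conformational energy surface."""
    rng = random.Random(seed)
    energy = lambda x: (x * x - 1.0) ** 2
    xs = [0.0] * len(betas)
    for step in range(steps):
        for i, beta in enumerate(betas):            # local Metropolis moves
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            d_e = energy(trial) - energy(xs[i])
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                xs[i] = trial
        if step % 10 == 0:                          # attempt a neighbour swap
            i = rng.randrange(len(betas) - 1)
            delta = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if delta >= 0 or rng.random() < math.exp(delta):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```

The hot replicas cross the barrier easily and feed decorrelated configurations to the cold ones, which is why the method is suited to rugged landscapes like the one described in the abstract.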
NASA Astrophysics Data System (ADS)
Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.
2013-10-01
Recent observations of Saturn’s stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; a warm “beacon” associated with the powerful 2010 storm. Those signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, a particular emphasis was put on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and a spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of temperature fields obtained with this new radiative equilibrium model to those inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (20 mbar), the modeled temperature is 5-10 K too low compared to measurements. This suggests that processes other than radiative heating/cooling by trace
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
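For reference, the "actuator disk concept" that the study aims to improve upon reduces, in one-dimensional momentum theory, to a power coefficient Cp = 4a(1 - a)² for axial induction factor a, with the Betz maximum of 16/27 at a = 1/3:

```python
def actuator_disk_cp(a):
    """One-dimensional momentum theory for the actuator-disk parameterization:
    power coefficient Cp = 4*a*(1-a)**2 for axial induction factor a."""
    return 4.0 * a * (1.0 - a) ** 2
```

This captures momentum extraction but none of the asymmetric, vortex-dominated wake structure of cross-flow turbines, which is the motivation for the parameterization work described above.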
An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).
Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert
2015-08-01
The active site of mammalian purple acid phosphatases (PAPs) contains a dinuclear iron center with two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the catalytically active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Catalytically competent model systems are therefore believed to require two metal sites with different coordination geometries, to stabilize the heterovalent active form, as well as hydrogen-bond donors to enable fixation of the substrate and release of the product. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent studies, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255
Accurate assessment of mass, models and resolution by small-angle scattering
Rambo, Robert P.; Tainer, John A.
2013-01-01
Modern small-angle scattering (SAS) experiments with X-rays or neutrons provide a comprehensive, resolution-limited observation of the thermodynamic state. However, methods for evaluating mass and validating SAS-based models and resolution have been inadequate. Here, we define the volume-of-correlation, Vc: a SAS invariant derived from the scattered intensities that is specific to the structural state of the particle, yet independent of concentration and of the requirements of a compact, folded particle. We show Vc defines a ratio, Qr, that determines the molecular mass of proteins or RNA ranging from 10 to 1,000 kDa. Furthermore, we propose a statistically robust method for assessing model-data agreements (χ2free) akin to cross-validation. Our approach prevents over-fitting of the SAS data and can be used with a newly defined metric, Rsas, for quantitative evaluation of resolution. Together, these metrics (Vc, Qr, χ2free, and Rsas) provide analytical tools for unbiased and accurate macromolecular structural characterizations in solution. PMID:23619693
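The invariant can be sketched numerically: Vc is I(0) divided by the integral of q·I(q) over the measured range, and Qr = Vc²/Rg. A minimal sketch (the protein calibration constant 0.1231 is quoted from memory of the published calibration and should be verified against the paper before use):

```python
import numpy as np

def _trapezoid(y, x):
    """Plain trapezoid-rule integral (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def volume_of_correlation(q, i_q, i_zero):
    """Vc = I(0) / integral of q*I(q) dq over the measured q range."""
    return i_zero / _trapezoid(q * i_q, q)

def mass_from_qr(q, i_q, i_zero, rg, k=0.1231):
    """Concentration-independent mass estimate via Qr = Vc**2 / Rg;
    the protein constant k is an assumption to be checked, not a fit here."""
    return (volume_of_correlation(q, i_q, i_zero) ** 2 / rg) / k
```

For a pure Guinier profile I(q) = exp(-q²Rg²/3), the integral of q·I(q) over all q is 3/(2Rg²), so Vc approaches 2Rg²/3, which makes a convenient numerical check.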
Accurate Universal Models for the Mass Accretion Histories and Concentrations of Dark Matter Halos
NASA Astrophysics Data System (ADS)
Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Börner, G.
2009-12-01
A large amount of observations have constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when
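The concentration-age correlation described above has a compact closed form in the published fit. A sketch of that universal relation; the constants (4, 3.75, 8.4) are quoted here as assumptions to be checked against the paper:

```python
def halo_concentration(t, t004):
    """Universal concentration fit of the type reported by Zhao et al.:
    c depends only on the ratio of the universe age t to the age t004 at
    which the halo's main progenitor first assembled 4% of its final mass.
    c -> 4 for young halos and grows as a power of t/t004 at late times."""
    return 4.0 * (1.0 + (t / (3.75 * t004)) ** 8.4) ** (1.0 / 8.4)
```

The same function, with the same constants, is claimed to apply across cosmologies and halo masses, since all of that dependence is absorbed into t004.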
Empirical STORM-E Model. I. Theoretical and Observational Basis
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III
2013-01-01
Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented
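The linear impulse-response framework mentioned above amounts to a causal convolution of a geomagnetic index history with a response kernel. A schematic sketch; the exponential kernel shape, time constant and gain are illustrative placeholders, not the fitted STORM-E coefficients:

```python
import numpy as np

def storm_correction(ap, dt=3.0, tau=12.0, gain=0.002):
    """Storm-to-quiet correction factor modeled as linear impulse-response
    filtering of an ap-index history sampled every dt hours.
    kernel, tau and gain are illustrative, not STORM-E's fitted values."""
    lags = np.arange(0.0, 5.0 * tau, dt)          # kernel out to ~5 decay times
    kernel = gain * np.exp(-lags / tau) * dt      # discrete impulse response
    filtered = np.convolve(ap, kernel)[: len(ap)]  # causal convolution
    return 1.0 + filtered                          # quiet time -> factor of 1
```

Quiet conditions (ap = 0) return a correction factor of exactly 1, and a geomagnetic impulse produces an enhancement that decays over the kernel's memory, which is the qualitative behaviour the empirical fit encodes.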
Theoretical Modeling of (99)Tc NMR Chemical Shifts.
Hall, Gabriel B; Andersen, Amity; Washton, Nancy M; Chatterjee, Sayandev; Levitskaia, Tatiana G
2016-09-01
Technetium-99 (Tc) displays a rich chemistry due to its wide range of accessible oxidation states (from -I to +VII) and ability to form coordination compounds. Determination of Tc speciation in complex mixtures is a major challenge, and (99)Tc nuclear magnetic resonance (NMR) spectroscopy is widely used to probe chemical environments of Tc in odd oxidation states. However, interpretation of (99)Tc NMR data is hindered by the lack of reference compounds. Density functional theory (DFT) calculations can help to fill this gap, but to date few computational studies have focused on (99)Tc NMR of compounds and complexes. This work evaluates the effectiveness of both pure generalized gradient approximation and their corresponding hybrid functionals, both with and without the inclusion of scalar relativistic effects, to model the (99)Tc NMR spectra of Tc(I) carbonyl compounds. With the exception of BLYP, which performed exceptionally well overall, hybrid functionals with inclusion of scalar relativistic effects are found to be necessary to accurately calculate (99)Tc NMR spectra. The computational method developed was used to tentatively assign an experimentally observed (99)Tc NMR peak at -1204 ppm to fac-Tc(CO)3(OH)3(2-). This study examines the effectiveness of DFT computations for interpretation of the (99)Tc NMR spectra of Tc(I) coordination compounds in high salt alkaline solutions. PMID:27518482
Theoretical Biology and Medical Modelling: ensuring continued growth and future leadership.
Nishiura, Hiroshi; Rietman, Edward A; Wu, Rongling
2013-07-11
Theoretical biology encompasses a broad range of biological disciplines ranging from mathematical biology and biomathematics to philosophy of biology. Adopting a broad definition of "biology", Theoretical Biology and Medical Modelling, an open access journal, considers original research studies that focus on theoretical ideas and models associated with developments in biology and medicine.
NASA Astrophysics Data System (ADS)
Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco
2012-11-01
Molecular simulations as well as single molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions are inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clustering of which into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter, as well as to the dynamics of a disordered peptide, illustrates the possibility of building accurate descriptions of molecular processes on the sole basis of order parameter time series, without using any supplementary information. PMID:23181288
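The core idea, that two regions with the same order-parameter value but different fluctuations should be different states, can be sketched with a simple value-plus-rolling-std discretization feeding a transition matrix. A schematic version of the approach, not the authors' code (window, bin and lag choices are arbitrary):

```python
import numpy as np

def fluctuation_msm(x, window=5, bins=4, lag=1):
    """Build a row-normalized transition matrix from a 1-D order-parameter
    trace, discretizing by both local mean and local fluctuation (rolling std)
    so that states with equal means but different noise stay separate."""
    n = len(x) - window + 1
    means = np.array([x[i:i + window].mean() for i in range(n)])
    stds = np.array([x[i:i + window].std() for i in range(n)])
    edges = np.linspace(0.0, 1.0, bins + 1)[1:-1]        # internal quantiles
    mb = np.digitize(means, np.quantile(means, edges))   # value bin, 0..bins-1
    sb = np.digitize(stds, np.quantile(stds, edges))     # fluctuation bin
    states = mb * bins + sb                              # combined microstate
    T = np.zeros((bins * bins, bins * bins))
    for a, b in zip(states[:-lag], states[lag:]):
        T[a, b] += 1.0
    rows = T.sum(axis=1, keepdims=True)
    T = np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)
    return T, states
```

A trace made of a quiet segment and a noisy segment around the same mean lands in disjoint sets of microstates, which a value-only discretization would merge into one.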
Accurate Modeling of the Terrestrial Gamma-Ray Background for Homeland Security Applications
Sandness, Gerald A.; Schweppe, John E.; Hensley, Walter K.; Borgardt, James D.; Mitchell, Allison L.
2009-10-24
The Pacific Northwest National Laboratory has developed computer models to simulate the use of radiation portal monitors to screen vehicles and cargo for the presence of illicit radioactive material. The gamma radiation emitted by the vehicles or cargo containers must often be measured in the presence of a relatively large gamma-ray background mainly due to the presence of potassium, uranium, and thorium (and progeny isotopes) in the soil and surrounding building materials. This large background is often a significant limit to the detection sensitivity for items of interest and must be modeled accurately for analyzing homeland security situations. Calculations of the expected gamma-ray emission from a disk of soil and asphalt were made using the Monte Carlo transport code MCNP and were compared to measurements made at a seaport with a high-purity germanium detector. Analysis revealed that the energy spectrum of the measured background could not be reproduced unless the model included gamma rays coming from the ground out to distances of at least 300 m. The contribution from beyond about 50 m was primarily due to gamma rays that scattered in the air before entering the detectors rather than passing directly from the ground to the detectors. These skyshine gamma rays contribute tens of percent to the total gamma-ray spectrum, primarily at energies below a few hundred keV. The techniques that were developed to efficiently calculate the contributions from a large soil disk and a large air volume in a Monte Carlo simulation are described and the implications of skyshine in portal monitoring applications are discussed.
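Why the ground must be modeled out to hundreds of metres can be seen even from a toy line-of-sight integral over annuli of a uniform disk source: without attenuation the per-annulus contribution falls only as 1/r, so the far field matters. This is a schematic estimate, not the MCNP model; mu ≈ 0.009/m is a rough air attenuation value for photons of several hundred keV:

```python
import numpy as np

def ground_flux_fraction(r_cut, r_max=300.0, h=1.0, mu=0.009):
    """Fraction of the direct (unscattered) flux at a detector of height h
    that originates beyond radius r_cut, for a uniform disk source of radius
    r_max with simple exponential air attenuation mu (1/m)."""
    r = np.linspace(0.0, r_max, 60001)
    s = np.sqrt(r ** 2 + h ** 2)               # slant range to the detector
    d_phi = r / s ** 2 * np.exp(-mu * s)       # annulus term ~ 2*pi*r / (4*pi*s^2)
    w = (d_phi[1:] + d_phi[:-1]) * np.diff(r) / 2.0   # trapezoid weights
    mid = (r[1:] + r[:-1]) / 2.0
    return float(w[mid >= r_cut].sum() / w.sum())
```

Even this crude direct-path estimate assigns a non-negligible share of the flux to ground beyond 50 m; the skyshine component discussed in the abstract adds to that at low energies.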
Sequence design in lattice models by graph theoretical methods
NASA Astrophysics Data System (ADS)
Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.
2001-01-01
A general strategy has been developed based on graph theoretical methods, for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices for the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights for the vertices on a lattice site in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, in identifying the target conformation as native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).
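The vertex-weight idea can be caricatured in the HP framework: score each site of a given lattice conformation by its number of non-bonded nearest-neighbour contacts (a crude topological index) and place the hydrophobic residues at the most buried sites. A toy sketch, not the authors' actual index definitions:

```python
import numpy as np

def design_hp_sequence(coords, n_h):
    """Given chain coordinates on a cubic lattice and a fixed composition of
    n_h hydrophobic residues, assign H to the sites with the most non-bonded
    lattice contacts (the most buried vertices) and P to the rest."""
    coords = np.asarray(coords)
    n = len(coords)
    contacts = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            # non-bonded (|i-j| > 1) pairs one lattice step apart
            if abs(i - j) > 1 and np.abs(coords[i] - coords[j]).sum() == 1:
                contacts[i] += 1
    order = np.argsort(-contacts, kind="stable")  # most-contacted sites first
    seq = np.array(["P"] * n)
    seq[order[:n_h]] = "H"
    return "".join(seq), contacts
```

In the HP model, H-H contacts are the only favourable interactions, so rewarding buried vertices with H residues is exactly the kind of topology-only heuristic the abstract describes.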
Theoretical model of superconducting spintronic SIsFS devices
NASA Astrophysics Data System (ADS)
Bakurskiy, S. V.; Klenov, N. V.; Soloviev, I. I.; Bol'ginov, V. V.; Ryazanov, V. V.; Vernik, I. V.; Mukhanov, O. A.; Kupriyanov, M. Yu.; Golubov, A. A.
2013-05-01
Motivated by recent progress in the development of cryogenic memory compatible with single flux quantum (SFQ) circuits, we have performed a theoretical study of magnetic SIsFS Josephson junctions, where "S" is a bulk superconductor, "s" is a thin superconducting film, "F" is a metallic ferromagnet, and "I" is an insulator. We calculate the Josephson current as a function of s and F layers thickness, temperature, and exchange energy of F film. We outline several modes of operation of these junctions and demonstrate their unique ability to have large product of a critical current IC and a normal-state resistance RN in the π state, comparable to that in superconductor-insulator-superconductor tunnel junctions commonly used in SFQ circuits. We develop a model describing switching of the Josephson critical current in these devices by external magnetic field. The results are in good agreement with the experimental data for Nb-Al/AlOx-Nb-Pd0.99Fe0.01-Nb junctions.
Thermophotonic heat pump—a theoretical model and numerical simulations
NASA Astrophysics Data System (ADS)
Oksanen, Jani; Tulkki, Jukka
2010-05-01
We have recently proposed a solid state heat pump based on photon mediated heat transfer between two large-area light emitting diodes coupled by the electromagnetic field and enclosed in a semiconductor structure with a nearly homogeneous refractive index. Ideally the thermophotonic heat pump (THP) allows heat transfer at the Carnot efficiency, but in reality several factors limit the efficiency. The efficient operation of the THP is based on the following construction factors and operational characteristics: (1) broad-area semiconductor diodes to enable operation at optimal carrier density and high efficiency, (2) recycling of the energy of the emitted photons, (3) elimination of photon extraction losses by integrating the emitting and the absorbing diodes within a single semiconductor structure, and (4) elimination of reverse thermal conduction by a nanometer-scale vacuum layer between the diodes. In this paper we develop a theoretical model for the THP and study the fundamental physical limitations and potential of the concept. The results show that even when the most important losses of the THP are accounted for, the THP has the potential to outperform thermoelectric coolers, especially for heat transfer across large temperature differences, and possibly even to compete with conventional small-scale compressor-based heat pumps.
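"Heat transfer at Carnot efficiency" refers to the thermodynamic bound on any heat pump. As a quick illustrative check (the temperatures below are assumed values, not from the paper), the ideal coefficient of performance for pumping heat out of a cold reservoir is:

```python
# Carnot limit for a cooling heat pump: COP = T_cold / (T_hot - T_cold).
# Any real device, including a THP, can only approach this bound from below.

def carnot_cop_cooling(t_cold, t_hot):
    """Ideal cooling COP between reservoir temperatures in kelvin."""
    return t_cold / (t_hot - t_cold)

print(carnot_cop_cooling(260.0, 300.0))  # → 6.5
print(carnot_cop_cooling(150.0, 300.0))  # → 1.0
```

Note how the bound collapses as the temperature difference grows, which is why the large-ΔT regime mentioned above is the hardest one for any cooler.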
Membranes and theoretical modeling of membrane distillation: a review.
Khayet, Mohamed
2011-05-11
Membrane distillation (MD) is one of the non-isothermal membrane separation processes used in applications such as desalination, environmental/waste cleanup, and food processing. It has been known since 1963, yet it is still being developed at the laboratory stage for different purposes and is not fully implemented in industry. An abrupt increase in the number of papers on MD membrane engineering (i.e. design, fabrication and testing in MD) has occurred over only the past 6 years. The present paper offers a comprehensive MD state-of-the-art review covering a wide range of commercial membranes, MD membrane engineering, their MD performance, transport mechanisms, experimental and theoretical modeling of different MD configurations as well as recent developments in MD. Improved MD membranes with specific morphology and micro- and nano-structures are in high demand. Membranes with different pore sizes, porosities, thicknesses and materials as well as novel structures are required in order to carry out systematic MD studies for a better understanding of mass transport in different MD configurations, thereby improving MD performance and moving toward MD industrialization.
Strengthening Theoretical Testing in Criminology Using Agent-based Modeling
Groff, Elizabeth R.
2014-01-01
Objectives: The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity—agent-based computational modeling—that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Method: Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Results: Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Conclusion: Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs—not without its own issues—may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification. PMID:25419001
Theoretical models for the polarization of astronomical masers
NASA Astrophysics Data System (ADS)
Western, L. R.
Theoretical models for the creation of linear polarization in astronomical masers are developed. Equations are obtained that describe the transfer of linearly polarized radiation in two- and three-dimensional astronomical masers. The transfer equations presented here include both polarization and intersecting maser rays. The transfer equations are integrated to find the intensity of radiation emitted by spheres, spherical shells and thin disks. The calculations show that apparent sizes due to beaming are still quite small and comparable to those obtained using the scalar molecular approximation. Long tails and substantial differences between the two linear polarizations occur in the angular distributions calculated here, especially for disk-like geometries, due to the effect of individual magnetic substates. Further calculations show that small anisotropies (approximately 10%) in the excitation can lead to very high linear polarization (approximately 90%) of the radiation from saturated astronomical masers. Separate calculations are performed for the transfer of the vibrational radiation of molecular SiO through a spherical gas shell. A major result is that magnetic fields alone cannot account for the high linear polarization observed for the SiO v = 1, J = 2-1 astronomical maser.
Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael
2014-05-01
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics, which reflect deficits in the employed force models. Following the analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit at an altitude of approximately 510 km. Owing to this geometry, the Sun illuminates the satellite almost constantly, which causes strong across-track accelerations in the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of
Theoretical modeling for radiofrequency ablation: state-of-the-art and challenges for the future
Berjano, Enrique J
2006-01-01
Radiofrequency ablation is an interventional technique that in recent years has come to be employed in very different medical fields, such as the elimination of cardiac arrhythmias or the destruction of tumors in different locations. In order to investigate and develop new techniques, and also to improve those currently employed, theoretical models and computer simulations are a powerful tool since they provide vital information on the electrical and thermal behavior of ablation rapidly and at low cost. In the future they could even help to plan individual treatment for each patient. This review analyzes the state-of-the-art in theoretical modeling as applied to the study of radiofrequency ablation techniques. Firstly, it describes the most important issues involved in this methodology, including the experimental validation. Secondly, it points out the present limitations, especially those related to the lack of an accurate characterization of the biological tissues. After analyzing the current and future benefits of this technique it finally suggests future lines and trends in the research of this area. PMID:16620380
NASA Astrophysics Data System (ADS)
Yang, H.-Y. Karen; Sutter, P. M.; Ricker, Paul M.
2012-12-01
Cosmological constraints derived from galaxy clusters rely on accurate predictions of cluster observable properties, in which feedback from active galactic nuclei (AGN) is a critical component. In order to model the physical effects due to supermassive black holes (SMBH) on cosmological scales, subgrid modelling is required, and a variety of implementations have been developed in the literature. However, theoretical uncertainties due to model and parameter variations are not yet well understood, limiting the predictive power of simulations including AGN feedback. By performing a detailed parameter-sensitivity study in a single cluster using several commonly adopted AGN accretion and feedback models with FLASH, we quantify the model uncertainties in predictions of cluster integrated properties. We find that quantities that are more sensitive to gas density have larger uncertainties (~20 per cent for Mgas and a factor of ~2 for LX at R500), whereas TX, YSZ and YX are more robust (~10-20 per cent at R500). To make predictions beyond this level of accuracy would require more constraints on the most relevant parameters: the accretion model, mechanical heating efficiency and size of feedback region. By studying the impact of AGN feedback on the scaling relations, we find that an anti-correlation exists between Mgas and TX, which is another reason why YSZ and YX are excellent mass proxies. This anti-correlation also implies that AGN feedback is likely to be an important source of intrinsic scatter in the Mgas-TX and LX-TX relations.
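For concreteness, one widely used SMBH accretion prescription in such subgrid schemes is Bondi-Hoyle accretion, often multiplied by a boost factor because simulations underresolve the Bondi radius. The snippet below evaluates the textbook Bondi rate; the black-hole mass, gas density, and sound speed are illustrative assumptions, not values from the paper.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30      # solar mass, kg

def bondi_mdot(m_bh_msun, rho, cs, alpha=1.0):
    """Bondi accretion rate (kg/s): alpha * 4*pi*G^2*M^2*rho / cs^3.

    rho is the ambient gas density (kg/m^3), cs the sound speed (m/s),
    and alpha an optional subgrid boost factor.
    """
    m = m_bh_msun * MSUN
    return alpha * 4.0 * math.pi * G * G * m * m * rho / cs**3

# Illustrative numbers: a 1e9 Msun SMBH in hot intracluster gas
# (n_e ~ 0.01 cm^-3 -> rho ~ 1.7e-23 kg/m^3, kT ~ 5 keV -> cs ~ 1.1e6 m/s).
mdot = bondi_mdot(1.0e9, 1.7e-23, 1.1e6)
print(mdot * 3.15e7 / MSUN)  # rate in Msun/yr, of order 1e-5 for these numbers
```

The strong M² and cs⁻³ dependences are one reason the choice of accretion model and of the region over which rho and cs are measured dominates the parameter sensitivity discussed above.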
ERIC Educational Resources Information Center
Markon, Kristian E.; Krueger, Robert F.
2006-01-01
Distinguishing between discrete and continuous latent variable distributions has become increasingly important in numerous domains of behavioral science. Here, the authors explore an information-theoretic approach to latent distribution modeling, in which the ability of latent distribution models to represent statistical information in observed…
Dai, Daoxin; He, Sailing
2004-12-01
An accurate two-dimensional (2D) model is introduced for the simulation of an arrayed-waveguide grating (AWG) demultiplexer by integrating the field distribution along the vertical direction. The equivalent 2D model has almost the same accuracy as the original three-dimensional model and is more accurate for the AWG considered here than the conventional 2D model based on the effective-index method. To further improve the computational efficiency, the reciprocity theorem is applied to the optimal design of a flat-top AWG demultiplexer with a special input structure.
Stable, accurate and efficient computation of normal modes for horizontal stratified models
NASA Astrophysics Data System (ADS)
Wu, Bo; Chen, Xiaofei
2016-06-01
We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in their calculation, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions", which we herein call "adaptive mode observers", is thus naturally introduced to implement this strategy; the underlying idea, distinctly noted here for the first time, may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and the low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, without excessive calculation, both no loss and high precision for all physically existent modes. Finally, the conventional definition of the fundamental mode is reconsidered, as entailed by the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation by using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.
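The grid-scan-plus-refinement step on which such root-determining strategies are built can be sketched generically. The secular function below is a toy one-layer Love-wave dispersion relation standing in for the generalized reflection/transmission formulation, and the final residual test is a crude stand-in for the paper's adaptive mode observers (it rejects sign changes at tangent poles rather than at true roots); all names and parameters are illustrative.

```python
import math

def secular(c, omega=2*math.pi, h=1.0, b1=1.0, b2=2.0):
    """Toy Love-wave secular function for phase velocities b1 < c < b2."""
    k = omega / c
    nu1 = math.sqrt(c*c/(b1*b1) - 1.0)   # vertical wavenumber factor in the layer
    nu2 = math.sqrt(1.0 - c*c/(b2*b2))   # decay factor in the half-space
    return math.tan(k*h*nu1) - nu2/nu1   # zeros = modal phase velocities

def find_modes(f, lo, hi, n=2000, tol=1e-10):
    """Bracket sign changes of f on a grid, refine by bisection, reject poles."""
    roots = []
    step = (hi - lo) / n
    a, fa = lo, f(lo)
    for i in range(1, n + 1):
        b = lo + i * step
        fb = f(b)
        if fa * fb < 0.0:
            x, y = a, b
            while y - x > tol:
                m = 0.5 * (x + y)
                if f(x) * f(m) <= 0.0:
                    y = m
                else:
                    x = m
            r = 0.5 * (x + y)
            if abs(f(r)) < 1.0:   # true root: residual small; tan() pole: huge
                roots.append(r)
        a, fa = b, fb
    return roots

print(find_modes(secular, 1.0001, 1.9999))  # fundamental + first higher mode
```

For this toy dispersion relation two modes exist in the scanned velocity window; a real implementation would additionally adapt the grid density near low-velocity layers, which is the essence of the adaptive strategy described above.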
NASA Astrophysics Data System (ADS)
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and no generic implementation is available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms, which calibrator stars, and which errors on their diameters enter the processing, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
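A minimal sketch of the bootstrap idea, with a fabricated toy data set standing in for real interferograms; the data-generating step, sample sizes, and names are illustrative assumptions, not the PIONIER pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy raw data: per-interferogram squared-visibility estimates for a science
# target and a calibrator star (values here are synthetic placeholders).
science = rng.normal(0.62, 0.05, size=200)   # raw science estimates
calib = rng.normal(0.90, 0.04, size=150)     # raw calibrator estimates

def calibrated_v2(sci, cal):
    # Transfer function = mean calibrator response. Calibration is a ratio,
    # so the calibrated observable is generally non-Gaussian even for
    # Gaussian inputs, which is the point made in the abstract.
    return np.mean(sci) / np.mean(cal)

# Bootstrap: resample (with replacement) which interferograms and which
# calibrator frames enter each realisation, redoing the full processing.
samples = np.array([
    calibrated_v2(rng.choice(science, science.size),
                  rng.choice(calib, calib.size))
    for _ in range(2000)
])

# `samples` is a Monte Carlo draw from p(O); summarise or store it directly.
print(samples.mean(), samples.std())
```

A full implementation would also resample calibrator diameters within their quoted errors and keep all observables jointly, so that correlations between them are preserved in the sampled p(O).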
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple and efficient approach is presented to conduct estimations of the cosmic ray induced failure rate for high voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by reliability, resulting in a better and overall more efficient design methodology. Starting from the experimental and theoretical background established a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to be combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
nighttime to well mixed conditions during the day presents a big challenge to NWP models. Fast decrease and successive increase in hub-height wind speed after sunrise, and the formation of nocturnal low level jets will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3d case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV-developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
Watson, Charles M; Francis, Gamal R
2015-07-01
Hollow copper models painted to match the reflectance of the animal subject are standard in thermal ecology research. While the copper electroplating process results in accurate models, it is relatively time consuming, uses caustic chemicals, and the models are often anatomically imprecise. Although the decreasing cost of 3D printing can potentially allow the reproduction of highly accurate models, the thermal performance of 3D printed models has not been evaluated. We compared the cost, accuracy, and performance of copper and 3D printed lizard models and found that the performance of the models was statistically identical in both open and closed habitats. We also find that 3D models are more standardized, lighter, more durable, and less expensive than the copper electroformed models. PMID:25965016
Doinikov, Alexander A; Bouakaz, Ayache
2015-10-01
A theoretical model is developed that describes nonlinear spherical pulsations and translational motions of two interacting bubbles at arbitrary separation distances between the bubbles. The derivation of the model is based on the multipole expansion of the bubble velocity potentials and the use of the Lagrangian formalism. The model consists of four coupled ordinary differential equations. Two of them are modified Rayleigh-Plesset equations for the radial oscillations of the bubbles and the other two describe the translational displacement of the bubble centers. The equations are not subject to the assumption that the distance between the bubbles is large compared to the bubble radii and hence make it possible to simulate the bubble dynamics starting from large separation distances up to contact between the bubbles providing that the deviation of the bubble shape from sphericity is negligible. Numerical simulations are carried out to demonstrate the capabilities of the developed model. It is shown that the correct modeling of the translational dynamics of the bubbles at small separation distances requires terms accurate up to ninth order in the inverse separation distance. Physical mechanisms are analyzed that lead to the change of the direction of the relative translational motion of the bubbles in finite-amplitude acoustic fields.
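As a hedged illustration of the radial dynamics only, the snippet below integrates the single-bubble (infinite-separation) limit, i.e. a plain Rayleigh-Plesset equation with polytropic gas, surface tension, and viscosity, using classical RK4; the paper's actual model adds the second bubble, the translational equations of motion, and the inverse-separation coupling terms. All parameter values are illustrative.

```python
import math

# Water-like liquid and a 2-micron air bubble under a 1 MHz, 20 kPa drive
# (illustrative values, not from the paper).
rho, p0, sigma, mu, kappa = 1000.0, 101325.0, 0.072, 1.0e-3, 1.4
R0 = 2.0e-6                  # equilibrium radius (m)
pa, freq = 2.0e4, 1.0e6      # acoustic drive amplitude (Pa) and frequency (Hz)

def accel(t, R, Rdot):
    """Rayleigh-Plesset: R*Rddot + 1.5*Rdot^2 = (p_wall - p_inf) / rho."""
    pg = (p0 + 2*sigma/R0) * (R0/R)**(3*kappa)    # polytropic gas pressure
    p_wall = pg - 2*sigma/R - 4*mu*Rdot/R         # liquid pressure at the wall
    p_inf = p0 + pa*math.sin(2*math.pi*freq*t)    # driving pressure far away
    return ((p_wall - p_inf)/rho - 1.5*Rdot*Rdot) / R

def simulate(t_end=5.0e-6, dt=1.0e-9):
    t, R, V = 0.0, R0, 0.0
    radii = []
    while t < t_end:
        # classical fourth-order Runge-Kutta step for the state (R, Rdot)
        k1r, k1v = V,            accel(t,        R,            V)
        k2r, k2v = V+0.5*dt*k1v, accel(t+0.5*dt, R+0.5*dt*k1r, V+0.5*dt*k1v)
        k3r, k3v = V+0.5*dt*k2v, accel(t+0.5*dt, R+0.5*dt*k2r, V+0.5*dt*k2v)
        k4r, k4v = V+dt*k3v,     accel(t+dt,     R+dt*k3r,     V+dt*k3v)
        R += dt*(k1r + 2*k2r + 2*k3r + k4r)/6
        V += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
        radii.append(R)
    return radii

radii = simulate()
print(min(radii)/R0, max(radii)/R0)   # mild sub-resonant oscillation about R0
```

Extending this sketch toward the paper's model would add a second radius, two translational coordinates, and coupling terms in inverse powers of the separation distance, which at small separations must be carried (per the abstract) up to ninth order.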
Towards accurate kinetic modeling of prompt NO formation in hydrocarbon flames via the NCN pathway
Sutton, Jeffrey A.; Fleming, James W.
2008-08-15
A basic kinetic mechanism is presented for the first time that can predict the appropriate prompt-NO precursor NCN, as shown by experiment, with reasonable accuracy, while still producing postflame NO results as accurate as or more accurate than those obtained through the former HCN pathway. The basic NCN submechanism should be a starting point for future refinement of NCN kinetics and prompt-NO formation.
Myint, P. C.; Hao, Y.; Firoozabadi, A.
2015-03-27
Thermodynamic property calculations of mixtures containing carbon dioxide (CO_{2}) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO_{2} activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO_{2}, pure water, and both CO_{2}-rich and aqueous (H_{2}O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO_{2}. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H_{2}O-CO_{2}-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.
Daegling, D J; Hylander, W L
2000-08-01
Experimental studies and mathematical models are disparate approaches for inferring the stress and strain environment in mammalian jaws. Experimental designs offer accurate, although limited, characterization of biomechanical behavior, while mathematical approaches (finite element modeling in particular) offer unparalleled precision in depiction of strain magnitudes, directions, and gradients throughout the mandible. Because the empirical (experimental) and theoretical (mathematical) perspectives differ in their initial assumptions and their proximate goals, the two methods can yield divergent conclusions about how masticatory stresses are distributed in the dentary. These different sources of inference may, therefore, tangibly influence subsequent biological interpretation. In vitro observation of bone strain in primate mandibles under controlled loading conditions offers a test of finite element model predictions. Two issues which have been addressed by both finite element models and experimental approaches are: (1) the distribution of torsional shear strains in anthropoid jaws and (2) the dissipation of bite forces in the human alveolar process. Not surprisingly, the experimental data and mathematical models agree on some issues, but on others exhibit discordance. Achieving congruence between these methods is critical if the nature of the relationship of masticatory stress to mandibular form is to be intelligently assessed. A case study of functional/mechanical significance of gnathic morphology in the hominid genus Paranthropus offers insight into the potential benefit of combining theoretical and experimental approaches. Certain finite element analyses claim to have identified a biomechanical problem unrecognized in previous comparative work, which, in essence, is that the enlarged transverse dimensions of the postcanine corpus may have a less important role in resisting torsional stresses than previously thought. Experimental data have identified
A theoretical model of grainsize evolution during deformation
NASA Astrophysics Data System (ADS)
Ricard, Y.; Bercovici, D.; Rozel, A.
2007-12-01
Lithospheric shear localization, as occurs in the formation of tectonic plate boundaries, is often associated with diminished grainsize (e.g., mylonites). Grainsize reduction is typically attributed to dynamic recrystallization; however, theoretical models of shear-localization arising from this hypothesis are problematic since (1) they require the simultaneous action of two exclusive creep mechanisms (diffusion and dislocation creep), and (2) the grain-growth ("healing") laws employed by these models are derived from static grain-growth or coarsening theory, although the shear-localization setting itself is far from static equilibrium. We present a new first-principles grained-continuum theory which accounts for both coarsening and damage-induced grainsize reduction. Damage per se is the generic process for generation of microcracks, defects, dislocations (including recrystallization), subgrains, nuclei and cataclastic breakdown of grains. The theory contains coupled statistical grain-scale and continuum macroscopic components. The grain-scale element of the theory prescribes both the evolution of the grainsize distribution and a phenomenological grain-growth law derived from non-equilibrium thermodynamics; grain growth thus incorporates the free energy differences between grains, including both grain-boundary surface energy (which controls coarsening) and the contribution of deformational work to these free energies. Conservation laws and positivity of entropy production provide the phenomenological form of the statistical grain-growth law. We identify four potential mechanisms that affect the distribution of grainsize; two of them conserve the number of grains but change their relative masses, and two of them change the number of grains by sticking them together or breaking them. In the limit of static equilibrium, only the two mechanisms that increase the average grainsize are allowed by the second law of thermodynamics. The first one is a diffusive mass transport
A theoretical microbial contamination model for a human Mars mission
NASA Astrophysics Data System (ADS)
Lupisella, Mark Lewis
Contamination from a human presence on Mars could significantly compromise the search for extraterrestrial life. In particular, the difficulties in controlling microbial contamination, the potential for terrestrial microbes to grow, evolve, compete, and modify the Martian environment, and the likely microbial nature of putative Martian life, make microbial contamination worthy of focus as we begin to plan for a human mission to Mars. This dissertation describes a relatively simple theoretical model that can be used to explore how microbial contamination from a human Mars mission might survive and grow in the Martian soil environment surrounding a habitat. A user interface has been developed to allow a general practitioner to choose values and functions for almost all parameters ranging from the number of astronauts to the half-saturation constants for microbial growth. Systematic deviations from a baseline set of parameter values are explored as potential plausible scenarios for the first human Mars missions. The total viable population and population density are the primary state variables of interest, but other variables such as the total number of births and total dead and viable microbes are also tracked. The general approach was to find the most plausible parameter value combinations that produced a population density of 1 microbe/cm3 or greater, a threshold that was used to categorize the more noteworthy populations for subsequent analysis. Preliminary assessments indicate that terrestrial microbial contamination resulting from leakage from a limited human mission (perhaps lasting up to 5 months) will not likely become a problematic population in the near-term as long as reasonable contamination control measures are implemented (for example, a habitat leak rate no greater than 1% per hour). However, there appear to be plausible, albeit unlikely, scenarios that could cause problematic populations, depending in part on (a) the initial survival fraction and
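The kind of habitat-leakage model described above can be sketched in a few lines: a constant leak of microbes into the surrounding soil, Monod (half-saturation) growth on a limiting substrate, and a first-order death rate. All parameter values below are hypothetical placeholders, not the dissertation's baseline values; the real model exposes many more parameters through its user interface.

```python
# Minimal habitat-leakage / Monod-growth sketch of a microbial
# contamination model. All parameter values are hypothetical.

def simulate(hours, leak_per_hour=1e4, mu_max=0.01, Ks=1.0,
             substrate=0.05, death_rate=0.005, volume_cm3=1e8):
    """Track the viable population around a habitat over `hours` time steps."""
    population = 0.0
    births = deaths = 0.0
    for _ in range(hours):
        growth = mu_max * substrate / (Ks + substrate) * population  # Monod term
        dying = death_rate * population
        population += leak_per_hour + growth - dying
        births += growth
        deaths += dying
    return population, population / volume_cm3, births, deaths

pop, density, births, deaths = simulate(hours=5 * 30 * 24)  # ~5 months
# A density >= 1 microbe/cm^3 would flag a "noteworthy" population;
# with these illustrative numbers the steady-state density stays well below that.
```

With a death rate exceeding the effective growth rate, the population approaches the steady state leak_per_hour / (death_rate - growth_rate), which is the qualitative regime the abstract describes for a well-controlled habitat.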
Presenting a Theoretical Model of Four Conceptions of Civic Education
ERIC Educational Resources Information Center
Cohen, Aviv
2010-01-01
This conceptual study will question the ways different epistemological conceptions of citizenship and education influence the characteristics of civic education. While offering a new theoretical framework, the different undercurrent conceptions that lay at the base of the civic education process shall be brought forth. With the use of the method…
Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis
Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.
2015-07-28
The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculation of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimates of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.
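The basic link between the work of adhesion and the contact angle used in such analyses is the classical Young-Dupré relation, W_SL = γ_LV (1 + cos θ). The paper's mean-field model adds entropic corrections; this sketch inverts only the bare relation, with an illustrative water surface tension.

```python
import math

# Young-Dupre relation linking the work of adhesion W_SL to the contact angle:
#   W_SL = gamma_LV * (1 + cos(theta))
# This is the textbook relation only; the paper's mean-field model is richer.

GAMMA_LV = 0.0728  # J/m^2, water at ~20 C

def contact_angle_deg(work_of_adhesion):
    """Invert Young-Dupre for the macroscopic contact angle in degrees."""
    cos_theta = work_of_adhesion / GAMMA_LV - 1.0
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp numerical overshoot
    return math.degrees(math.acos(cos_theta))

# A weakly adhering surface (hypothetical W_SL = 0.05 J/m^2) gives an obtuse angle:
theta = contact_angle_deg(0.05)
```

Larger works of adhesion drive cos θ toward 1 (complete wetting), which is why calibrating the solid-liquid interaction potential against a known contact angle fixes the adhesion scale.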
Theoretical Model to Explain Excess of Quasiparticles in Superconductors.
Bespalov, Anton; Houzet, Manuel; Meyer, Julia S; Nazarov, Yuli V
2016-09-01
Experimentally, the concentration of quasiparticles in gapped superconductors always largely exceeds the equilibrium one at low temperatures. Since these quasiparticles are detrimental for many applications, it is important to understand theoretically the origin of the excess. We demonstrate in detail that the dynamics of quasiparticles localized at spatial fluctuations of the gap edge becomes exponentially slow. This gives rise to the observed excess in the presence of a vanishingly weak nonequilibrium agent. PMID:27661716
The Synthesis of a Theoretical Model of Student Attrition.
ERIC Educational Resources Information Center
Bean, John P.
Models that have appeared in the student attrition literature in the past decade and behavioral models from the social sciences that may help explain the dropout process are examined, and an attempt is made to synthesize a causal model of student attrition. The models of Tinto, Spady, and Rootman in the area of student attrition, and models of…
A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.
ERIC Educational Resources Information Center
Chambers, Jay G.
This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…
Improving Mathematics Instruction through Lesson Study: A Theoretical Model and North American Case
ERIC Educational Resources Information Center
Lewis, Catherine C.; Perry, Rebecca R.; Hurd, Jacqueline
2009-01-01
This article presents a theoretical model of lesson study, an approach to instructional improvement that originated in Japan. The theoretical model includes four lesson study features (investigation, planning, research lesson, and reflection) and three pathways through which lesson study improves instruction: changes in teachers' knowledge and…
ERIC Educational Resources Information Center
Dziedziewicz, Dorota; Karwowski, Maciej
2015-01-01
This paper presents a new theoretical model of creative imagination and its applications in early education. The model sees creative imagination as composed of three inter-related components: vividness of images, their originality, and the level of transformation of imageries. We explore the theoretical and practical consequences of this new…
A theoretical model of phase changes of a klystron due to variation of operating parameters
NASA Technical Reports Server (NTRS)
Kupiszewski, A.
1980-01-01
A mathematical model for phase changes of the VA-876 CW klystron amplifier output is presented and variations of several operating parameters are considered. The theoretical approach to the problem is based upon a gridded gap modeling with inclusion of a second order correction term so that actual gap geometry is reflected in the formulation. Physical measurements are contrasted to theoretical calculations.
NASA Astrophysics Data System (ADS)
Seino, Junji; Tarumi, Moto; Nakai, Hiromi
2014-01-01
This Letter proposes an accurate scheme using frozen core orbitals, called the frozen core potential (FCP) method, to theoretically connect model potential calculations to all-electron (AE) ones. The present scheme is based on the Huzinaga-Cantu equation combined with spin-free relativistic Douglas-Kroll-Hess Hamiltonians. The local unitary transformation scheme for efficiently constructing the Hamiltonian produces a seamless extension to the FCP method in a relativistic framework. Numerical applications to coinage diatomic molecules illustrate the high accuracy of this FCP method, as compared to AE calculations. Furthermore, the efficiency of the FCP method is also confirmed by these calculations.
A theoretical model of barriers having inhomogeneous impedance surfaces.
Wang, Xu; Wang, Xiaonan; Yu, Wuzhou; Jiang, Zaixiu; Mao, Dongxing
2016-03-01
When barriers are placed in parallel on opposite sides of a source, their performance deteriorates markedly. However, barriers made from materials of inhomogeneous impedance eliminate this drawback by altering the behavior of sound as it undergoes multiple reflections between the barriers. In this paper, a theoretical approach is carried out to estimate the performance of the proposed barriers. By combining the ray-tracing method and sound diffraction theory, the existence of different ray paths between the proposed barriers is revealed. Compared to conventional rigid-walled barriers, barriers having inhomogeneous surfaces may have the potential to be widely used in environmental noise control. PMID:27036289
NASA Technical Reports Server (NTRS)
Raj, Sai V.
2011-01-01
Establishing the geometry of foam cells is useful in developing microstructure-based acoustic and structural models. Since experimental data on the geometry of the foam cells are limited, most modeling efforts use an idealized three-dimensional, space-filling Kelvin tetrakaidecahedron. The validity of this assumption is investigated in the present paper. Several FeCrAlY foams with relative densities varying between 3 and 15% and cells per mm (c.p.mm.) varying between 0.2 and 3.9 c.p.mm. were microstructurally evaluated. The number of edges per face for each foam specimen was counted by approximating the cell faces by regular polygons, where the number of cell faces measured varied between 207 and 745. The present observations revealed that 50-57% of the cell faces were pentagonal while 24-28% were quadrilateral and 15-22% were hexagonal. The present measurements are shown to be in excellent agreement with literature data. It is demonstrated that the Kelvin model, as well as other proposed theoretical models, cannot accurately describe the FeCrAlY foam cell structure. Instead, it is suggested that the ideal foam cell geometry consists of 11 faces, with 3 quadrilateral, 6 pentagonal, and 2 hexagonal faces, consistent with the 3-6-2 Matzke cell.
NASA Technical Reports Server (NTRS)
Raj, S. V.
2010-01-01
Establishing the geometry of foam cells is useful in developing microstructure-based acoustic and structural models. Since experimental data on the geometry of the foam cells are limited, most modeling efforts use the three-dimensional, space-filling Kelvin tetrakaidecahedron. The validity of this assumption is investigated in the present paper. Several FeCrAlY foams with relative densities varying between 3 and 15 percent and cells per mm (c.p.mm.) varying between 0.2 and 3.9 c.p.mm. were microstructurally evaluated. The number of edges per face for each foam specimen was counted by approximating the cell faces by regular polygons, where the number of cell faces measured varied between 207 and 745. The present observations revealed that 50 to 57 percent of the cell faces were pentagonal while 24 to 28 percent were quadrilateral and 15 to 22 percent were hexagonal. The present measurements are shown to be in excellent agreement with literature data. It is demonstrated that the Kelvin model, as well as other proposed theoretical models, cannot accurately describe the FeCrAlY foam cell structure. Instead, it is suggested that the ideal foam cell geometry consists of 11 faces, with 3 quadrilateral, 6 pentagonal, and 2 hexagonal faces, consistent with the 3-6-2 cell.
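The proposed 3-6-2 cell can be checked for topological consistency with Euler's polyhedron formula, V − E + F = 2, assuming trivalent vertices (three edges meeting at every vertex, as in dry foams). The check below is our illustration, not part of the papers:

```python
# Consistency check of the 3-6-2 Matzke cell (3 quadrilateral, 6 pentagonal,
# 2 hexagonal faces) against Euler's formula V - E + F = 2, assuming
# trivalent vertices (3V = 2E), the generic case for foam cells.

faces = {4: 3, 5: 6, 6: 2}                # polygon size -> face count
F = sum(faces.values())                    # 11 faces
edge_incidences = sum(n * c for n, c in faces.items())  # each edge shared by 2 faces
E = edge_incidences // 2                   # 27 edges
V = 2 * E // 3                             # 18 vertices from 3V = 2E
euler = V - E + F                          # 2 for a simple polyhedron
avg_edges_per_face = edge_incidences / F   # ~4.9, matching pentagon dominance
```

The average of ~4.9 edges per face is consistent with the measured predominance of pentagonal faces reported above.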
NASA Astrophysics Data System (ADS)
Speranskiy, Kirill; Kurnikova, Maria
2004-07-01
We propose a hierarchical approach to model vibrational frequencies of a ligand in a strongly fluctuating inhomogeneous environment such as a liquid solution or when bound to a macromolecule, e.g., a protein. Vibrational frequencies typically measured experimentally are ensemble averaged quantities which result (in part) from the influence of the strongly fluctuating solvent. Solvent fluctuations can be sampled effectively by a classical molecular simulation, which in our model serves as the first, lower level of the hierarchy. At the second, higher level of the hierarchy a small subset of system coordinates is used to construct a patch of the potential surface (ab initio) relevant to the vibration in question. This subset of coordinates is under the influence of an instantaneous external force exerted by the environment. The force is calculated at the lower level of the hierarchy. The proposed methodology is applied to model vibrational frequencies of a glutamate in water and when bound to the Glutamate receptor protein and its mutant. Our results are in close agreement with the experimental values and frequency shifts measured by the Jayaraman group by Fourier transform infrared spectroscopy [Q. Cheng et al., Biochem. 41, 1602 (2002)]. Our methodology proved useful in successfully reproducing vibrational frequencies of a ligand in such a soft, flexible, and strongly inhomogeneous protein as the Glutamate receptor.
Theoretical modelling of exchange interactions in metal-phthalocyanines
NASA Astrophysics Data System (ADS)
Wu, Wei; Fisher, Andrew; Harrison, Nic; Serri, Michele; Wu, Zhenlin; Heutz, Sandrine; Jones, Tim; Aeppli, Gabriel
2012-02-01
The theoretical understanding of exchange interactions in organics provides a key foundation for quantum molecular magnetism. Recent SQUID magnetometry of a well-known organic semiconductor, copper-phthalocyanine [1,2] (CuPc), shows that it forms quasi-one-dimensional spin chains. A Green's function perturbation theory calculation [3] is used to find the dominant exchange mechanism. Hybrid density functional theory simulations [4] give quantitative insight into exchange interactions and electronic structures. Both calculations are performed for different stacking and sliding angles for lithium-Pc, cobalt-Pc, chromium-Pc, and copper-Pc. The exchange interactions depend strongly on stacking angles, but weakly on sliding angles. Our results qualitatively agree with the experiments, and remarkably α-cobalt-Pc has a very large exchange interaction above liquid-nitrogen temperature. Our theoretical predictions on the exchange interactions can guide experimentalists to design novel organic semiconductors. [0pt] [1] S. Heutz, et. al., Adv. Mat., 19, 3618 (2007) [2] Hai Wang, et. al., ACS Nano, 4, 3921 (2010) [3] Wei Wu, et. al., Phys. Rev. B 77, 184403 (2008) [4] Wei Wu, et. al., Phys. Rev. B 84, 024427 (2011)
Information-theoretic model comparison unifies saliency metrics
Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias
2015-01-01
Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is that different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
Information-theoretic model comparison unifies saliency metrics.
Kümmerer, Matthias; Wallis, Thomas S A; Bethge, Matthias
2015-12-29
Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed "saliency" prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is that different metrics and models use different definitions of what a "saliency map" entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use.
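The information-gain evaluation described above can be sketched as follows: express each model as a probability distribution over image pixels and measure the average log-likelihood advantage (in bits per fixation) of the model over a baseline distribution at the observed fixation locations. The arrays and fixations below are toy illustrations, not the paper's benchmark data.

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Average log2-likelihood advantage (bits/fixation) of a probabilistic
    saliency model over a baseline, evaluated at fixated pixels."""
    rows, cols = zip(*fixations)
    lp_model = np.log2(model_density[rows, cols])
    lp_base = np.log2(baseline_density[rows, cols])
    return float(np.mean(lp_model - lp_base))

# Toy 2x2 "image": the model concentrates probability mass where fixations land.
model = np.array([[0.7, 0.1], [0.1, 0.1]])
baseline = np.full((2, 2), 0.25)          # uniform baseline
fixs = [(0, 0), (0, 0), (1, 1)]
ig = information_gain(model, baseline, fixs)  # positive: model beats baseline
```

Framing every model this way is what removes the metric disagreements: center bias and blurring become explicit, jointly optimized parameters of the density rather than hidden assumptions of each metric.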
Dunn, Nicholas J. H.; Noid, W. G.
2015-12-28
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.
NASA Astrophysics Data System (ADS)
Dunn, Nicholas J. H.; Noid, W. G.
2015-12-01
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.
College Students Solving Chemistry Problems: A Theoretical Model of Expertise
ERIC Educational Resources Information Center
Taasoobshirazi, Gita; Glynn, Shawn M.
2009-01-01
A model of expertise in chemistry problem solving was tested on undergraduate science majors enrolled in a chemistry course. The model was based on Anderson's "Adaptive Control of Thought-Rational" (ACT-R) theory. The model shows how conceptualization, self-efficacy, and strategy interact and contribute to the successful solution of quantitative,…
ERIC Educational Resources Information Center
Kim, Young Rae
2013-01-01
A theoretical model of metacognition in complex modeling activities has been developed based on existing frameworks, by synthesizing the re-conceptualization of metacognition at multiple levels by looking at the three sources that trigger metacognition. Using the theoretical model as a framework, this study was designed to explore how students'…
[Nursing practice based on theoretical models: a qualitative study of nurses' perception].
Amaducci, Giovanna; Iemmi, Marina; Prandi, Marzia; Saffioti, Angelina; Carpanoni, Marika; Mecugni, Daniela
2013-01-01
Many faculty argue that theory and theorizing are closely related to clinical practice, that disciplinary knowledge grows most relevantly from the specific care context in which it takes place, and, moreover, that knowledge does not proceed only by applying the general principles of grand theories to specific cases. Every nurse, in fact, has a mental model, of which he or she may or may not be aware, that motivates and substantiates every action and career choice. The study describes what the nursing theoretical model is, together with the mental model and the tacit knowledge underlying it. It identifies the explicit theoretical model of the professional group that the participating nurses represent, and aspects of continuity with the theoretical model proposed by this degree course in Nursing. Methods: Four focus groups were held, attended by a total of 22 nurses representing almost every unit of the Reggio Emilia Hospital. We argue that the theoretical nursing model of each professional group is the result of tacit knowledge, which helps to define the personal mental model, and of the explicit theoretical model, whose underlying theoretical content is learned, applied consciously, and fed back to and from nursing practice. Reasoning on the use of theory in practice has allowed us to give visibility to an explicit theoretical nursing model authentically oriented to the needs of the person, in all its complexity, in specific contexts.
Theoretical and experimental modeling of a rail gun accelerator
NASA Astrophysics Data System (ADS)
Zheleznyj, V. B.; Zagorskij, A. V.; Katsnel'Son, S. S.; Kudryavtsev, A. V.; Plekhanov, A. V.
1993-04-01
Results of a series of experiments in the acceleration of macrobodies are analyzed using an integral model of a current arc and a quasi-1D magnetic gasdynamic model. The integral model uses gasdynamic equations averaged over the size of a plasma pump and equations based on Kirchhoff's second law for the electrical current. The quasi-1D model is based on 1D magnetic gasdynamic equations for mean values of density, pressure, velocity, and internal energy. Electromagnetic parameters are determined from the Maxwell integral equations. It is concluded that the proposed models take into account the major mechanisms of momentum loss and are capable of adequately describing electromagnetic rail accelerators.
Surface electron density models for accurate ab initio molecular dynamics with electronic friction
NASA Astrophysics Data System (ADS)
Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.
2016-06-01
Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology to study the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its application becomes complicated in situations involving substantial surface-atom displacements, because the LDFA requires knowledge of the bare surface electron density at each integration step. In this work, we propose three different methods of calculating on-the-fly the electron density of the distorted surface and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in cases of large surface-atom displacements.
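The essential idea of the LDFA is that the friction coefficient felt by a projectile depends only on the surface electron density at its position. A minimal 1D Langevin sketch of that coupling, with a hypothetical exponential density profile standing in for the (frozen) surface density and all units and parameters purely illustrative:

```python
import math
import random

# 1D Langevin sketch in the spirit of the LDFA: the friction coefficient
# eta depends on the local (here: hypothetical, frozen) surface electron
# density rho(z). All numbers are illustrative, not from the paper.

def rho(z):
    return math.exp(-z)            # model density decaying away from the surface

def eta(z, eta0=2.0):
    return eta0 * rho(z)           # LDFA-style density-dependent friction

def run(z0=3.0, v0=-1.0, mass=1.0, dt=1e-3, steps=3000, kT=0.0):
    """Integrate m z'' = -eta(z) z' (+ thermal noise, off here) and
    accumulate the energy dissipated into e-h pair excitations."""
    z, v, dissipated = z0, v0, 0.0
    noise_rng = random.Random(0)
    for _ in range(steps):
        friction = -eta(z) * v
        noise = (math.sqrt(2 * eta(z) * kT / dt) * noise_rng.gauss(0, 1)
                 if kT else 0.0)
        v += (friction + noise) / mass * dt
        z += v * dt
        dissipated += eta(z) * v * v * dt   # energy lost to e-h pairs
    return z, v, dissipated

z, v, e_loss = run()   # projectile decelerates as it approaches the surface
```

A frozen density, as here, is exactly the approximation the paper moves beyond: once surface atoms displace significantly, rho(z) itself changes and the dissipated energy is misestimated.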
Accurate cortical tissue classification on MRI by modeling cortical folding patterns.
Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea
2015-09-01
Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding were regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses localized more accurate white matter-gray matter (GM) interface classification of the proposed framework compared to the other algorithms, particularly in central and occipital cortices that generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery.
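The per-parcel mean-shift step can be sketched as follows: each voxel intensity within a parcel climbs to the nearest mode of the parcel's intensity density, and the resulting modes map to tissue classes. This is a generic flat-kernel mean shift on synthetic intensities, not the paper's implementation; bandwidth, intensities, and the class threshold are illustrative.

```python
import numpy as np

def mean_shift_modes(intensities, bandwidth=10.0, iters=50):
    """Shift each intensity to the nearest density mode (flat kernel)."""
    x = intensities.astype(float)
    for _ in range(iters):
        for i, xi in enumerate(x):
            window = intensities[np.abs(intensities - xi) <= bandwidth]
            x[i] = window.mean()        # mean-shift update toward the local mode
    return x

# Hypothetical parcel: two tissue classes (GM ~ 60, WM ~ 120, arbitrary units).
rng = np.random.default_rng(0)
parcel = np.concatenate([rng.normal(60, 3, 100), rng.normal(120, 3, 100)])
modes = mean_shift_modes(parcel)
labels = (modes > 90).astype(int)       # modes cluster near 60 and 120
```

Running the clustering per parcel rather than on the whole volume is what makes the method robust to regional intensity variation: each parcel's modes adapt to its local GM/WM intensities.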
Accurate cortical tissue classification on MRI by modeling cortical folding patterns.
Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea
2015-09-01
Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding were regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses localized more accurate white matter-gray matter (GM) interface classification of the proposed framework compared to the other algorithms, particularly in central and occipital cortices that generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453
Psychosocial stress and prostate cancer: a theoretical model.
Ellison, G L; Coker, A L; Hebert, J R; Sanderson, S M; Royal, C D; Weinrich, S P
2001-01-01
African-American men are more likely to develop and die from prostate cancer than are European-American men; yet, factors responsible for the racial disparity in incidence and mortality have not been elucidated. Socioeconomic disadvantage is more prevalent among African-American than among European-American men. Socioeconomic disadvantage can lead to psychosocial stress and may be linked to negative lifestyle behaviors. Regardless of socioeconomic position, African-American men routinely experience racism-induced stress. We propose a theoretical framework for an association between psychosocial stress and prostate cancer. Within the context of history and culture, we further propose that psychosocial stress may partially explain the variable incidence of prostate cancer between these diverse groups. Psychosocial stress may negatively impact the immune system leaving the individual susceptible to malignancies. Behavioral responses to psychosocial stress are amenable to change. If psychosocial stress is found to negatively impact prostate cancer risk, interventions may be designed to modify reactions to environmental demands.
A graph theoretical perspective of a drug abuse epidemic model
NASA Astrophysics Data System (ADS)
Nyabadza, F.; Mukwembi, S.; Rodrigues, B. G.
2011-05-01
A drug use epidemic can be represented by a finite number of states and transition rules that govern the dynamics of drug use in each discrete time step. This paper investigates the spread of drug use in a community where some users are in treatment and others are not in treatment, citing South Africa as an example. In our analysis, we consider the neighbourhood prevalence of each individual, i.e., the proportion of the individual’s drug user contacts who are not in treatment amongst all of his or her contacts. We introduce parameters α∗, β∗ and γ∗, depending on the neighbourhood prevalence, which govern the spread of drug use. We examine how changes in α∗, β∗ and γ∗ affect the system dynamics. Simulations presented support the theoretical results.
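A toy discrete-time version of such a network model can be sketched with simple transition rules: a susceptible individual starts using with probability proportional to the neighbourhood prevalence of untreated users, and untreated users enter treatment at a fixed rate. The thresholds α*, β* and γ* of the paper are replaced here by hypothetical probabilities, and the ring network is purely illustrative.

```python
import random

# States: S = susceptible, U = drug user not in treatment, T = in treatment.
# A susceptible node's chance of starting use grows with its neighbourhood
# prevalence (fraction of neighbours in state U). Rules are illustrative,
# not the paper's alpha*/beta*/gamma* formulation.

def step(states, adjacency, p_use=0.5, p_treat=0.1, rng=random.Random(1)):
    new = dict(states)
    for node, nbrs in adjacency.items():
        if states[node] == "S" and nbrs:
            prevalence = sum(states[n] == "U" for n in nbrs) / len(nbrs)
            if rng.random() < p_use * prevalence:
                new[node] = "U"
        elif states[node] == "U" and rng.random() < p_treat:
            new[node] = "T"
    return new

# Small ring network seeded with one untreated user.
n = 20
adjacency = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
states = {i: "S" for i in range(n)}
states[0] = "U"
for _ in range(50):
    states = step(states, adjacency)
counts = {s: list(states.values()).count(s) for s in "SUT"}
```

Because transitions only move individuals S → U → T, varying p_use and p_treat (the analogue of varying α*, β*, γ*) controls whether use dies out into treatment or keeps spreading along the chain.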
Theoretical modelling of the semiconductor-electrolyte interface
NASA Astrophysics Data System (ADS)
Schelling, Patrick Kenneth
We have developed tight-binding models of transition metal oxides. In contrast to many tight-binding models, these models include a description of electron-electron interactions. After parameterizing to bulk first-principles calculations, we demonstrated the transferability of the model by calculating the atomic and electronic structure of rutile surfaces, which compared well with experiment and first-principles calculations. We also studied the structure of twist grain boundaries in rutile. Molecular dynamics simulations using the model were also carried out to describe polaron localization. We have also demonstrated that tight-binding models can be constructed to describe metallic systems. The computational cost of tight-binding simulations was greatly reduced by incorporating O(N) electronic structure methods. We have also interpreted photoluminescence experiments on GaAs electrodes in contact with an electrolyte using drift-diffusion models. Electron transfer velocities were obtained by fitting to experimental results.
NASA Astrophysics Data System (ADS)
Koo, Jeong Seo; Choi, Se Young
2012-06-01
A theoretical method is proposed to predict and evaluate collision-induced derailments of rolling stock using a simplified wheelset model, and it is verified with dynamic simulations. Because the impact forces occurring during a collision are transmitted from the car body to the bogies and axles through the suspensions, rolling stock derails as a result of the combination of horizontal and vertical impact forces applied to the axle; a simplified wheelset model loaded at the axle can therefore be used to formulate derailment behavior theoretically. The derailment type depends on the combination of the horizontal and vertical forces, the flange angle, and the friction coefficient. Depending on the collision conditions, wheel-climb, wheel-lift, or roll-over derailment can occur between the wheel and the rail. In this theoretical derailment model of a simplified wheelset, the derailment types are classified as Slip-up, Slip/roll-over, Climb-up, Climb/roll-over and pure Roll-over according to the derailment mechanisms between the wheel and the rail, and the theoretical conditions needed to generate each derailment mechanism are proposed. The theoretical wheelset model is verified by dynamic simulation, and its applicability is demonstrated by comparing the simulation results of the theoretical wheelset model with those of an actual wheelset model. The theoretical derailment wheelset model is in good agreement with the virtual-testing model simulation for a collision-induced derailment of rolling stock.
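A standard starting point for wheel-climb analysis of this kind is the classical Nadal limit on the lateral-to-vertical force ratio. The sketch below uses illustrative values for the flange angle and friction coefficient; it is one textbook ingredient, not the paper's full derailment classification:

```python
import math

def nadal_limit(flange_angle_deg, mu):
    """Classical Nadal limit on the lateral-to-vertical force ratio L/V
    above which wheel-climb derailment becomes possible:
        (L/V)_lim = (tan(delta) - mu) / (1 + mu * tan(delta))
    where delta is the flange contact angle and mu the friction coefficient."""
    t = math.tan(math.radians(flange_angle_deg))
    return (t - mu) / (1.0 + mu * t)

def wheel_climb_possible(L, V, flange_angle_deg=70.0, mu=0.3):
    """True if the instantaneous L/V ratio exceeds the Nadal limit."""
    return L / V > nadal_limit(flange_angle_deg, mu)

# A 70-degree flange with mu = 0.3 gives an L/V limit of about 1.34
print(round(nadal_limit(70.0, 0.3), 2))        # 1.34
print(wheel_climb_possible(L=70e3, V=50e3))    # L/V = 1.4 -> True
```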
Multi Sensor Data Integration for an Accurate 3D Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e., a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, 3D models generated automatically from aerial imagery generally lack accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, they often suffer from undulated road surfaces, non-conforming building shapes, and loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, rooftops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or final 3D model, was generally noise-free and without unnecessary details.
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
ERIC Educational Resources Information Center
Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.
2011-01-01
Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
Theoretical Models for Application in School Health Education Research.
ERIC Educational Resources Information Center
Parcel, Guy S.
1984-01-01
Selected behavioral change theories, multiple theory models, and teaching models that may be useful to research studies in health education are examined in this article. A brief outline of applications of theory for the field of school health education is offered. (Author/DF)
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that enables accurate, calibration-free estimation of the glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
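At the core of any spectroscopic estimation scheme is the Beer-Lambert relation between absorbance and concentration. The sketch below recovers concentrations from a noiseless multi-wavelength absorbance vector by linear least squares; the absorptivity matrix and concentrations are made-up numbers, and this is far simpler than the adaptive scheme the paper proposes:

```python
import numpy as np

# Hypothetical molar absorptivities (columns: glucose, water, background)
# at four wavelengths; real spectra would come from laboratory measurement.
E = np.array([[0.90, 0.10, 0.05],
              [0.40, 0.60, 0.05],
              [0.20, 0.80, 0.10],
              [0.70, 0.30, 0.20]])

true_c = np.array([5.5, 40.0, 1.0])   # illustrative concentrations
A = E @ true_c                        # Beer-Lambert: absorbance = E @ c (unit path length)

# Recover the concentrations by linear least squares
c_hat, *_ = np.linalg.lstsq(E, A, rcond=None)
print(np.round(c_hat, 3))
```

With noise-free synthetic absorbances the least-squares fit recovers the input concentrations exactly; real data would of course require the more elaborate adaptive treatment described in the paper.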
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterning over space as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, they allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source-water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across
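The δ2H and δ18O values discussed here follow standard delta notation, which can be computed directly; the sample ratio below is invented for illustration, while the VSMOW 18O/16O reference ratio is a standard literature value:

```python
def delta_permil(R_sample, R_standard):
    """Stable isotope delta value in per mil (parts per thousand):
    delta = (R_sample / R_standard - 1) * 1000."""
    return (R_sample / R_standard - 1.0) * 1000.0

# VSMOW 18O/16O reference ratio (standard literature value)
R_VSMOW_18O = 2005.2e-6

# A hypothetical sample enriched in 18O relative to VSMOW
print(round(delta_permil(2045.3e-6, R_VSMOW_18O), 1), "per mil")  # 20.0 per mil
```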
Johnson, Timothy C.; Wellman, Dawn M.
2015-06-26
Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.
NASA Astrophysics Data System (ADS)
Toyokuni, Genti; Takenaka, Hiroshi
2012-06-01
We propose a method for modeling global seismic wave propagation through an attenuative Earth model, including the center. This method enables accurate and efficient computation since it is based on the 2.5-D approach, which solves the wave equations only on a 2-D cross section of the whole Earth yet correctly models 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since Earth material behaves both as an elastic solid and as a viscous fluid, we must solve the stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which has made viscoelasticity difficult to treat in time-domain computations such as the FDM. However, we now have a method using so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce the multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion near the Earth's center. In addition, we propose a technique to avoid the singularity of the wave equation in spherical coordinates at the Earth's center. We develop a scheme to calculate wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
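The memory-variable idea can be illustrated in its simplest 0-D form: a single Zener (standard linear solid) body whose stress relaxes under a step strain, with one memory variable integrated explicitly in time. The moduli and relaxation time below are illustrative, not taken from the paper:

```python
def relax_stress(eps, M_U=3.0e10, M_R=2.7e10, tau=1.0, dt=1e-3, nsteps=5000):
    """Stress relaxation of a standard linear solid under a step strain,
    integrated with one memory variable r (explicit Euler in time):
        sigma = M_U * eps - r,   dr/dt = ((M_U - M_R) * eps - r) / tau
    M_U / M_R: unrelaxed / relaxed moduli (Pa), tau: relaxation time (s).
    At t = 0 the stress is M_U * eps; it relaxes toward M_R * eps."""
    r = 0.0
    for _ in range(nsteps):
        r += dt * ((M_U - M_R) * eps - r) / tau
    return M_U * eps - r

eps = 1e-6
sigma_end = relax_stress(eps)  # after 5 s, i.e. 5 relaxation times
print(sigma_end)  # close to the relaxed value M_R * eps = 2.7e4 Pa
```

In a full FDM scheme, one such memory variable per Zener body is updated alongside the velocity-stress fields at every grid point, which is what removes the convolution integral from the time stepping.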
Theoretical Tools in Modeling Communication and Language Dynamics
NASA Astrophysics Data System (ADS)
Loreto, Vittorio
Statistical physics has proven to be a very fruitful framework to describe phenomena outside the realm of traditional physics. In social phenomena, the basic constituents are not particles but humans, and every individual interacts with a limited number of peers, usually negligible compared to the total number of people in the system. In spite of that, human societies are characterized by stunning global regularities that naturally call for a statistical physics approach to social behavior, i.e., the attempt to understand regularities at large scale as collective effects of the interaction among single individuals, considered as relatively simple entities. This is the paradigm of Complex Systems: an assembly of many interacting (and simple) units whose collective behavior is not trivially deducible from the knowledge of the rules governing their mutual interactions. In this chapter we review the main theoretical concepts and tools that physics can bring to socially motivated problems. Despite their apparent diversity, most research lines in social dynamics are actually closely connected from the point of view of both the methodologies employed and, more importantly, of the general phenomenological questions, e.g., what are the fundamental interaction mechanisms leading to the emergence of consensus on an issue, a shared culture, a common language or a collective motion?
Psychosocial stress and prostate cancer: a theoretical model.
Ellison, G L; Coker, A L; Hebert, J R; Sanderson, S M; Royal, C D; Weinrich, S P
2001-01-01
African-American men are more likely to develop and die from prostate cancer than are European-American men; yet, factors responsible for the racial disparity in incidence and mortality have not been elucidated. Socioeconomic disadvantage is more prevalent among African-American than among European-American men. Socioeconomic disadvantage can lead to psychosocial stress and may be linked to negative lifestyle behaviors. Regardless of socioeconomic position, African-American men routinely experience racism-induced stress. We propose a theoretical framework for an association between psychosocial stress and prostate cancer. Within the context of history and culture, we further propose that psychosocial stress may partially explain the variable incidence of prostate cancer between these diverse groups. Psychosocial stress may negatively impact the immune system leaving the individual susceptible to malignancies. Behavioral responses to psychosocial stress are amenable to change. If psychosocial stress is found to negatively impact prostate cancer risk, interventions may be designed to modify reactions to environmental demands. PMID:11572415
Experimental observations and theoretical models for beam-beam phenomena
Kheifets, S.
1981-03-01
The beam-beam interaction in storage rings exhibits all the characteristics of nonintegrable dynamical systems. Here one finds all kinds of resonances, closed orbits, stable and unstable fixed points, stochastic layers, chaotic behavior, diffusion, etc. The storage ring itself, though an expensive device, once constructed and put into operation presents a good opportunity to experimentally study the long-time behavior of both conservative (proton machines) and nonconservative (electron machines) dynamical systems: the number of bunch-bunch interactions routinely reaches values of 10^10-10^11 and could be increased by decreasing the beam current. At the same time, the beam-beam interaction puts practical limits on the yield of the storage ring. This phenomenon not only determines the design values of the main storage ring parameters (luminosity, space-charge parameters, beam current), but in fact also prevents many existing storage rings from achieving their design parameters. Hence, the problem has great practical importance along with its enormous theoretical interest. A brief overview of the problem is presented.
Ignition temperature of magnesium powder clouds: a theoretical model.
Chunmiao, Yuan; Chang, Li; Gang, Li; Peihong, Zhang
2012-11-15
Minimum ignition temperature of dust clouds (MIT-DC) is an important consideration when adopting explosion prevention measures. This paper presents a model for determining the minimum ignition temperature of a magnesium powder cloud under conditions simulating a Godbert-Greenwald (GG) furnace. The model is based on heterogeneous oxidation of metal particles and Newton's law of motion, while correlating particle size, dust concentration, and dust dispersion pressure with MIT-DC. Model predictions are in close agreement with experimental data, and the model is especially useful in predicting temperature and velocity changes as particles pass through the furnace tube.
Theoretical study of gas hydrate decomposition kinetics: model predictions.
Windmeier, Christoph; Oellrich, Lothar R
2013-11-27
In order to provide an estimate of intrinsic gas hydrate dissolution and dissociation kinetics, the Consecutive Desorption and Melting Model (CDM) was developed in a previous publication (Windmeier, C.; Oellrich, L. R. J. Phys. Chem. A 2013, 117, 10151-10161). In this work, an extensive summary of required model data is given. Obtained model predictions are discussed with respect to their temperature dependence as well as their significance for technically relevant areas of gas hydrate decomposition. As a result, an expression for determination of the intrinsic gas hydrate decomposition kinetics for various hydrate formers is given together with an estimate for the maximum possible rates of gas hydrate decomposition. PMID:24199870
Theoretical model of impact damage in structural ceramics
NASA Technical Reports Server (NTRS)
Liaw, B. M.; Kobayashi, A. S.; Emery, A. G.
1984-01-01
This paper presents a mechanistically consistent model of impact damage based on elastic failures due to tensile and shear overloading. An elastic axisymmetric finite element model is used to determine the dynamic stresses generated by a single particle impact. Local failures in a finite element are assumed to occur when the primary/secondary principal stresses or the maximum shear stress reach critical tensile or shear stresses, respectively. The succession of failed elements thus models macrocrack growth. Sliding motions of cracks, which closed during unloading, are resisted by friction and the unrecovered deformation represents the 'plastic deformation' reported in the literature. The predicted ring cracks on the contact surface, as well as the cone cracks, median cracks, radial cracks, lateral cracks, and damage-induced porous zones in the interior of hot-pressed silicon nitride plates, matched those observed experimentally. The finite element model also predicted the uplifting of the free surface surrounding the impact site.
Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3
NASA Astrophysics Data System (ADS)
Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.
2016-04-01
Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.
Theoretical models for duct acoustic propagation and radiation
NASA Technical Reports Server (NTRS)
Eversman, Walter
1991-01-01
The development of computational methods in acoustics has led to the introduction of analysis and design procedures which model the turbofan inlet as a coupled system, simultaneously modeling propagation and radiation in the presence of realistic internal and external flows. Such models are generally large, require substantial computer speed and capacity, and can be expected to be used in the final design stages, with the simpler models being used in the early design iterations. Emphasis is given to practical modeling methods that have been applied to the acoustical design problem in turbofan engines. The mathematical model is established and the simplest case of propagation in a duct with hard walls is solved to introduce concepts and terminologies. An extensive overview is given of methods for the calculation of attenuation in uniform ducts with uniform flow and with shear flow. Subsequent sections deal with numerical techniques which provide an integrated representation of duct propagation and near- and far-field radiation for realistic geometries and flight conditions.
Design theoretic analysis of three system modeling frameworks.
McDonald, Michael James
2007-05-01
This paper analyzes three simulation architectures from the context of modeling scalability to address System of System (SoS) and Complex System problems. The paper first provides an overview of the SoS problem domain and reviews past work in analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration as well as coupling and hierarchical decomposition as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of Unmanned Flying Vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.
Theoretical and computational models of biological ion channels
NASA Astrophysics Data System (ADS)
Roux, Benoit
2004-03-01
A theoretical framework for describing ion conduction through biological molecular pores is established and explored. The framework is based on a statistical mechanical formulation of the transmembrane potential (1) and of the equilibrium multi-ion potential of mean force through selective ion channels (2). On the basis of these developments, it is possible to define computational schemes to address questions about the non-equilibrium flow of ions through ion channels. In the case of narrow channels (gramicidin or KcsA), it is possible to characterize the ion conduction in terms of the potential of mean force of the ions along the channel axis (i.e., integrating out the off-axis motions). This has been used for gramicidin (3) and for KcsA (4,5). In the case of wide pores (i.e., OmpF porin), this is no longer a good idea, but it is possible to use a continuum solvent approximation. In this case, a grand canonical Monte Carlo/Brownian dynamics algorithm was constructed for simulating the non-equilibrium flow of ions through wide pores. The results were compared with those from the Poisson-Nernst-Planck mean-field electrodiffusion theory (6-8). References: 1. B. Roux, Biophys. J. 73:2980-2989 (1997); 2. B. Roux, Biophys. J. 77:139-153 (1999); 3. Allen, Andersen and Roux, PNAS (2004, in press); 4. Berneche and Roux, Nature 414:73-77 (2001); 5. Berneche and Roux, PNAS 100:8644-8648 (2003); 6. W. Im, S. Seefeld and B. Roux, Biophys. J. 79:788-801 (2000); 7. W. Im and B. Roux, J. Chem. Phys. 115:4850-4861 (2001); 8. W. Im and B. Roux, J. Mol. Biol. 322:851-869 (2002).
Ray-theoretical modeling of secondary microseism P-waves
NASA Astrophysics Data System (ADS)
Farra, V.; Stutzmann, E.; Gualtieri, L.; Schimmel, M.; Ardhuin, F.
2016-06-01
Secondary microseism sources are pressure fluctuations close to the ocean surface. They generate acoustic P-waves that propagate in water down to the ocean bottom, where they are partly reflected and partly transmitted into the crust to continue their propagation through the Earth. We present the theory for computing the displacement power spectral density of secondary microseism P-waves recorded by receivers in the far field. In the frequency domain, the P-wave displacement can be modeled as the product of (1) the pressure source, (2) the source site effect that accounts for the constructive interference of multiply reflected P-waves in the ocean, (3) the propagation from the ocean bottom to the stations, and (4) the receiver site effect. Secondary microseism P-waves have weak amplitudes, but they can be investigated by beamforming analysis. We validate our approach by analyzing the seismic signals generated by Typhoon Ioke (2006) and recorded by the Southern California Seismic Network. Backprojecting the beam onto the ocean surface makes it possible to follow the source motion. The observed beam centroid is in the vicinity of the pressure source derived from the ocean wave model WAVEWATCH III. The pressure source is then used for modeling the beam, and a good agreement is obtained between measured and modeled beam amplitude variation over time. This modeling approach can be used to invert P-wave noise data and retrieve the source intensity and lateral extent.
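The beamforming analysis mentioned above can be sketched with a basic frequency-domain delay-and-sum beamformer on a synthetic linear array; the array geometry, wave speed and arrival azimuth are invented for the test and are much simpler than the real network and wavefield:

```python
import numpy as np

def delay_and_sum(signals, positions, fs, c, azimuths):
    """Frequency-domain delay-and-sum beamformer for a linear array.
    signals: (n_sensors, n_samples), positions: sensor x-coordinates (m),
    c: wave speed (m/s). Returns beam power for each steering azimuth (rad)."""
    n, m = signals.shape
    S = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(m, d=1.0 / fs)
    powers = []
    for az in azimuths:
        delays = positions * np.cos(az) / c            # plane-wave delays
        phase = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        beam = np.fft.irfft((S * phase).mean(axis=0), n=m)
        powers.append(float(np.sum(beam ** 2)))
    return np.array(powers)

# Synthetic test: a 10 Hz plane wave arriving from azimuth 60 degrees
fs, c = 100.0, 1500.0
pos = np.arange(8) * 50.0                              # 8 sensors, 50 m apart
t = np.arange(200) / fs                                # exactly 20 cycles
true_az = np.deg2rad(60.0)
sig = np.stack([np.sin(2 * np.pi * 10 * (t - x * np.cos(true_az) / c))
                for x in pos])
scan = np.deg2rad(np.arange(0, 181, 5))
best = scan[np.argmax(delay_and_sum(sig, pos, fs, c, scan))]
print(round(np.rad2deg(best), 1))  # 60.0
```

The 50 m spacing is well below half the 150 m wavelength, so no spatial aliasing occurs and the beam power peaks at the true azimuth.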
Information-Theoretic Modeling of Trichromacy Coding of Light Spectrum
NASA Astrophysics Data System (ADS)
Benoit, Landry; Belin, Étienne; Rousseau, David; Chapeau-Blondeau, François
2014-07-01
Trichromacy is the representation of a light spectrum by three scalar coordinates. Such representation is universally implemented by the human visual system and by RGB (Red Green Blue) cameras. We propose here an informational model for trichromacy. Based on a statistical analysis of the dynamics of individual photons, the model demonstrates a possibility for describing trichromacy as an information channel, for which the input-output mutual information can be computed to serve as a measure of performance. The capabilities and significance of the informational model are illustrated and motivated in various situations. The model especially enables an assessment of the influence of the spectral sensitivities of the three types of photodetectors realizing the trichromatic representation. It provides a criterion to optimize possibly adjustable parameters of the spectral sensitivities such as their center wavelength, spectral width or magnitude. The model shows, for instance, the usefulness of some overlap with smooth graded spectral sensitivities, as observed for instance in the human retina. The approach also, starting from hyperspectral images with high spectral resolution measured in the laboratory, can be used to devise low-cost trichromatic imaging systems optimized for observation of specific spectral signatures. This is illustrated with an example from plant science, and demonstrates a potential of application especially to life sciences. The approach particularizes connections between physics, biophysics and information theory.
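The input-output mutual information that serves as the performance measure can be computed from a discrete joint distribution; the two toy channels below are deliberately trivial (a perfectly discriminating one and an uninformative one) rather than a realistic trichromatic sensor model:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy (rows: X, cols: Y)."""
    p_xy = np.asarray(p_xy, dtype=float)
    px = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    py = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = p_xy > 0                          # skip zero-probability cells
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])))

# Toy channel: 2 input spectra, 2 coarse photoreceptor responses.
# A perfectly discriminating channel carries 1 bit:
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
# An uninformative channel carries 0 bits:
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

In the informational model of the abstract, the joint table would instead come from the photon statistics of the light spectrum and the three spectral sensitivities, and the same formula would score candidate sensitivity parameters.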
Ray-theoretical modeling of secondary microseism P waves
NASA Astrophysics Data System (ADS)
Farra, V.; Stutzmann, E.; Gualtieri, L.; Schimmel, M.; Ardhuin, F.
2016-09-01
Secondary microseism sources are pressure fluctuations close to the ocean surface. They generate acoustic P waves that propagate in water down to the ocean bottom, where they are partly reflected and partly transmitted into the crust to continue their propagation through the Earth. We present the theory for computing the displacement power spectral density of secondary microseism P waves recorded by receivers in the far field. In the frequency domain, the P-wave displacement can be modeled as the product of (1) the pressure source, (2) the source site effect that accounts for the constructive interference of multiply reflected P waves in the ocean, (3) the propagation from the ocean bottom to the stations and (4) the receiver site effect. Secondary microseism P waves have weak amplitudes, but they can be investigated by beamforming analysis. We validate our approach by analysing the seismic signals generated by typhoon Ioke (2006) and recorded by the Southern California Seismic Network. Backprojecting the beam onto the ocean surface makes it possible to follow the source motion. The observed beam centroid is in the vicinity of the pressure source derived from the ocean wave model WAVEWATCH III. The pressure source is then used for modeling the beam, and a good agreement is obtained between measured and modeled beam amplitude variation over time. This modeling approach can be used to invert P-wave noise data and retrieve the source intensity and lateral extent.
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary, specifically at the apex and base levels. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM and Deep Learning to achieve significant segmentation accuracy.
Accurate calculation of binding energies for molecular clusters - Assessment of different models
NASA Astrophysics Data System (ADS)
Friedrich, Joachim; Fiedler, Benjamin
2016-06-01
In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy of the binding energies by statistical measures. The local errors of the incremental scheme are <1 kJ/mol. Since they are smaller than the basis set errors, we obtain higher total accuracy due to the applicability of larger basis sets. The final CCSD(T)/CBS benchmark values are ΔE = -278.01 kJ/mol for (H2O)10, ΔE = -221.64 kJ/mol for (HF)10, ΔE = -45.63 kJ/mol for (CH4)10, ΔE = -19.52 kJ/mol for (H2)20 and ΔE = -7.38 kJ/mol for (H2)10. Furthermore, we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we obtain a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.
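The abstract relies on complete-basis-set (CBS) limits. One standard ingredient of such focal-point schemes, shown here only as a generic sketch and not as the authors' exact protocol, is the two-point inverse-cubic extrapolation of correlation energies:

```python
def cbs_two_point(e_corr_x, e_corr_y, x, y):
    """Helgaker-style two-point extrapolation of correlation energies,
    assuming E(X) = E_CBS + A * X**-3 for cardinal numbers x < y
    (e.g. x=3 for a TZ basis, y=4 for a QZ basis)."""
    return (y**3 * e_corr_y - x**3 * e_corr_x) / (y**3 - x**3)

# Synthetic check: energies generated from the assumed X**-3 form
# (hypothetical limit and prefactor, in hartree) extrapolate back exactly.
e_cbs, a = -1.0, 0.5
e_tz = e_cbs + a / 3**3
e_qz = e_cbs + a / 4**3
estimate = cbs_two_point(e_tz, e_qz, 3, 4)
```

Because the X**-3 decay only describes the correlation energy, the Hartree-Fock part is normally extrapolated separately or taken from the largest basis.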
D’Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo
2014-12-28
A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂g/Rc, where R̂g is the zero-density polymer radius of gyration and Rc is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
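The blob-colloid potentials above are determined by iterative Boltzmann inversion (IBI) against zero-density pair correlation functions. The core update step can be sketched as follows; the reduced units, damping factor and arrays are illustrative, not the paper's actual settings:

```python
import numpy as np

kT = 1.0  # thermal energy in reduced units (illustrative)

def ibi_update(u_k, g_k, g_target, alpha=1.0, eps=1e-12):
    """One iterative-Boltzmann-inversion step for a tabulated pair
    potential: U_{k+1}(r) = U_k(r) + alpha * kT * ln(g_k(r) / g_target(r)).
    alpha < 1 damps the update for stability; eps guards against log(0)."""
    return u_k + alpha * kT * np.log((g_k + eps) / (g_target + eps))
```

In practice one alternates a simulation with the current potential (to obtain g_k on a radial grid) and this update, until g_k matches the full-monomer target g_target; note the sign convention, where an overshooting g_k raises the potential and makes it more repulsive.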
Theoretical modeling of electron mobility in superfluid 4He
NASA Astrophysics Data System (ADS)
Aitken, Frédéric; Bonifaci, Nelly; von Haeften, Klaus; Eloranta, Jussi
2016-07-01
The Orsay-Trento bosonic density functional theory model is extended to include dissipation due to the viscous response of superfluid 4He present at finite temperatures. The viscous functional is derived from the Navier-Stokes equation by using the Madelung transformation and includes the contribution of interfacial viscous response present at the gas-liquid boundaries. This contribution was obtained by calibrating the model against the experimentally determined electron mobilities from 1.2 K to 2.1 K along the saturated vapor pressure line, where the viscous response is dominated by thermal rotons. The temperature dependence of ion mobility was calculated for several different solvation cavity sizes and the data are rationalized in the context of roton scattering and Stokes limited mobility models. Results are compared to the experimentally observed "exotic ion" data, which provides estimates for the corresponding bubble sizes in the liquid. Possible sources of such ions are briefly discussed.
A control theoretic model of driver steering behavior
NASA Technical Reports Server (NTRS)
Donges, E.
1977-01-01
A quantitative description of driver steering behavior in the form of a mathematical model is presented. The steering task is divided into two levels: (1) the guidance level, involving the perception of the instantaneous and future course of the forcing function provided by the forward view of the road, and the response to it in an anticipatory open-loop control mode; (2) the stabilization level, whereby any occurring deviations from the forcing function are compensated for in a closed-loop control mode. This concept of the duality of the driver's steering activity led to a newly developed two-level model of driver steering behavior. Its parameters are identified on the basis of data measured in driving simulator experiments. The parameter estimates of both levels of the model show significant dependence on the experimental situation, which can be characterized by variables such as vehicle speed and desired path curvature.
Flavor symmetry based MSSM: Theoretical models and phenomenological analysis
NASA Astrophysics Data System (ADS)
Babu, K. S.; Gogoladze, Ilia; Raza, Shabbar; Shafi, Qaisar
2014-09-01
We present a class of supersymmetric models in which symmetry considerations alone dictate the form of the soft SUSY breaking Lagrangian. We develop a class of minimal models, denoted as sMSSM—for flavor symmetry-based minimal supersymmetric standard model—that respect a grand unified symmetry such as SO(10) and a non-Abelian flavor symmetry H which suppresses SUSY-induced flavor violation. Explicit examples are constructed with the flavor symmetry being gauged SU(2)H and SO(3)H with the three families transforming as 2+1 and 3 representations, respectively. A simple solution is found in the case of SU(2)H for suppressing the flavor violating D-terms based on an exchange symmetry. Explicit models based on SO(3)H without the D-term problem are developed. In addition, models based on discrete non-Abelian flavor groups are presented which are automatically free from D-term issues. The permutation group S3 with a 2+1 family assignment, as well as the tetrahedral group A4 with a 3 assignment are studied. In all cases, a simple solution to the SUSY CP problem is found, based on spontaneous CP violation leading to a complex quark mixing matrix. We develop the phenomenology of the resulting sMSSM, which is controlled by seven soft SUSY breaking parameters for both the 2+1 assignment and the 3 assignment of fermion families. These models are special cases of the phenomenological MSSM (pMSSM), but with symmetry restrictions. We discuss the parameter space of sMSSM compatible with LHC searches, B-physics constraints and dark matter relic abundance. Fine-tuning in these models is relatively mild, since all SUSY particles can have masses below about 3 TeV.
Faster and more accurate graphical model identification of tandem mass spectra using trellises
Wang, Shengjie; Halloran, John T.; Bilmes, Jeff A.; Noble, William S.
2016-01-01
Tandem mass spectrometry (MS/MS) is the dominant high throughput technology for identifying and quantifying proteins in complex biological samples. Analysis of the tens of thousands of fragmentation spectra produced by an MS/MS experiment begins by assigning to each observed spectrum the peptide that is hypothesized to be responsible for generating the spectrum. This assignment is typically done by searching each spectrum against a database of peptides. To our knowledge, all existing MS/MS search engines compute scores individually between a given observed spectrum and each possible candidate peptide from the database. In this work, we use a trellis, a data structure capable of jointly representing a large set of candidate peptides, to avoid redundantly recomputing common sub-computations among different candidates. We show how trellises may be used to significantly speed up existing scoring algorithms, and we theoretically quantify the expected speedup afforded by trellises. Furthermore, we demonstrate that compact trellis representations of whole sets of peptides enable efficient discriminative learning of a dynamic Bayesian network for spectrum identification, leading to greatly improved spectrum identification accuracy. PMID:27307634
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions, respectively. The simplified model design could be produced in under 1 h compared to over 3 h for the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may provide a simulation approach with improved clinical utility; however, further validity testing across a range of therapeutic footwear types is required.
A Simple, Accurate Model for Alkyl Adsorption on Late Transition Metals
Montemore, Matthew M.; Medlin, James W.
2013-01-18
A simple model that predicts the adsorption energy of an arbitrary alkyl in the high-symmetry sites of late transition metal fcc(111) and related surfaces is presented. The model makes predictions based on a few simple attributes of the adsorbate and surface, including the d-shell filling and the matrix coupling element, as well as the adsorption energy of methyl in the top sites. We use the model to screen surfaces for alkyl chain-growth properties and to explain trends in alkyl adsorption strength, site preference, and vibrational softening.
NASA Astrophysics Data System (ADS)
Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu
2011-05-01
Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer aided manufacturing technique. After dewaxing and sintering heat treatment processes, ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote the regeneration of new bone.
Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of SW model, the concept of critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
Theoretical models for ultrashort electromagnetic pulse propagation in nonlinear metamaterials
Wen, Shuangchun; Xiang, Yuanjiang; Dai, Xiaoyu; Tang, Zhixiang; Su, Wenhua; Fan, Dianyuan
2007-03-15
A metamaterial (MM) differs from an ordinary optical material mainly in that it has a dispersive magnetic permeability and offers greatly enhanced design freedom to alter the linear and nonlinear properties. This makes it possible for us to control the propagation of ultrashort electromagnetic pulses at will. Here we report on generic features of ultrashort electromagnetic pulse propagation and demonstrate the controllability of both the linear and nonlinear parameters of models for pulse propagation in MMs. First, we derive a generalized system of coupled three-dimensional nonlinear Schrödinger equations (NLSEs) suitable for few-cycle pulse propagation in a MM with both nonlinear electric polarization and nonlinear magnetization. The coupled equations recover previous models for pulse propagation in both ordinary material and a MM under the same conditions. Second, by using the coupled NLSEs in the Drude dispersive model as an example, we identify the respective roles of the dispersive electric permittivity and magnetic permeability in ultrashort pulse propagation and disclose some additional features of pulse propagation in MMs. It is shown that, for linear propagation, the sign and magnitude of space-time focusing can be controlled through adjusting the linear dispersive permittivity and permeability. For nonlinear propagation, the linear dispersive permittivity and permeability are incorporated into the nonlinear magnetization and nonlinear polarization, respectively, resulting in controllable magnetic and electric self-steepening effects and higher-order dispersively nonlinear terms in the propagation models.
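Propagation models of the NLSE family are typically integrated numerically with the split-step Fourier method. As a hedged illustration (the paper's coupled 3D equations are far richer), here is the method applied to the simplest relative, the scalar dimensionless 1D focusing NLSE i u_z + (1/2) u_tt + |u|^2 u = 0, whose fundamental soliton sech(t) propagates without changing shape:

```python
import numpy as np

def ssfm_nlse(u0, t, dz, steps):
    """Symmetric split-step Fourier integration of the focusing NLSE
    i u_z + (1/2) u_tt + |u|^2 u = 0 (dimensionless units)."""
    u = u0.astype(complex)
    w = 2.0 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
    half_lin = np.exp(-0.5j * w**2 * (dz / 2.0))   # linear half-step phase
    for _ in range(steps):
        u = np.fft.ifft(half_lin * np.fft.fft(u))  # half linear step
        u = u * np.exp(1j * np.abs(u)**2 * dz)     # full nonlinear step
        u = np.fft.ifft(half_lin * np.fft.fft(u))  # half linear step
    return u

t = np.linspace(-20.0, 20.0, 512, endpoint=False)
u0 = 1.0 / np.cosh(t)                      # fundamental soliton profile
u1 = ssfm_nlse(u0, t, dz=0.01, steps=200)  # propagate to z = 2
```

Both split sub-steps are pure phase rotations, so the scheme conserves pulse energy exactly; a shape-invariant |u1| ≈ |u0| is the standard correctness check for the integrator.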
Photoabsorption spectrum of helium trimer cation—Theoretical modeling
Kalus, René; Karlický, František; Lepetit, Bruno; Paidarová, Ivana; Gadea, Florent Xavier
2013-11-28
The photoabsorption spectrum of He₃⁺ is calculated for two semiempirical models of intracluster interactions and compared with available experimental data reported in the middle UV range [H. Haberland and B. von Issendorff, J. Chem. Phys. 102, 8773 (1995)]. Nuclear delocalization effects are investigated via several approaches comprising quantum samplings using either exact or approximate (harmonic) nuclear wavefunctions, as well as classical samplings based on the Monte Carlo methodology. Good agreement with the experiment is achieved for the model by Knowles et al. [Mol. Phys. 85, 243 (1995); Mol. Phys. 87, 827 (1996)], whereas the model by Calvo et al. [J. Chem. Phys. 135, 124308 (2011)] exhibits non-negligible deviations from the experiment. Predictions of the far UV absorption spectrum of He₃⁺, for which no experimental data are presently available, are reported for both models and compared to each other as well as to the photoabsorption spectrum of He₂⁺. A simple semiempirical point-charge approximation for calculating transition probabilities is shown to perform well for He₃⁺.
SBS mitigation with 'two-tone' amplification: a theoretical model
NASA Astrophysics Data System (ADS)
Bronder, T. J.; Shay, T. M.; Dajani, I.; Gavrielides, A.; Robin, C. A.; Lu, C. A.
2008-02-01
A new technique for mitigating stimulated Brillouin scattering (SBS) effects in narrow-linewidth Yb-doped fiber amplifiers is demonstrated with a model that reduces to solving an 8×8 system of coupled nonlinear equations with gain, SBS, and four-wave mixing (FWM) incorporated into the model. This technique uses two seed signals, or 'two tones', with each tone reaching its SBS threshold almost independently and thus increasing the overall SBS threshold of the fiber amplifier. The wavelength separation of these signals is also selected to avoid FWM, which in this case possesses the next-lowest nonlinear-effect threshold. This model predicts an output power increase of 86% (at the SBS threshold with no signs of FWM) for a 'two-tone' amplifier with seed signals at 1064 nm and 1068 nm, compared to a conventional fiber amplifier with a single 1064 nm seed. The model is also used to simulate an SBS-suppressing fiber amplifier to test the regime where FWM is the limiting factor. In this case, an optimum wavelength separation of 3 nm to 10 nm prevents FWM from reaching threshold. The optimum ratio of the input powers of the two seed signals in 'two-tone' amplification is also tested. Future experimental verification of this 'two-tone' technique is discussed.
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
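For the homogeneous 1D case, the Voronoi cells the model describes are easy to generate directly: each cell extends to the midpoints between a point and its two neighbours. A short sketch on a periodic unit-length domain (point count and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def voronoi_cell_sizes_1d(points, length=1.0):
    """Sizes of the 1D Voronoi cells of a set of points on a ring of the
    given length: each cell reaches the midpoints with its two neighbours."""
    x = np.sort(np.asarray(points) % length)
    # gaps[i] is the spacing between point i and its right-hand neighbour
    gaps = np.diff(np.concatenate([x, [x[0] + length]]))
    # cell i = half the gap on its left + half the gap on its right
    return 0.5 * (gaps + np.roll(gaps, 1))

sizes = voronoi_cell_sizes_1d(rng.random(1000))  # homogeneous point set
```

A histogram of sizes normalized by the mean cell size approximates the 1D cell-size distribution that the fragmentation model reproduces analytically; the cells tile the ring, so their sizes always sum to the domain length.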
[Theoretical model for rocky desertification control in karst area].
Liang, Liang; Liu, Zhi-Xiao; Zhang, Dai-Gui; Deng, Kai-Dong; Zhang, You-Xiang
2007-03-01
Based on the basic principles of restoration ecology, a trigger-action model for rocky desertification control was proposed: the ability of an ecosystem to develop on its own is called the dominant force, and the interfering factor causing the climax of ecological succession to deviate from its preordained status is called the trigger factor. The ultimate status of ecological succession is determined by the interaction of the dominant force and trigger factors. Rocky desertification is the result of severe malignant triggers, and its control is a process of benign triggering, in which the ecological restoration method of artificial design activates the natural self-design ability of the ecosystem. The Karst rocky desertification ecosystem in Fenghuang County, subject to restoration measures, was taken as a case to test the model. The results showed that restoration measures based on the trigger-action model markedly improved the physical and chemical properties of the soil and increased plant diversity, indicating a benign trigger between the restoration measures and the Karst area. The practical results thus provide preliminary support for the rationality of the trigger-action model. PMID:17552199
Testing Theoretical Models of Magnetic Damping Using an Air Track
ERIC Educational Resources Information Center
Vidaurre, Ana; Riera, Jaime; Monsoriu, Juan A.; Gimenez, Marcos H.
2008-01-01
Magnetic braking is a long-established application of Lenz's law. A rigorous analysis of the laws governing this problem involves solving Maxwell's equations in a time-dependent situation. Approximate models have been developed to describe different experimental results related to this phenomenon. In this paper we present a new method for the…
Control theoretic model of automobile demand and gasoline consumption
Panerali, R.B.
1982-01-01
The purpose of this research is to examine the controllability of gasoline consumption and automobile demand using gasoline price as a policy instrument. The author examines the problem of replacing the standby motor-fuel rationing plan with use of the federal excise tax on gasoline. It is demonstrated that the standby targets are attainable with the tax. The problem of multiple control of automobile demand and gasoline consumption is also addressed. When the federal gasoline excise tax is used to control gasoline consumption, the policy maker can also use the tax to direct automobile demand. There exists a trade-off between the various automobile demand targets and the target implied for gasoline consumption. We seek to measure this trade-off and use the results for planning. This research employs a time series of cross-section data with a disaggregated model of automobile demand and an aggregate model of gasoline consumption. Automobile demand is divided into five mutually exclusive classes of cars. Gasoline demand is modeled as the sum of regular, premium, and unleaded gasoline. The pooled data base comprises a quarterly time series running from 1963 quarter one through 1979 quarter four for each of the 48 contiguous states. The demand equations are modeled using dynamic theories of demand. Estimates of the respective equations are made with error-components and covariance techniques. Optimal control is applied to examine the gasoline-control problem.
A Theoretical Model of Sexual Assault: An Empirical Test.
ERIC Educational Resources Information Center
White, Jacquelyn W.; Humphrey, John A.
Koss and Dinero's (1987) comprehensive developmental model of sexual aggression asserts that sexual assault is in part a result of early sexual experiences and family violence; that sexually aggressive behaviors may be predicted by such "releaser" variables as current sexual behavior, alcohol use, and peer group support; and that use of aggression…
Interpreting Unfamiliar Graphs: A Generative, Activity Theoretic Model
ERIC Educational Resources Information Center
Roth, Wolff-Michael; Lee, Yew Jin
2004-01-01
Research on graphing presents its results as if knowing and understanding were something stored in peoples' minds independent of the situation that they find themselves in. Thus, there are no models that situate interview responses to graphing tasks. How, then, we question, are the interview texts produced? How do respondents begin and end…
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
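The model above is built on the full Molière theory. For orientation only, a common Gaussian first approximation, which is not the paper's method and omits the nuclear-interaction tail it parametrizes, is the Highland formula for the multiple-scattering angle of protons in water; the radiation length below is an assumed textbook value:

```python
import math

M_P = 938.272      # proton rest energy [MeV]
X0_WATER = 36.08   # radiation length of water [cm]; assumed value

def highland_theta0(t_mev, thickness_cm):
    """Highland estimate of the Gaussian multiple-scattering angle
    (radians) for a proton of kinetic energy t_mev [MeV] traversing
    thickness_cm of water."""
    e_tot = t_mev + M_P
    pc = math.sqrt(e_tot**2 - M_P**2)   # momentum times c [MeV]
    pv = pc * pc / e_tot                # p*v = (pc)^2 / E [MeV]
    x_rel = thickness_cm / X0_WATER     # thickness in radiation lengths
    return (14.1 / pv) * math.sqrt(x_rel) * (1.0 + math.log10(x_rel) / 9.0)
```

This Gaussian core is what pencil beam algorithms traditionally use; the full Molière treatment adopted in the paper adds the non-Gaussian single-scattering tails that such an estimate misses.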
Palmer, David S; Sergiievskyi, Volodymyr P; Jensen, Frank; Fedorov, Maxim V
2010-07-28
We report on the results of testing the reference interaction site model (RISM) for the estimation of the hydration free energy of druglike molecules. The optimum model was selected after testing of different RISM free energy expressions combined with different quantum mechanics and empirical force-field methods of structure optimization and atomic partial charge calculation. The final model gave a systematic error with a standard deviation of 2.6 kcal/mol for a test set of 31 molecules selected from the SAMPL1 blind challenge set [J. P. Guthrie, J. Phys. Chem. B 113, 4501 (2009)]. After parametrization of this model to include terms for the excluded volume and the number of atoms of different types in the molecule, the root mean squared error for a test set of 19 molecules was less than 1.2 kcal/mol.
Berger, Perrine; Alouini, Mehdi; Bourderionnet, Jérôme; Bretenaker, Fabien; Dolfi, Daniel
2010-01-18
We developed an improved model to predict the RF behavior and the slow-light properties of semiconductor optical amplifiers (SOAs), valid under any experimental conditions. It takes into account the dynamic saturation of the SOA, which can be fully characterized by a simple measurement, and relies only on material fitting parameters that are independent of the optical intensity and the injected current. The present model is validated by showing good agreement with experiments for small and large modulation indices.
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists, caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly depending on multiple compilers and on special instructions specific to the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of its two parameters. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
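A Weibull-type conversion curve of the kind the abstract describes can be written Y(t) = 1 - exp(-(t/λ)^n), with λ the characteristic time (conversion reaches 1 - 1/e at t = λ). A minimal sketch fits λ and n to synthetic saccharification data by log-linearisation; the time grid and parameter values are illustrative, not the paper's data:

```python
import numpy as np

def weibull_conversion(t, lam, n):
    """Fractional conversion Y(t) = 1 - exp(-(t/lam)**n); at t = lam the
    conversion reaches 1 - 1/e, so lam is the characteristic time."""
    return 1.0 - np.exp(-((t / lam) ** n))

def fit_weibull(t, y):
    """Recover (lam, n) by linearising ln(-ln(1-y)) = n*ln t - n*ln lam
    and fitting a straight line (valid for 0 < y < 1)."""
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - y)), 1)
    return np.exp(-intercept / slope), slope   # (lam, n)

# Synthetic, noiseless saccharification curve with illustrative parameters
t_h = np.linspace(1.0, 48.0, 20)              # sampling times [h]
y = weibull_conversion(t_h, lam=12.0, n=0.8)
lam_fit, n_fit = fit_weibull(t_h, y)
```

On noiseless data the linearised fit recovers λ and n exactly; with real, noisy conversion data a nonlinear least-squares fit of the same form would be the more robust choice.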
Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen
2016-01-01
Exterior orientation parameter (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. First, we estimate the angular rotations from the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection simplify to a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps than the existing space resection model, not only for simulated data but also for real data from Chang’E-1. PMID:27077855
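The two-phase idea can be sketched numerically. The abstract does not print the equations, so the snippet below is only a generic illustration, not the authors' exact formulation: once the angular rotations (a rotation matrix R) are fixed in phase one, each collinearity condition X_i = C + s_i · (R d_i) constrains the sensor position C, and projecting out the unknown per-point scale s_i leaves a linear least-squares problem. All names (solve_position, rays, gcps) are hypothetical.

```python
import numpy as np

def solve_position(R, rays, gcps):
    """Phase-two sketch: with the angular rotations (rotation matrix R)
    already estimated, each collinearity condition X_i = C + s_i * (R d_i)
    constrains the sensor position C. Projecting onto the plane orthogonal
    to each rotated ray eliminates the unknown scale s_i and leaves a
    linear least-squares problem in C."""
    A_rows, b_rows = [], []
    for d, X in zip(rays, gcps):
        v = R @ d
        v = v / np.linalg.norm(v)
        P = np.eye(3) - np.outer(v, v)  # projector orthogonal to the ray
        A_rows.append(P)
        b_rows.append(P @ X)
    A, b = np.vstack(A_rows), np.hstack(b_rows)
    C, *_ = np.linalg.lstsq(A, b, rcond=None)
    return C

# synthetic check: ground points placed exactly along rays from a known C
rng = np.random.default_rng(7)
C_true = np.array([1.0, -2.0, 5.0])
R = np.eye(3)  # rotations assumed already solved in phase one
rays = rng.standard_normal((5, 3))
gcps = np.array([C_true + (i + 2.0) * r / np.linalg.norm(r)
                 for i, r in enumerate(rays)])
C_est = solve_position(R, rays, gcps)
```

Because the rotations enter only through R, errors in the altitude data never contaminate the angular estimates, which is the point the abstract makes.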
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. To improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186
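The abstract names the parameters λ (characteristic time) and n but does not print the equation; the standard two-parameter Weibull conversion curve is y(t) = y_max·(1 − exp(−(t/λ)^n)), where the yield reaches ≈63.2% of y_max at t = λ. A minimal fitting sketch under that assumption follows; the time-course data are purely hypothetical, for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_yield(t, y_max, lam, n):
    """Weibull-type saccharification curve: cumulative yield at time t.
    lam is the characteristic time (yield ~63.2% of y_max at t = lam)."""
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# hypothetical hydrolysis time course (hours, % glucose yield)
t = np.array([2, 6, 12, 24, 48, 72, 96], dtype=float)
y = np.array([8, 21, 35, 55, 72, 80, 84], dtype=float)

popt, _ = curve_fit(weibull_yield, t, y, p0=[90.0, 30.0, 1.0])
y_max, lam, n = popt
```

Reading λ straight off the fit is what makes it a convenient single-number summary of overall saccharification performance: smaller λ means the system reaches most of its final yield sooner.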
Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna
2015-01-01
Background Computational models of Achilles tendons can help in understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of the Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon’s main constituents, namely water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment, and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and softer tendon behaviour at reduced strain-rates, compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour, where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon
Theoretical model for morphogenesis and cell sorting in Dictyostelium discoideum
NASA Astrophysics Data System (ADS)
Umeda, T.; Inouye, K.
1999-02-01
The morphogenetic movement and cell sorting in cell aggregates from the mound stage to the migrating slug stage of the cellular slime mold Dictyostelium discoideum were studied using a mathematical model. The model postulates that the motive force generated by the cells is in equilibrium with the internal pressure and mechanical resistance. The moving boundary problem derived from the force balance equation and the continuity equation has stationary solutions in which the aggregate takes the shape of a spheroid (or an ellipse in two-dimensional space) with the pacemaker at one of its foci, moving at a constant speed. Numerical calculations in two-dimensional space showed that an irregularly shaped aggregate changes its shape to become an ellipse as it moves. Cell aggregates consisting of two cell types differing in motive force exhibit cell sorting and become elongated, suggesting the importance of prestalk/prespore differentiation in the morphogenesis of Dictyostelium.
Modeling energetic and theoretical costs of thermoregulatory strategy.
Alford, John G; Lutterschmidt, William I
2012-01-01
Poikilothermic ectotherms have evolved behaviours that help them maintain or regulate their body temperature (T_b) around a preferred or 'set point' temperature (T_set). Thermoregulatory behaviours may range from body positioning to optimize heat gain to shuttling among preferred microhabitats to find appropriate environmental temperatures. We have modelled movement patterns between an active and a non-active shuttling behaviour within a habitat (as a biased random walk) to investigate the potential cost of two thermoregulatory strategies. Generally, small-bodied ectotherms actively thermoregulate while large-bodied ectotherms may passively thermoconform to their environment. We were interested in the potential energetic cost for a large-bodied ectotherm if it were forced to actively thermoregulate rather than thermoconform. We therefore modelled movements and the resulting comparative energetic costs of precisely maintaining a T_set for a small-bodied versus a large-bodied ectotherm to evaluate each thermoregulatory strategy.
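The shuttling model is described only qualitatively above, so the toy simulation below is an assumption-laden sketch (the thermal-inertia and locomotor-cost scalings are generic allometric guesses, not the authors' model). It illustrates the core trade-off: a large body shuttles less often, but each move costs more, so forcing it to hold T_set precisely is energetically expensive.

```python
import random

def simulate_cost(mass_kg, t_set=30.0, tol=2.0, steps=1000, seed=1):
    """Toy biased random walk between a warm and a cool microhabitat.
    Body temperature drifts toward the current patch's temperature;
    when |T_b - T_set| exceeds tol, the animal shuttles to the other
    patch, paying a per-move cost that scales with body mass."""
    random.seed(seed)
    t_env = {"warm": 36.0, "cool": 24.0}
    patch, t_b, moves = "warm", t_set, 0
    k = 0.1 / mass_kg ** 0.5          # thermal inertia: big bodies change T_b slowly
    for _ in range(steps):
        t_b += k * (t_env[patch] - t_b) + random.gauss(0, 0.05)
        if abs(t_b - t_set) > tol:
            patch = "cool" if patch == "warm" else "warm"
            moves += 1
    move_cost = mass_kg ** 0.75       # assumed allometric locomotor cost per move
    return moves, moves * move_cost

small = simulate_cost(0.05)           # 50 g ectotherm
large = simulate_cost(5.0)            # 5 kg ectotherm
```

Under these assumptions the small animal shuttles roughly an order of magnitude more often, yet the large animal's total energetic bill is higher, consistent with the intuition that large-bodied ectotherms gain by thermoconforming.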
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show what elements are important to obtain a well calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve a high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool specific signatures are taken into account.
Hewitt, Nicola J; Edwards, Robert J; Fritsche, Ellen; Goebel, Carsten; Aeby, Pierre; Scheel, Julia; Reisinger, Kerstin; Ouédraogo, Gladys; Duche, Daniel; Eilstein, Joan; Latil, Alain; Kenny, Julia; Moore, Claire; Kuehnl, Jochen; Barroso, Joao; Fautz, Rolf; Pfuhler, Stefan
2013-06-01
Several human skin models employing primary cells and immortalized cell lines used as monocultures or combined to produce reconstituted 3D skin constructs have been developed. Furthermore, these models have been included in European genotoxicity and sensitization/irritation assay validation projects. In order to help interpret data, Cosmetics Europe (formerly COLIPA) facilitated research projects that measured a variety of defined phase I and II enzyme activities and created a complete proteomic profile of xenobiotic metabolizing enzymes (XMEs) in native human skin and compared them with data obtained from a number of in vitro models of human skin. Here, we have summarized our findings on the current knowledge of the metabolic capacity of native human skin and in vitro models and made an overall assessment of the metabolic capacity from gene expression, proteomic expression, and substrate metabolism data. The known low expression and function of phase I enzymes in native whole skin were reflected in the in vitro models. Some XMEs in whole skin were not detected in in vitro models and vice versa, and some major hepatic XMEs such as cytochrome P450-monooxygenases were absent or measured only at very low levels in the skin. Conversely, despite varying mRNA and protein levels of phase II enzymes, functional activity of glutathione S-transferases, N-acetyltransferase 1, and UDP-glucuronosyltransferases were all readily measurable in whole skin and in vitro skin models at activity levels similar to those measured in the liver. These projects have enabled a better understanding of the contribution of XMEs to toxicity endpoints. PMID:23539547
BL Herculis stars - Theoretical models for field variables
NASA Technical Reports Server (NTRS)
Carson, R.; Stothers, R.
1982-01-01
Type II Cepheids with periods between 1 and 3 days, commonly designated as BL Herculis stars, have been modeled here with the aim of interpreting the wide variety of light curves observed among the field variables. Previously modeled globular cluster members are used as standard calibration objects. The major finding is that only a small range of luminosities is capable of generating a large variety of light curve types at a given period. For a mass of approximately 0.60 solar mass, the models are able to reproduce the observed mean luminosities, dispersion of mean luminosities, periods, light amplitudes, light asymmetries, and phases of secondary features in the light curves of known BL Her stars. It is possible that the metal-rich variables (which are found only in the field) have luminosities lower than those of most metal-poor variables. The present revised mass for BL Her, a metal-rich object, is not significantly different from the mean mass of the metal-poor variables.
Mathematical, Theoretical and Phenomenological Challenges Beyond the Standard Model
NASA Astrophysics Data System (ADS)
Djordjević, G.; Nešić, L.; Wess, Julius
2005-03-01
Integrable structures in the gauge/string correspondence -- Fluxes in M-theory on 7-manifolds: G2-, SU(3)- and SU(2)-structures -- Noncommutative quantum field theory: review and its latest achievements -- Shadows of quantum black holes -- Yukawa quasi-unification and inflation -- Supersymmetric grand unification: the quest for the theory -- Spin foam models of quantum gravity -- Riemann-Cartan space-time in stringy geometry -- Can black holes relax unitarily? -- Deformed coordinate spaces: derivatives -- Deformed coherent state solution to multiparticle stochastic processes -- Non-commutative GUTs, standard model and C, P, T properties from Seiberg-Witten map -- Seesaw, SUSY and SO(10) -- On the dynamics of BMN operators of finite size and the , model of string bits -- Divergencies in θ-expanded noncommutative SU(2) Yang-Mills theory -- Heterotic string compactifications with fluxes -- Symmetries and supersymmetries of the Dirac-type operators on Euclidean Taub-NUT space -- Real and p-adic aspects of quantization of tachyons -- Skew-symmetric Lax polynomial matrices and integrable rigid body systems -- Supersymmetric quantum field theories -- Parastatistics algebras and combinatorics -- Noncommutative D-branes on group manifolds -- High-energy bounds on the scattering amplitude in noncommutative quantum field theory -- Many faces of D-branes: from flat space, via AdS to pp-waves.
A dynamic game-theoretic model of parental care.
Mcnamara, J M; Székely, T; Webb, J N; Houston, A I
2000-08-21
We present a model in which members of a mated pair decide whether to care for their offspring or desert them. There is a breeding season of finite length during which it is possible to produce and raise several batches of offspring. On deserting its offspring, an individual can search for a new mate. The probability of finding a mate depends on the number of individuals of each sex that are searching, which in turn depends upon the previous care and desertion decisions of all population members. We find the evolutionarily stable pattern of care over the breeding season. The feedback between behaviour and mating opportunity can result in a pattern of stable oscillations between different forms of care over the breeding season. Oscillations can also arise because the best thing for an individual to do at a particular time in the season depends on future behaviour of all population members. In the baseline model, a pair splits up after a breeding attempt, even if they both care for the offspring. In a version of the model in which a pair stays together if they both care, the feedback between behaviour and mating opportunity can lead to more than one evolutionarily stable form of care. PMID:10931755
Modeling postpartum depression in rats: theoretic and methodological issues.
Li, Ming; Chou, Shinn-Yi
2016-07-18
The postpartum period is when a host of changes occur at molecular, cellular, physiological and behavioral levels to prepare female humans for the challenge of maternity. Alteration or prevention of these normal adaptions is thought to contribute to disruptions of emotion regulation, motivation and cognitive abilities that underlie postpartum mental disorders, such as postpartum depression. Despite the high incidence of this disorder, and the detrimental consequences for both mother and child, its etiology and related neurobiological mechanisms remain poorly understood, partially due to the lack of appropriate animal models. In recent decades, there have been a number of attempts to model postpartum depression disorder in rats. In the present review, we first describe clinical symptoms of postpartum depression and discuss known risk factors, including both genetic and environmental factors. Thereafter, we discuss various rat models that have been developed to capture various aspects of this disorder and knowledge gained from such attempts. In doing so, we focus on the theories behind each attempt and the methods used to achieve their goals. Finally, we point out several understudied areas in this field and make suggestions for future directions. PMID:27469254
A Theoretical Model for the Associative Nature of Conference Participation
Smiljanić, Jelena; Chatterjee, Arnab; Kauppinen, Tomi; Mitrović Dankulov, Marija
2016-01-01
Participation in conferences is an important part of every scientific career. Conferences provide an opportunity for a fast dissemination of latest results, discussion and exchange of ideas, and broadening of scientists’ collaboration network. The decision to participate in a conference depends on several factors like the location, cost, popularity of keynote speakers, and the scientist’s association with the community. Here we discuss and formulate the problem of discovering how a scientist’s previous participation affects her/his future participations in the same conference series. We develop a stochastic model to examine scientists’ participation patterns in conferences and compare our model with data from six conferences across various scientific fields and communities. Our model shows that the probability for a scientist to participate in a given conference series strongly depends on the balance between the number of participations and non-participations during his/her early connections with the community. An active participation in a conference series strengthens the scientist’s association with that particular conference community and thus increases the probability of future participations. PMID:26859404
NASA Technical Reports Server (NTRS)
Raj, S. V.
2011-01-01
Establishing the geometry of foam cells is useful in developing microstructure-based acoustic and structural models. Since experimental data on the geometry of foam cells are limited, most modeling efforts use an idealized three-dimensional, space-filling Kelvin tetrakaidecahedron. The validity of this assumption is investigated in the present paper. Several FeCrAlY foams with relative densities varying between 3 and 15 percent and cell densities varying between 0.2 and 3.9 cells per mm (c.p.mm.) were microstructurally evaluated. The number of edges per face for each foam specimen was counted by approximating the cell faces by regular polygons, with the number of cell faces measured varying between 207 and 745. The present observations revealed that 50 to 57 percent of the cell faces were pentagonal, while 24 to 28 percent were quadrilateral and 15 to 22 percent were hexagonal. The present measurements are shown to be in excellent agreement with literature data. It is demonstrated that the Kelvin model, as well as other proposed theoretical models, cannot accurately describe the FeCrAlY foam cell structure. Instead, it is suggested that the ideal foam cell geometry consists of 11 faces, with three quadrilateral, six pentagonal and two hexagonal faces, consistent with the 3-6-2 Matzke cell. A compilation of 90 years of experimental data reveals that the average number of cell faces decreases linearly with the increasing ratio of quadrilateral to pentagonal faces. It is concluded that the Kelvin model is not supported by these experimental data.
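The 3-6-2 Matzke face mix quoted above can be checked with a few lines of arithmetic: three quadrilaterals, six pentagons and two hexagons give eleven faces, roughly 55% of them pentagonal, which sits inside the 50-57 percent range measured for the FeCrAlY foams.

```python
# face mix of the 3-6-2 Matzke cell: 3 quadrilaterals, 6 pentagons, 2 hexagons
faces = {4: 3, 5: 6, 6: 2}
n_faces = sum(faces.values())                               # 11 faces in total
avg_edges = sum(k * v for k, v in faces.items()) / n_faces  # mean edges per face
pct_pentagonal = 100.0 * faces[5] / n_faces                 # pentagonal share, %
```

The mean of 54/11 ≈ 4.9 edges per face also agrees with the polygon counts reported for the specimens (pentagons dominating, flanked by quadrilaterals and hexagons).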
Geo-accurate model extraction from three-dimensional image-derived point clouds
NASA Astrophysics Data System (ADS)
Nilosek, David; Sun, Shaohui; Salvaggio, Carl
2012-06-01
A methodology is proposed for automatically extracting primitive models of buildings in a scene from a three-dimensional point cloud derived from multi-view depth extraction techniques. By exploring the information provided by the two-dimensional images and the three-dimensional point cloud and the relationship between the two, automated methods for extraction are presented. Using the inertial measurement unit (IMU) and global positioning system (GPS) data that accompanies the aerial imagery, the geometry is derived in a world-coordinate system so the model can be used with GIS software. This work uses imagery collected by the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory's WASP sensor platform. The data used was collected over downtown Rochester, New York. Multiple target buildings have their primitive three-dimensional model geometry extracted using modern point-cloud processing techniques.
Vavalle, Nicholas A; Moreno, Daniel P; Rhyne, Ashley C; Stitzel, Joel D; Gayzik, F Scott
2013-03-01
This study presents four validation cases of a mid-sized male (M50) full human body finite element model: two lateral sled tests at 6.7 m/s, one sled test at 8.9 m/s, and a lateral drop test. Model results were compared to transient force curves, peak force, chest compression, and the number of fractures from the studies. For one of the 6.7 m/s impacts (flat wall impact), the peak thoracic, abdominal and pelvic loads were 8.7, 3.1 and 14.9 kN for the model and 5.2 ± 1.1 kN, 3.1 ± 1.1 kN, and 6.3 ± 2.3 kN for the tests. For the same test setup in the 8.9 m/s case, they were 12.6, 6, and 21.9 kN for the model and 9.1 ± 1.5 kN, 4.9 ± 1.1 kN, and 17.4 ± 6.8 kN for the experiments. The combined torso load and the pelvis load simulated in a second rigid wall impact at 6.7 m/s were 11.4 and 15.6 kN, respectively, compared to 8.5 ± 0.2 kN and 8.3 ± 1.8 kN experimentally. The peak thorax load in the drop test was 6.7 kN for the model, within the range in the cadavers, 5.8-7.4 kN. When analyzing rib fractures, the model predicted Abbreviated Injury Scale scores within the reported range in three of four cases. Objective comparison methods were used to quantitatively compare the model results to the literature studies. The results show a good match in the thorax and abdomen regions, while the pelvis results overpredicted the reaction loads from the literature studies. These results are an important milestone in the development and validation of this globally developed average male FEA model in lateral impact.
Theoretical models for the emergence of biomolecular homochirality
NASA Astrophysics Data System (ADS)
Walker, Sara Imari
Little is known about the emergence of life from nonliving precursors. A key missing piece is the origin of homochirality: nearly all life is characterized by exclusively dextrorotatory sugars and levorotatory amino acids. The research presented in this thesis addresses the challenge of uncovering mechanisms for chiral symmetry breaking in a prebiotic environment and implications for the origin of life on Earth. Expanding on a well-known model for chiral selection through polymerization, and modeling the spatiotemporal dynamics starting from near-racemic initial conditions, it is demonstrated that the net chirality of molecular building blocks grows with the longest polymer in the reaction network (of length N), with critical behavior for the onset of chiral asymmetry determined by the value of N. This surprising result indicates that significant chiral asymmetry occurs only for systems which permit growth of long polymers. Expanding on this work, the effects of environmental disturbances on the evolution of chirality in prebiotic reaction-diffusion networks are studied via the implementation of a stochastic spatiotemporal Langevin equation. The results show that environmental interactions can have significant impact on the evolution of prebiotic chirality: the history of prebiotic chirality is therefore interwoven with the Earth's early environmental history in a mechanism we call punctuated chirality. This result establishes that the onset of homochirality is not an isolated phenomenon: chiral selection must occur in tandem with the transition from chemistry to biology, otherwise the prebiotic soup is unstable to environmental events. Addressing the challenge of understanding the role of chirality in the transition from non-life to life, the diffusive slowdown of reaction networks induced, for example, through tidal cycles or evaporating pools, is modeled. The results of this study demonstrate that such diffusive slowdown leads to the stabilization of homochiral
GSTARS computer models and their applications, part I: theoretical development
Yang, C.T.; Simoes, F.J.M.
2008-01-01
GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with a graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
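The Lorentz-Lorenz relation underlying the model, (n² − 1)/(n² + 2) = (4π/3)·N·α, can be inverted for the refractive index directly once the polarizability α and number density N are known. The sketch below uses made-up (hypothetical) repeat-unit values, not numbers from the study.

```python
import math

def refractive_index(polarizability_A3, number_density_per_A3):
    """Invert the Lorentz-Lorenz relation for n.
    (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha, with alpha the molecular
    polarizability (A^3) and N the number density (molecules per A^3),
    so the product is dimensionless."""
    L = (4.0 * math.pi / 3.0) * number_density_per_A3 * polarizability_A3
    return math.sqrt((1.0 + 2.0 * L) / (1.0 - L))

# illustrative (hypothetical) values for a polymer repeat unit
n = refractive_index(polarizability_A3=10.0, number_density_per_A3=0.008)
```

Since n grows monotonically with the N·α product, the two quantities the abstract targets (quantum-chemical polarizabilities and machine-learned packing fractions) are exactly the levers a high-RI polymer design can pull.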
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high fidelity dynamics model that might include perturbing forces, such as the gravitational effect from multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for computing these STMs numerically for both a high fidelity force model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time of flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
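The core idea of numerically propagating an STM can be sketched for the simplest case. The following is a minimal two-body illustration (not the paper's high fidelity or ephemeris dynamics), integrating the 6x6 variational equations alongside the state with SciPy's DOP853, an eighth-order Dormand-Prince method like the one the paper describes; the gravitational parameter and orbit are arbitrary example values.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418  # km^3/s^2, Earth; illustrative two-body case only

def eom_with_stm(t, y):
    """Two-body equations of motion augmented with the STM:
    dPhi/dt = A(t) Phi, with A built from the gravity gradient."""
    r, v = y[:3], y[3:6]
    phi = y[6:].reshape(6, 6)
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3
    # Gravity gradient d(accel)/d(r) for the point-mass field
    G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3:, :3] = G
    return np.concatenate([v, a, (A @ phi).ravel()])

def propagate(r0, v0, tof):
    """Propagate state and STM over a time of flight tof (seconds)."""
    y0 = np.concatenate([r0, v0, np.eye(6).ravel()])
    sol = solve_ivp(eom_with_stm, (0.0, tof), y0, method="DOP853",
                    rtol=1e-11, atol=1e-11)
    yf = sol.y[:, -1]
    return yf[:6], yf[6:].reshape(6, 6)
```

A quick consistency check is to compare an STM column against central finite differences of the final state; the STM version avoids the step-size tuning and repeated propagations that make finite differencing costly inside an optimizer.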
Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2013-01-01
The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut- or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. These data are filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally, the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD solution.
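The POD step described above (snapshot matrix to reduced basis) can be sketched with a plain SVD. This is a generic stand-in, not the paper's stochastic SVD algorithm, and the energy threshold is an assumed example value:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD of a snapshot matrix (rows: spatial points, columns: time samples)
    via SVD of the mean-subtracted data. Returns the temporal mean, the
    retained spatial modes, and their singular values. Generic sketch; the
    paper uses a stochastic SVD on CFD surface-pressure snapshots."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    # Keep the smallest rank capturing the requested fraction of energy
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy) + 1)
    return mean, U[:, :r], s[:r]
```

Projecting each snapshot onto the retained modes gives the low-dimensional coefficients that the convolution-integral gust model then operates on.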
ERIC Educational Resources Information Center
Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.
2012-01-01
The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
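The key idea of "include past values of the output" can be illustrated with the simplest member of this model family, a linear ARX fit by least squares. This is only a sketch of the autoregressive structure, not the paper's Laguerre-expanded nonlinear ARMA estimator; model orders and coefficients below are arbitrary examples.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of the linear ARX model
        y[t] = sum_i a_i * y[t-i] + sum_j b_j * u[t-j]
    i.e. a moving-average-on-input model augmented with past outputs,
    which is what shrinks the number of basis functions needed."""
    m = max(na, nb)
    rows, targets = [], []
    for t in range(m, len(y)):
        rows.append([y[t - i] for i in range(1, na + 1)] +
                    [u[t - j] for j in range(1, nb + 1)])
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]  # AR coefficients, input coefficients
```

On noiseless simulated data the fit recovers the generating coefficients essentially exactly, which makes it a convenient sanity check before moving to the nonlinear, Laguerre-basis version.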
Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
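The asymmetric Gaussian used above to fit the intrinsic detector response can take several forms; the abstract does not give the exact parameterization, so the following uses one common two-sided variant (different widths on each side of the peak) purely as an illustration:

```python
import numpy as np

def asymmetric_gaussian(x, x0, sigma_l, sigma_r, amp=1.0):
    """Two-sided Gaussian: width sigma_l left of the peak x0, sigma_r to the
    right. One common parameterization of an 'asymmetric Gaussian'; the
    paper's exact functional form is not specified in the abstract."""
    x = np.asarray(x, dtype=float)
    sigma = np.where(x < x0, sigma_l, sigma_r)
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)
```

The function is continuous at the peak and drops to exp(-1/2) of the amplitude at one (side-specific) sigma from the center, which is what a fit to a skewed detector response exploits.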
Theoretical model of a piezoelectric composite spinal fusion interbody implant.
Tobaben, Nicholas E; Domann, John P; Arnold, Paul M; Friis, Elizabeth A
2014-04-01
Failure rates of spinal fusion are high in smokers and diabetics. The authors are investigating the development of a piezoelectric composite biomaterial and interbody device design that could generate clinically relevant levels of electrical stimulation to help improve the rate of fusion for these patients. A lumped parameter model of the piezoelectric composite implant was developed based on a model that has been utilized to successfully predict power generation for piezoceramics. Seven variables (fiber material, matrix material, fiber volume fraction, fiber aspect ratio, implant cross-sectional area, implant thickness, and electrical load resistance) were parametrically analyzed to determine their effects on power generation within reasonable implant constraints. Influences of implant geometry and fiber aspect ratio were independent of material parameters. For a cyclic force of constant magnitude, implant thickness was directly and cross-sectional area inversely proportional to power generation potential. Fiber aspect ratios above 30 yielded maximum power generation potential while volume fractions above 15% showed superior performance. This investigation demonstrates the feasibility of using composite piezoelectric biomaterials in medical implants to generate therapeutic levels of direct current electrical stimulation. The piezoelectric spinal fusion interbody implant shows promise for helping increase success rates of spinal fusion.
Theoretical Modeling of Various Spectroscopies for Cuprates and Topological Insulators
NASA Astrophysics Data System (ADS)
Basak, Susmita
Spectroscopies resolved highly in momentum, energy and/or spatial dimensions are playing an important role in unraveling key properties of wide classes of novel materials. However, spectroscopies do not usually provide a direct map of the underlying electronic spectrum, but act as a complex 'filter' to produce a 'mapping' of the underlying energy levels, Fermi surfaces (FSs) and excitation spectra. The connection between the electronic spectrum and the measured spectra is described as a generalized 'matrix element effect'. The nature of the matrix element involved differs greatly between different spectroscopies. For example, in angle-resolved photoemission (ARPES) an incoming photon knocks out an electron from the sample and the energy and momentum of the photoemitted electron is measured. This is quite different from what happens in K-edge resonant inelastic X-ray scattering (RIXS), where an X-ray photon is scattered after inducing electronic transitions near the Fermi energy through an indirect second order process, or in Compton scattering where the incident X-ray photon is scattered inelastically from an electron transferring energy and momentum to the scattering electron. For any given spectroscopy, the matrix element is, in general, a complex function of the phase space of the experiment, e.g. energy/polarization of the incoming photon and the energy/momentum/spin of the photoemitted electron in the case of ARPES. The matrix element can enhance or suppress signals from specific states, or merge signals of groups of states, making a good understanding of the matrix element effects important for not only a robust interpretation of the spectra, but also for ascertaining optimal regions of the experimental phase space for zooming in on states of the greatest interest. In this thesis I discuss a comprehensive scheme for modeling various highly resolved spectroscopies of the cuprates and topological insulators (TIs) where effects of matrix element, crystal
Polarimetric signatures of sea ice. 1: Theoretical model
NASA Technical Reports Server (NTRS)
Nghiem, S. V.; Kwok, R.; Yueh, S. H.; Drinkwater, M. R.
1995-01-01
Physical, structural, and electromagnetic properties and interrelating processes in sea ice are used to develop a composite model for polarimetric backscattering signatures of sea ice. Physical properties of sea ice constituents such as ice, brine, air, and salt are presented in terms of their effects on electromagnetic wave interactions. Sea ice structure and geometry of scatterers are related to wave propagation, attenuation, and scattering. Temperature and salinity, which are determining factors for the thermodynamic phase distribution in sea ice, are consistently used to derive both effective permittivities and polarimetric scattering coefficients. Polarimetric signatures of sea ice depend on crystal sizes and brine volumes, which are affected by ice growth rates. Desalination by brine expulsion, drainage, or other mechanisms modifies wave penetration and scattering. Sea ice signatures are further complicated by surface conditions such as rough interfaces, hummocks, snow cover, brine skim, or slush layer. Based on the same set of geophysical parameters characterizing sea ice, a composite model is developed to calculate effective permittivities and backscattering covariance matrices at microwave frequencies for interpretation of sea ice polarimetric signatures.
Sapsis, Themistoklis P.; Majda, Andrew J.
2013-01-01
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
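The 40-mode Lorenz 96 test case mentioned above takes only a few lines to set up; this is just the standard model with an RK4 stepper (forcing F = 8 is the usual chaotic regime), not the ROMQG reduction itself:

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Tendency of the Lorenz 96 model with cyclic indices:
        dx_k/dt = (x_{k+1} - x_{k-2}) * x_{k-1} - x_k + F
    The quadratic term conserves energy, exactly the structure the
    ROMQG framework exploits (unstable modes fed, stable modes damped)."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt=0.01, forcing=8.0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

A useful check is that the nonlinear term is orthogonal to the state (sum over k of x_k times the quadratic term vanishes identically), confirming it only transfers energy between modes rather than creating or destroying it.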
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-01-01
Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered as the most climate dependent human activity. In West Africa and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analyses to quantify the impact on yields of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - (ii) in early warning systems, and (iii) to assess future food security. Yet, the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is thus a key factor for understanding and anticipating climate impacts on agriculture via crop modelling and yield estimations. Different kinds of precipitation estimations can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delay in reporting time, and they have limited availability. An answer to these shortcomings may lie in the field of remote sensing that provides satellite-based precipitation estimations. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as an input for crop models, it determines the performance of the simulated yield, hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
Theoretical model for assessing properties of local structures in metalloprotein
NASA Astrophysics Data System (ADS)
Koyimatu, M.; Shimahara, H.; Iwayama, M.; Sugimori, K.; Kawaguchi, K.; Saito, H.; Nagao, H.
2013-02-01
For model structures containing two aromatic rings such as the indole of Trp5 and the imidazole of His64 in human carbonic anhydrase (hCAII), the location and orientation of the rings with regard to each other contribute to the magnitude of the entire interaction energy. Here the energetic contribution of the indole ring of Trp5 on the imidazole ring of the "out" conformation of His64 was calculated for comparison with that of the alternative "in" conformation of His64 by using the MP2/6-311++G(d,p)//B3LYP/6-31G(d,p) method. We suggest that 1) Trp5 and the "out" conformation of His64 are predicted to form a stack of planar parallel rings via π-stacking interaction and 2) the energy is 1.73-1.83 kcal/mol to stabilize the "out" conformation, compared with the "in" conformation.
[A theoretical model of the transition phase in human locomotion].
Beuter, A; Lefebvre, R
1988-12-01
In this study we examine the bifurcation of the transition between walking and running. Beuter and Lalonde (1986) have conjectured that the pertinent parameters separating walking and running can be described by a cusp singularity (Thom, 1972). In this model, the unidimensional state space is characterized by support duration and the bidimensional parameter space is characterized by the subject's weight and speed. To test this model eight males walked and ran on a motor driven treadmill at an increasing or decreasing speed with or without additional loads corresponding to 0%, 7% and 14% of their body weight. Velocities corresponding to transitions between the two modes of locomotion indicate that on the average the walk-run transition occurs at higher speed than the run-walk transition, illustrating a hysteresis effect. In addition, the average difference between the transitions decreases as the load increases [mean 0% = 0.235 ± 0.09 m/s, mean 7% = 0.104 ± 0.07 m/s and mean 14% = 0.041 ± 0.06 m/s], corresponding to an F ratio of F = 2.72, 0.05 < p < 0.1. A comparison of the differences in transition velocity at 0% and 14% is statistically different (t = 2.8, p < 0.025). These results tend to support the existence of an elementary cusp singularity separating the two locomotion modes and suggest that the mechanisms controlling these transitions can be described by a hysteresis cycle and a small number of parameters. PMID:3219673
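The cusp singularity invoked above has a standard canonical form whose bistability produces exactly the hysteresis pattern reported. The sketch below uses the textbook potential with abstract control parameters a and b (in the paper these would map to weight and speed; that mapping is not reproduced here):

```python
import numpy as np

def cusp_equilibria(a, b):
    """Real equilibria of the canonical cusp potential
        V(x) = x**4/4 + a*x**2/2 + b*x,
    i.e. real roots of V'(x) = x**3 + a*x + b = 0. Three equilibria
    (two stable, one unstable) exist inside the bistable region
    4*a**3 + 27*b**2 < 0, which is where hysteresis occurs."""
    roots = np.roots([1.0, 0.0, a, b])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)
```

Sweeping b up and then down while a < 0 makes the system jump between the two stable branches at different b values, which is the qualitative signature (walk-run vs run-walk transition speeds differing) that the treadmill data exhibit.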
Fast and accurate low-dimensional reduction of biophysically detailed neuron models.
Marasco, Addolorata; Limongiello, Alessandro; Migliore, Michele
2012-01-01
Realistic models of neurons are quite successful in complementing traditional experimental techniques. However, their networks require a computational power beyond the capabilities of current supercomputers, and the methods used so far to reduce their complexity do not take into account the key features of the cells nor critical physiological properties. Here we introduce a new, automatic and fast method to map realistic neurons into equivalent reduced models running up to > 40 times faster while maintaining a very high accuracy of the membrane potential dynamics during synaptic inputs, and a direct link with experimental observables. The mapping of arbitrary sets of synaptic inputs, without additional fine tuning, would also allow the convenient and efficient implementation of a new generation of large-scale simulations of brain regions reproducing the biological variability observed in real neurons, with unprecedented advances to understand higher brain functions. PMID:23226594
An accurate in vitro model of the E. coli envelope.
Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H
2015-10-01
Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir-Blodgett and Langmuir-Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
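The target-cell-limited model whose two-phase behavior is analyzed above is a small ODE system and is easy to simulate directly. The parameter magnitudes below are illustrative round numbers loosely in the range reported for influenza A, not the paper's fitted per-patient values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def target_cell_limited(t, y, beta, delta, p, c):
    """Standard target-cell-limited viral dynamics model:
        dT/dt = -beta*T*V          (target cells infected)
        dI/dt =  beta*T*V - delta*I (infected cells die at rate delta)
        dV/dt =  p*I - c*V          (virus produced and cleared)"""
    T, I, V = y
    return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

def simulate(beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0,
             T0=4e8, V0=1.0, days=10.0):
    """Simulate an acute infection. Parameter values are illustrative
    placeholders (units: per day, cells, virions), not fitted values."""
    sol = solve_ivp(target_cell_limited, (0.0, days), [T0, 0.0, V0],
                    args=(beta, delta, p, c), method="LSODA",
                    dense_output=True, rtol=1e-8, atol=1e-8)
    return sol
```

On a log scale the simulated virus curve shows the near-linear exponential growth phase, an interior peak, and a near-linear decay phase, which is precisely the structure the two-phase approximation exploits.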
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data
Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot reliably trigger an earlier drug intake that would be effective in neutralizing the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
Horner, Marc; Muralikrishnan, R.
2010-01-01
Purpose A computational fluid dynamics (CFD) study examined the impact of particle size on dissolution rate and residence time of intravitreal suspension depots of Triamcinolone Acetonide (TAC). Methods A model for the rabbit eye was constructed using insights from high-resolution NMR imaging studies (Sawada 2002). The current model was compared to other published simulations in its ability to predict clearance of various intravitreally injected materials. Suspension depots were constructed explicitly rendering individual particles in various configurations: 4 or 16 mg drug confined to a 100 μL spherical depot, or 4 mg exploded to fill the entire vitreous. Particle size was reduced systematically in each configuration. The convective diffusion/dissolution process was simulated using a multiphase model. Results Release rate became independent of particle diameter below a certain value. The size-independent limits occurred for particle diameters ranging from 77 to 428 μm depending upon the depot configuration. Residence time predicted for the spherical depots in the size-independent limit was comparable to that observed in vivo. Conclusions Since the size-independent limit was several-fold greater than the particle size of commercially available pharmaceutical TAC suspensions, differences in particle size amongst such products are predicted to be immaterial to their duration or performance. PMID:20467888
Mathematical model accurately predicts protein release from an affinity-based delivery system.
Vulic, Katarina; Pakulska, Malgosia M; Sonthalia, Rohit; Ramachandran, Arun; Shoichet, Molly S
2015-01-10
Affinity-based controlled release modulates the delivery of protein or small molecule therapeutics through transient dissociation/association. To understand which parameters can be used to tune release, we used a mathematical model based on simple binding kinetics. A comprehensive asymptotic analysis revealed three characteristic regimes for therapeutic release from affinity-based systems. These regimes can be controlled by diffusion or unbinding kinetics, and can exhibit release over either a single stage or two stages. This analysis fundamentally changes the way we think of controlling release from affinity-based systems and thereby explains some of the discrepancies in the literature on which parameters influence affinity-based release. The rate of protein release from affinity-based systems is determined by the balance of diffusion of the therapeutic agent through the hydrogel and the dissociation kinetics of the affinity pair. Equations for tuning protein release rate by altering the strength (KD) of the affinity interaction, the concentration of binding ligand in the system, the rate of dissociation (koff) of the complex, and the hydrogel size and geometry, are provided. We validated our model by collapsing the model simulations and the experimental data from a recently described affinity release system, to a single master curve. Importantly, this mathematical analysis can be applied to any single species affinity-based system to determine the parameters required for a desired release profile. PMID:25449806
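The balance between dissociation kinetics and transport that governs release can be illustrated with a lumped two-state sketch. This is not the paper's spatial diffusion-reaction model: the rate constants are arbitrary example values, the ligand pool is treated as constant and immobile, and diffusion out of the hydrogel is collapsed into a single first-order rate kdiff.

```python
import numpy as np
from scipy.integrate import solve_ivp

def affinity_release(t, y, kon, koff, kdiff, L):
    """Free protein P escapes at rate kdiff and exchanges with the bound
    complex C on an immobile ligand pool of concentration L:
        dP/dt = koff*C - kon*L*P - kdiff*P
        dC/dt = kon*L*P - koff*C
    A lumped sketch of the dissociation/diffusion balance; the paper's
    analysis treats the full spatial problem."""
    P, C = y
    return [koff * C - kon * L * P - kdiff * P, kon * L * P - koff * C]

def fraction_released(kon, koff, kdiff, L, t_end, C0=1.0):
    """Start fully bound (C0) and return the released fraction at t_end."""
    sol = solve_ivp(affinity_release, (0.0, t_end), [0.0, C0],
                    args=(kon, koff, kdiff, L), rtol=1e-9, atol=1e-12)
    P, C = sol.y[:, -1]
    return 1.0 - (P + C) / C0
```

Consistent with the paper's conclusions, tightening the affinity (raising the effective kon*L relative to koff, i.e. lowering KD) or raising the ligand concentration slows release, while in the fast-unbinding limit release becomes transport-controlled.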
Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics
NASA Astrophysics Data System (ADS)
Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.
2014-12-01
The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources. With precise power forecasts, well-timed energy trading on the stock market and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, wind and photovoltaic (PV) power forecasts shall be improved by combining optimized NWP and enhanced power forecast models. The work conducted focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events like Saharan dust over Germany and the solar eclipse in 2015 are treated, and their effect on solar power is accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.
NASA Astrophysics Data System (ADS)
Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua
2015-05-01
Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. The computational fluid dynamics (CFD) method has been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for the hemodynamics under steady or pulsatile inlet conditions, respectively, employing CFD based on the finite volume method. The results showed that the non-Newtonian blood model decreased the area of low wall shear stress (WSS) compared with the Newtonian model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and blood models are both important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.
Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum
NASA Astrophysics Data System (ADS)
Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.
2013-02-01
Besides the demonstration of findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made using a structured-light scanner consisting of two machine vision cameras for the determination of the geometry of the object, a high-resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial software and in-house software developed for the automation of various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, while the use of various software packages requires the services of a specialist.
Theoretical conditions for the stationary reproduction of model protocells.
Mavelli, Fabio; Ruiz-Mirazo, Kepa
2013-02-01
In previous works we have explored the dynamics of chemically reacting proto-cellular systems, under different experimental conditions and kinetic parameters, by means of our stochastic simulation platform 'ENVIRONMENT'. In this paper we turn the question around: accepting some broad modeling assumptions, we investigate the conditions under which simple protocells will spontaneously settle into a stationary reproducing regime, characterized by a regular growth/division cycle and the maintenance of a certain standard size and chemical composition across generations. In the first part, starting from purely geometric considerations, the condition for stationary reproduction of a protocell is expressed in terms of a growth control coefficient (γ). Then, an explicit relationship, the osmotic synchronization condition, is analytically derived under a set of kinetic simplifications, taking into account the osmotic pressure balance operating across the protocell membrane. In the second part of the paper, this general condition, which constrains different molecular/kinetic parameters and features of the system (reaction rates, permeability coefficients, metabolite concentrations, system volume), is applied to different cases of self-producing vesicles, predicting the stationary protocell size or lifetime. Finally, in order to test the validity of our analytic results and predictions, the case study is contrasted with data obtained through both stochastic and deterministic computational algorithms. PMID:23233152
TURBULENT CONVECTION MODEL IN THE OVERSHOOTING REGION. II. THEORETICAL ANALYSIS
Zhang, Q. S.; Li, Y. E-mail: ly@ynao.ac.cn
2012-05-01
Turbulent convection models (TCMs) are thought to be good tools to deal with convective overshooting in the stellar interior. However, they are too complex to be applied to calculations of stellar structure and evolution. In order to understand the physical processes of convective overshooting and to simplify the application of TCMs, a semi-analytic solution is necessary. We obtain the approximate solution and asymptotic solution of the TCM in the overshooting region, and find some important properties of the convective overshooting. (1) The overshooting region can be partitioned into three parts: a thin region just outside the convective boundary with high efficiency of turbulent heat transfer, a power-law dissipation region of turbulent kinetic energy in the middle, and a thermal dissipation area with rapidly decreasing turbulent kinetic energy. The decaying indices of the turbulent correlations k, u_r'T'-bar, and T'T'-bar are determined only by the parameters of the TCM, and there is an equilibrium value of the anisotropic degree ω. (2) The overshooting length of the turbulent heat flux u_r'T'-bar is about 1 H_k (H_k = |dr/d ln k|). (3) The value of the turbulent kinetic energy at the convective boundary, k_C, can be estimated by a method called the maximum of diffusion. Turbulent correlations in the overshooting region can be estimated by using k_C and exponentially decreasing functions with the decaying indices.
A Measurement-Theoretic Analysis of the Fuzzy Logic Model of Perception.
ERIC Educational Resources Information Center
Crowther, Court S.; And Others
1995-01-01
The fuzzy logic model of perception (FLMP) is analyzed from a measurement-theoretic perspective. The choice rule of FLMP is shown to be equivalent to a version of the Rasch model. In fact, FLMP can be reparameterized as a simple two-category logit model. (SLD)
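The stated equivalence between the FLMP choice rule and a two-category logit model is a short algebraic identity, easy to verify numerically (a minimal sketch; function names are ours):

```python
import math

def flmp_choice(a, b):
    """FLMP choice rule: fuzzy truth values a, b in (0,1) for two information
    sources are multiplied and normalized against the complementary option."""
    return (a * b) / (a * b + (1 - a) * (1 - b))

def logit(p):
    return math.log(p / (1 - p))

def rasch_form(a, b):
    """Rasch-type reparameterization: the same choice probability as a
    logistic function of the summed logits of the two truth values."""
    z = logit(a) + logit(b)
    return 1 / (1 + math.exp(-z))
```

Because a*b / (a*b + (1-a)(1-b)) = 1 / (1 + exp(-(logit(a) + logit(b)))), the two functions agree exactly for any a, b in (0, 1).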
Development of a Godunov-type model for the accurate simulation of dispersion dominated waves
NASA Astrophysics Data System (ADS)
Bradford, Scott F.
2016-10-01
A new numerical model based on the Navier-Stokes equations is presented for the simulation of dispersion dominated waves. The equations are solved by splitting the pressure into hydrostatic and non-hydrostatic components. The Godunov approach is utilized to solve the hydrostatic flow equations and the resulting velocity field is then corrected to be divergence free. Alternative techniques for the time integration of the non-hydrostatic pressure gradients are presented and investigated in order to improve the accuracy of dispersion dominated wave simulations. Numerical predictions are compared with analytical solutions and experimental data for test cases involving standing, shoaling, refracting, and breaking waves.
Considering mask pellicle effect for more accurate OPC model at 45nm technology node
NASA Astrophysics Data System (ADS)
Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo
2008-11-01
The 45 nm technology node is the first generation of immersion microlithography. The brand-new lithography tool means that many optical effects which could be ignored at the 90 nm and 65 nm nodes now have a significant impact on the pattern transfer process from design to silicon. Among these effects, one that needs attention is the impact of the mask pellicle on critical dimension variation. With the implementation of hyper-NA lithography tools, the assumption that light transmits the mask pellicle vertically is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model, and we show that, given the extremely tight critical dimension control spec of the 45 nm node, taking the mask pellicle effect into the OPC model has become necessary.
Bardhan, Jaydeep P.; Jungwirth, Pavel; Makowski, Lee
2012-01-01
Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular “linear response” model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution). PMID:23020318
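The suggested improvement (affine response plus piecewise-linear response) can be written as a one-line model; the function below is an illustrative sketch with hypothetical parameter names, not a fitted implicit-solvent model:

```python
def reaction_potential(q, slope_pos, slope_neg, phi_static):
    """Affine + piecewise-linear response sketch: a constant offset from the
    static potential plus a proportionality constant that depends on the
    sign of the solute charge q (all parameter names are illustrative)."""
    slope = slope_pos if q >= 0 else slope_neg
    return phi_static + slope * q
```

The offset phi_static captures the static-potential contribution for buried charges; the sign-dependent slope captures the steric asymmetry near the solute-solvent interface.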
NASA Astrophysics Data System (ADS)
Thobel, J. L.; Baudry, L.; Dessenne, F.; Charef, M.; Fauquembergue, R.
1993-01-01
A theoretical investigation of the impurity scattering limited mobility in quantum wells is presented. Emphasis is put on the influence of wave-function modeling, since the literature about this topic is contradictory. For an infinite square well, Dirac and sine wave functions yield the same evolutions of the mobility with temperature, carrier density, and well width. These results contradict those published by Lee [J. Appl. Phys. 54, 6995 (1983)], which are shown to be wrong. Self-consistent wave functions have also been used to compute the mobility in finite barrier height quantum wells. A strong influence of the presence of electrons inside the doped barrier has been demonstrated. It is suggested that, although simple models are useful for qualitative discussions, accurate evaluation of mobility requires a reasonably realistic description of wave functions.
Hatcher, Elizabeth; Ishikita, Hiroshi; Skone, Jonathan H.; Soudackov, Alexander V.
2010-01-01
Theoretical studies of proton-coupled electron transfer (PCET) reactions for model systems provide insight into fundamental concepts relevant to bioenergetics. A dynamical theoretical formulation for vibronically nonadiabatic PCET reactions has been developed. This theory enables the calculation of rates and kinetic isotope effects, as well as the pH and temperature dependences, of PCET reactions. Methods for calculating the vibronic couplings for PCET systems have also been developed and implemented. These theoretical approaches have been applied to a wide range of PCET reactions, including tyrosyl radical generation in a tyrosine-bound rhenium polypyridyl complex, phenoxyl/phenol and benzyl/toluene self-exchange reactions, and hydrogen abstraction catalyzed by the enzyme lipoxygenase. These applications have elucidated some of the key underlying physical principles of PCET reactions. The tools and concepts derived from these theoretical studies provide the foundation for future theoretical studies of PCET in more complex bioenergetic systems such as Photosystem II. PMID:21057592
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results suitable interactive graphics is also an essential tool.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
NASA Astrophysics Data System (ADS)
Chien Chang, Jia-Ren; Tai, Cheng-Chi
2006-07-01
This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289, (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1BPM (beats per minute), from 20to176BPM, while the amplitude of the ECG signal can be set from 0.1to400mV with a 0.1mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
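The underlying McSharry et al. dynamical model drives a trajectory around a limit cycle and builds the ECG waveform from five Gaussian events (P, Q, R, S, T). Below is a minimal Euler-integration sketch of the original (unmodified) model, not the authors' modified version; the wave parameters are the standard values from the McSharry paper, and the step sizes are our choices:

```python
import math

# Standard PQRST parameters from McSharry et al. (2003)
THETA = [-math.pi / 3, -math.pi / 12, 0.0, math.pi / 12, math.pi / 2]  # wave angles
A     = [1.2, -5.0, 30.0, -7.5, 0.75]                                  # amplitudes
B     = [0.25, 0.1, 0.1, 0.1, 0.4]                                     # widths

def synth_ecg(duration=2.0, dt=0.002, omega=2 * math.pi):
    """Euler integration of the McSharry dynamical ECG model;
    omega = 2*pi rad/s corresponds to a 60 BPM heart rate."""
    x, y, z = 1.0, 0.0, 0.0
    out = []
    for _ in range(round(duration / dt)):
        alpha = 1.0 - math.hypot(x, y)      # pulls (x, y) onto the unit circle
        theta = math.atan2(y, x)            # phase along the limit cycle
        dz = -z                             # baseline relaxation (z0 = 0)
        for th_i, a_i, b_i in zip(THETA, A, B):
            dth = (theta - th_i + math.pi) % (2 * math.pi) - math.pi
            dz -= a_i * dth * math.exp(-dth * dth / (2 * b_i * b_i))
        x += (alpha * x - omega * y) * dt
        y += (alpha * y + omega * x) * dt
        z += dz * dt
        out.append(z)
    return out
```

Adjusting omega changes the heart rate, and scaling the amplitudes A rescales the output signal, which is how a generator of this kind exposes its user settings.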
Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong
2012-01-01
Deep displacement observation is one basic means of landslide dynamics study and early-warning monitoring, and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, the NIELA-based mutual inductance and the EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for the characterization of II-type sensors. The NIELA model is widely applicable to II-type sensors' monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467
Dorn, Jonas F.; Zhang, Li; Phi, Tan-Trao; Lacroix, Benjamin; Maddox, Paul S.; Liu, Jian; Maddox, Amy Shaub
2016-01-01
During cytokinesis, the cell undergoes a dramatic shape change as it divides into two daughter cells. Cell shape changes in cytokinesis are driven by a cortical ring rich in actin filaments and nonmuscle myosin II. The ring closes via actomyosin contraction coupled with actin depolymerization. Of interest, ring closure and hence the furrow ingression are nonconcentric (asymmetric) within the division plane across Metazoa. This nonconcentricity can occur and persist even without preexisting asymmetric cues, such as spindle placement or cellular adhesions. Cell-autonomous asymmetry is not explained by current models. We combined quantitative high-resolution live-cell microscopy with theoretical modeling to explore the mechanistic basis for asymmetric cytokinesis in the Caenorhabditis elegans zygote, with the goal of uncovering basic principles of ring closure. Our theoretical model suggests that feedback among membrane curvature, cytoskeletal alignment, and contractility is responsible for asymmetric cytokinetic furrowing. It also accurately predicts experimental perturbations of conserved ring proteins. The model further suggests that curvature-mediated filament alignment speeds up furrow closure while promoting energy efficiency. Collectively our work underscores the importance of membrane–cytoskeletal anchoring and suggests conserved molecular mechanisms for this activity. PMID:26912796
A beginner's guide to writing the nursing conceptual model-based theoretical rationale.
Gigliotti, Eileen; Manister, Nancy N
2012-10-01
Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for the placement of study variables, but moving from the very abstract concepts of a nursing model to the less abstract concepts of the study variables is difficult. Just as the five-paragraph essay is used by writing teachers to help beginning writers construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model, but Neuman's model was chosen here as the exemplar.
NASA Astrophysics Data System (ADS)
Tao, Jianmin; Rappe, Andrew M.
2016-01-01
Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
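For context, the simplest single-effective-frequency treatment of dynamic polarizability is the classic London formula for the leading dispersion coefficient C6; the paper's model for C8 and C10 is more elaborate, so the function below is only an illustrative sketch of the underlying idea:

```python
def london_c6(alpha_a, eta_a, alpha_b, eta_b):
    """London (single effective frequency) approximation to C6:
    C6 = (3/2) * alpha_A * alpha_B * eta_A * eta_B / (eta_A + eta_B),
    where alpha is the static dipole polarizability and eta the effective
    excitation frequency of each species (atomic units assumed)."""
    return 1.5 * alpha_a * alpha_b * eta_a * eta_b / (eta_a + eta_b)
```

The formula follows from evaluating the Casimir-Polder integral with a one-pole model of the dynamic polarizability, the same spirit as the single-frequency approximation used for the higher-order coefficients.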
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access the time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information (higher-order time correlations than MSMs capture) that is available in every MD trajectory. The NM strategy is insensitive to the fine details of the states used and works well when a fine time discretization (i.e., a small "lag time") is used. PMID:27340835
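A direct, history-respecting MFPT estimate from a discrete state trajectory can be sketched in a few lines; this is a minimal illustration of first-passage averaging, not the authors' full non-Markovian estimator:

```python
def direct_mfpt(traj, source, target):
    """Average first-passage time from `source` to `target` measured
    directly on a discrete state trajectory (one sample per time step).
    Each passage is timed from an entry into `source` until the next
    visit to `target`; returns inf if no passage is observed."""
    times, t_start = [], None
    for t, s in enumerate(traj):
        if s == source and t_start is None:
            t_start = t                  # start timing a new passage
        elif s == target and t_start is not None:
            times.append(t - t_start)    # passage completed
            t_start = None
    return sum(times) / len(times) if times else float('inf')
```

Unlike an MSM-based estimate, this uses the trajectory's actual waiting times, so it carries the history information that a lag-time transition matrix discards.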
NASA Astrophysics Data System (ADS)
Weber, Tobias K. D.; Riedel, Thomas
2015-04-01
Free water is a prerequisite to the chemical reactions and biological activity in Earth's upper crust that are essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space of soils and sediments is in its liquid state. This is a result of the adhesive forces in small pores and at charged mineral surfaces, which reduce the water activity. This water has a lower tendency to react chemically in solution, as the additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from those of liquid water was found to increase systematically with increasing clay content. The significance of this is that grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods to determine water content, traditionally based on bulk density or on gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay contents. Our findings have consequences for biogeochemical processes in soils; e.g., nutrients may be contained in water that is not free, which could enhance preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain-size dependency of organic matter preservation in sedimentary environments and call for a revised view of the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.
Stellar granulation as seen in disk-integrated intensity. I. Simplified theoretical modeling
NASA Astrophysics Data System (ADS)
Samadi, R.; Belkacem, K.; Ludwig, H.-G.
2013-11-01
Context. Solar granulation has long been known to be a surface manifestation of convection. The space-borne missions CoRoT and Kepler enable us to observe the signature of this phenomenon in disk-integrated intensity on a large number of stars. Aims: The space-based photometric measurements show that the global brightness fluctuations and the lifetime associated with granulation obey characteristic scaling relations. We thus aimed at providing a simple theoretical model to reproduce these scaling relations, and subsequently at inferring the physical properties of granulation across the Hertzsprung-Russell diagram. Methods: We developed a simple 1D theoretical model. The input parameters were extracted from 3D hydrodynamical models of the surface layers of stars, and the free parameters involved in the model were calibrated with solar observations. Two different prescriptions for representing the Fourier transform of the time correlation of the eddy velocity were compared: a Lorentzian and an exponential form. Finally, we compared our theoretical prediction with 3D radiative hydrodynamical (RHD) numerical modeling of stellar granulation (hereafter, the ab initio approach). Results: Provided that the free parameters are appropriately adjusted, our theoretical model reproduces the observed solar granulation spectrum quite satisfactorily; the best agreement is obtained for an exponential form. Furthermore, our model results in granulation spectra that agree well with the ab initio approach using two 3D RHD models that are representative of the surface layers of an F-dwarf and a red-giant star. Conclusions: We have developed a theoretical model that satisfactorily reproduces the solar granulation spectrum and gives results consistent with the ab initio approach. The model is used in a companion paper as a theoretical framework for interpreting the observed scaling relations. Appendices are available in electronic form at http://www.aanda.org
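The two prescriptions compared here can be illustrated with toy one-component granulation spectra. Normalizing both forms to the same total variance sigma2 over all frequencies (our assumption, for comparability) gives:

```python
import math

def psd_lorentzian(nu, sigma2, tau):
    """Lorentzian PSD: Fourier transform of an exponentially decaying
    time correlation with characteristic time tau (two-sided in nu);
    integrates to sigma2 over all frequencies."""
    return 2 * sigma2 * tau / (1 + (2 * math.pi * nu * tau) ** 2)

def psd_exponential(nu, sigma2, tau):
    """Exponential PSD form, normalized to the same total variance sigma2."""
    return math.pi * sigma2 * tau * math.exp(-2 * math.pi * abs(nu) * tau)
```

The exponential form falls off much faster at high frequency than the Lorentzian's nu^-2 tail, which is the practical difference the fits to observed spectra are sensitive to.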
Nurses' self-relation--becoming theoretically competent: the SAUC model for confirming nursing.
Gustafsson, Barbro; Willman, Ania M
2003-07-01
The purpose of this study was to acquire an understanding of how nurses' self-relation (view of themselves as nurses) was influenced in connection with implementation of a nursing theory, the sympathy-acceptance-understanding-competence model for confirming nursing. This model was developed by Gustafsson and Pörn. Twenty-two nurses' written statements evaluating mentoring during the six-month implementation process in elder care, were analyzed hermeneutically with the hypothetic-deductive method. An action-theoretic and confirmatory approach was used for facilitating theoretically specified hypotheses. The nurses increased their ability to describe nursing theoretically and gained a foundation of common nursing values. The results provided an understanding of how nurses' self-relation was strengthened by becoming theoretically competent. PMID:12876885
Single Droplet on Micro Square-Post Patterned Surfaces – Theoretical Model and Numerical Simulation
Zu, Y. Q.; Yan, Y. Y.
2016-01-01
In this study, the wetting behaviors of a single droplet on micro square-post patterned surfaces with different geometrical parameters are investigated theoretically and numerically. A theoretical model is proposed for predicting the wetting transition from the Cassie to the Wenzel regime. In addition, owing to the limitations of the theoretical method, a numerical simulation is performed, which provides a view of dynamic contact lines, detailed velocity fields, etc., even when the droplet size is comparable with the scale of the surface micro-structures. It is found that the numerical results for the liquid drop behaviors on the square-post patterned surface are in good agreement with the values predicted by the theoretical model. PMID:26775561
NASA Astrophysics Data System (ADS)
Malik, Arif Sultan
This work presents improved technology for attaining high-quality rolled metal strip. The new technology is based on an innovative method to model both the static and dynamic characteristics of rolling mill deflection, and it applies equally to both cluster-type and non-cluster-type rolling mill configurations. By effectively combining numerical Finite Element Analysis (FEA) with analytical solid mechanics, the devised approach delivers a rapid, accurate, flexible, high-fidelity model useful for optimizing many important rolling parameters. The associated static deflection model enables computation of the thickness profile and corresponding flatness of the rolled strip. Accurate methods of predicting the strip thickness profile and strip flatness are important in rolling mill design, rolling schedule set-up, control of mill flatness actuators, and optimization of ground roll profiles. The corresponding dynamic deflection model enables solution of the standard eigenvalue problem to determine natural frequencies and modes of vibration. The presented method for solving the roll-stack deflection problem offers several important advantages over traditional methods. In particular, it includes continuity of elastic foundations, non-iterative solution when using pre-determined elastic foundation moduli, continuous third-order displacement fields, simple stress-field determination, the ability to calculate dynamic characteristics, and a comparatively faster solution time. Consistent with the most advanced existing methods, the presented method accommodates loading conditions that represent roll crowning, roll bending, roll shifting, and roll crossing mechanisms. Validation of the static model is provided by comparing results and solution time with large-scale, commercial finite element simulations. In addition to examples with the common 4-high vertical stand rolling mill, application of the presented method to the most complex of rolling mill configurations is demonstrated.
Simple control-theoretic models of human steering activity in visually guided vehicle control
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1991-01-01
A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
Leite, Fabio L; Bueno, Carolina C; Da Róz, Alessandra L; Ziemath, Ervino C; Oliveira, Osvaldo N
2012-10-08
The increasing importance of studies on soft matter and their impact on new technologies, including those associated with nanotechnology, has brought intermolecular and surface forces to the forefront of physics and materials science, for these are the prevailing forces in micro and nanosystems. With experimental methods such as atomic force spectroscopy (AFS), it is now possible to measure these forces accurately, in addition to providing information on local material properties such as elasticity, hardness and adhesion. This review provides the theoretical and experimental background of AFS, adhesion forces, intermolecular interactions and surface forces in air, vacuum and in solution.
A Theoretical Model for Thin Film Ferroelectric Coupled Microstripline Phase Shifters
NASA Technical Reports Server (NTRS)
Romanofsky, R. R.; Quereshi, A. H.
2000-01-01
Novel microwave phase shifters consisting of coupled microstriplines on thin ferroelectric films have been demonstrated recently. A theoretical model useful for predicting the propagation characteristics (insertion phase shift, dielectric loss, impedance, and bandwidth) is presented here. The model is based on a variational solution for line capacitance and coupled strip transmission line theory.
The Road Not Taken: An Integrative Theoretical Model of Reading Disability.
ERIC Educational Resources Information Center
Spear-Swerling, Louise; Sternberg, Robert J.
1994-01-01
This article describes a theoretical model of reading disability that integrates research findings in cognitive psychology, reading, and education. The model identifies four patterns of reading disability: (1) nonalphabetic readers, (2) compensatory readers, (3) nonautomatic readers, and (4) readers delayed in the acquisition of word recognition…
Cross-Cultural Teamwork in End User Computing: A Theoretical Model.
ERIC Educational Resources Information Center
Bento, Regina F.
1995-01-01
Presents a theoretical model explaining how cultural influences may affect the open, dynamic system of a cross-cultural, end-user computing team. Discusses the relationship between cross-cultural factors and various parts of the model such as: input variables, the system itself, outputs, and implications for the management of such teams. (JKP)
ERIC Educational Resources Information Center
Hsieh, Pei-Hsuan; Sullivan, Jeremy R.; Sass, Daniel A.; Guerra, Norma S.
2012-01-01
Research has identified factors associated with academic success by evaluating relations among psychological and academic variables, although few studies have examined theoretical models to understand the complex links. This study used structural equation modeling to investigate whether the relation between test anxiety and final course grades was…
Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test
ERIC Educational Resources Information Center
Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.
2006-01-01
A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…
NASA Astrophysics Data System (ADS)
Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.
2015-12-01
Few studies have concentrated on predicting the bead geometry for laser brazing with crimping butt. This paper addresses the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, the GRNN model was developed and trained to decrease the prediction error that may be influenced by the sample size. Then the prediction accuracy was demonstrated by comparison with published results and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were discussed in terms of the average relative error (ARE), mean square error (MSE) and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than those (14.28% and 0.0832) predicted by the BPNN. The prediction accuracy was thus improved by at least a factor of two, and the stability was also considerably increased.
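The three error measures used to assess the model can be computed straightforwardly. This is a generic sketch of the metrics (ARE, MSE, RMSE), not the authors' code:

```python
import math

def error_metrics(y_true, y_pred):
    # Average relative error (ARE), mean square error (MSE) and
    # root mean square error (RMSE) between observations and predictions.
    n = len(y_true)
    are = sum(abs(p - t) / abs(t) for t, p in zip(y_true, y_pred)) / n
    mse = sum((p - t) ** 2 for t, p in zip(y_true, y_pred)) / n
    return are, mse, math.sqrt(mse)

# Example: error_metrics([2.0, 4.0], [2.2, 3.8]) gives
# ARE = 0.075, MSE = 0.04, RMSE = 0.2
```

Reporting all three together, as the paper does, separates relative bias (ARE) from absolute scatter (MSE/RMSE).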
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
Kosakovsky Pond, Sergei L; Posada, David; Stawiski, Eric; Chappey, Colombe; Poon, Art F Y; Hughes, Gareth; Fearnhill, Esther; Gravenor, Mike B; Leigh Brown, Andrew J; Frost, Simon D W
2009-11-01
Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance of accurate
Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin
2016-01-01
Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework titled the Health Services Workplace Environmental Resilience Model (HSWERM) is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes, and its use has the potential to increase staff retention and quality of patient care. PMID:27242567
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be
ERIC Educational Resources Information Center
Johnson, Marcus L.; Taasoobshirazi, Gita; Kestler, Jessica L.; Cordova, Jackie R.
2015-01-01
We tested a theoretical model of college students' ratings of messengers of resilience and models of resilience, students' own perceived resilience, regulatory strategy use and achievement. A total of 116 undergraduates participated in this study. The results of a path analysis indicated that ratings of models of resilience had a direct effect on…
NASA Astrophysics Data System (ADS)
Long, D.; Singh, V. P.; Scanlon, B. R.
2011-12-01
Satellite-based triangle models for evapotranspiration (ET) are unique in interpreting the contextual relationship between the Normalized Difference Vegetation Index (NDVI)/fractional vegetation cover (fc) and surface radiative temperature (Trad) to deduce evaporative fraction (EF) and ET across large heterogeneous areas. The outputs and performance of some satellite-based ET algorithms may depend on the domain of the study site being considered and the resolution of the satellite imagery being used. These attributes are referred to as domain dependence and resolution dependence. To unravel the domain and resolution dependencies of the triangle models and test their utility with high spatial resolution images, the triangle models were applied to areas with progressively growing domains and to Landsat TM/ETM+ and MODIS sensors, respectively, at the Soil Moisture-Atmosphere Coupling Experiment (SMACEX) site in central Iowa, U.S., on Days of Year (DOY) 174 and 182 of 2002. Results indicate that the triangle models can be domain-dependent and resolution-dependent, showing large uncertainties in the evaporative fraction estimates in terms of a Mean Absolute Percentage Difference (MAPD) up to ~50%. We derived the theoretical boundaries of the fc-Trad space to restrain the domain and resolution dependencies of the triangle models. The theoretical warm edge was derived by solving for temperatures of the driest bare surface and the fully vegetated surface with the largest water stress implicit in both radiation budget and energy balance equations. The areal average temperature can be taken as the theoretical cold edge. The triangle models appear to perform well across large areas but fail to predict the evaporative fraction over small areas. However, performance of the triangle models across small domains can be improved by incorporating the theoretical boundaries. Combining the triangle models with the theoretical boundaries can effectively reduce
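The core contextual step of triangle methods, interpolating a pixel's radiative temperature between the warm (dry) and cold (wet) edges at its vegetation cover, can be sketched as below. The linear interpolation and clamping are simplifying assumptions for illustration; operational triangle models differ in detail:

```python
def evaporative_fraction(t_rad, t_warm, t_cold, ef_max=1.0):
    # t_warm: warm-edge temperature at this fc (driest surface, EF ~ 0).
    # t_cold: cold-edge temperature (wettest surface, EF ~ ef_max).
    if t_warm <= t_cold:
        raise ValueError("warm edge must be hotter than cold edge")
    ef = ef_max * (t_warm - t_rad) / (t_warm - t_cold)
    return min(max(ef, 0.0), ef_max)  # clamp to the physical range

# A pixel midway between the edges gets EF = 0.5:
# evaporative_fraction(310.0, 320.0, 300.0) -> 0.5
```

Deriving t_warm and t_cold theoretically, rather than from the observed scatter of pixels, is what removes the domain and resolution dependence discussed in the abstract.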
Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W; Reeher, M; Galmarini, D
2014-06-01
Purpose: An accurate leaf fluence model can be used in applications such as patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluence due to leakage-transmission, tongue-and-groove, and source occlusion effect. Here we propose a method to model the nonlinear effects as linear terms thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns are chosen to represent linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed to) a linear combination of the LPB either pulse by pulse or weighted by dwelling time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within +/−10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy
2014-07-01
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose constituent domains occur in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions.
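A naive Bayes scorer of the kind used for interface prediction can be sketched with binary features. The feature names and probabilities below are hypothetical stand-ins for the structure- and evolution-derived features used in the study:

```python
import math

def naive_bayes_log_posterior(features, priors, likelihoods):
    # Unnormalized log-posterior per class under the naive (conditional
    # independence) assumption, with binary (Bernoulli) features.
    scores = {}
    for cls, prior in priors.items():
        s = math.log(prior)
        for name, present in features.items():
            p = likelihoods[cls][name]
            s += math.log(p if present else 1.0 - p)
        scores[cls] = s
    return scores

# Hypothetical example: a conserved, exposed residue is scored against
# "interface" vs "non-interface" classes.
scores = naive_bayes_log_posterior(
    {"conserved": True, "exposed": True},
    {"interface": 0.3, "non-interface": 0.7},
    {"interface": {"conserved": 0.9, "exposed": 0.8},
     "non-interface": {"conserved": 0.2, "exposed": 0.5}})
```

The residue is then assigned the class with the larger log-posterior; such per-residue labels are what constrain the rigid-body docking step described above.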
Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu
2015-01-01
Shape priors play an important role in accurate and robust liver segmentation. However, liver shapes have complex variations, and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly as the repository's capacity and vertex number grew. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method took only about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
Depression in Black Single Mothers: A Test of a Theoretical Model.
Atkins, Rahshida
2015-06-01
The aim of this study was to test a theoretical model of depression for Black single mothers. Participants were 208 Black single mothers, aged 18 to 45, recruited from community settings. The a priori over-identified recursive theoretical model was tested via the LISREL 9.1 program using maximum likelihood estimation for structural equation modeling. The chi-square indicated an excellent fit of the model with the data, χ²(1, N = 208) = 0.05, p = .82. The fit indices for the model were excellent. Path coefficients were statistically significant for seven of the eight direct paths within the model (p < .05). The two indirect paths were also statistically significant. The theory was supported and can be applied by health care professionals when working with depressed Black single mothers.
The calculation of theoretical chromospheric models and predicted OSO 8 spectra
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1975-01-01
Theoretical solar chromospheric and photospheric models are computed for use in analyzing OSO 8 spectra. The Vernazza, Avrett, and Loeser (1976) solar model is updated and self-consistent non-LTE number densities for H I, He I, He II, C I, Mg I, Al I, Si I, and H(-) are produced. These number densities are used in the calculation of a theoretical solar spectrum from 90 to 250 nm, including approximately 7000 lines in non-LTE. More than 60,000 lines of other elements are treated with approximate source functions.
NASA Technical Reports Server (NTRS)
Shimazaki, T.; Wuebbles, D. J.
1973-01-01
Calculations based on an improved, time-dependent theoretical model of the vertical ozone density distribution in the upper atmosphere clarify the cause of, and the preconditions for, the depression in ozone density at 70-85 km altitude, a feature suggested by several theoretical models but only sometimes observed experimentally. It is concluded that the depression develops at night through the effects of hydrogen-oxygen and nitrogen-oxygen reactions, as well as eddy diffusion transport.
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi
2016-10-01
As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, this function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ2. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
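The two-level construction above (local Gaussians whose mean and variance are themselves bivariate-Gaussian distributed) can be sketched numerically. The snippet below builds the pairwise-velocity PDF by Monte Carlo superposition of Gaussian kernels; all numerical values are illustrative, not fitted to any survey or simulation.

```python
import numpy as np

def pairwise_velocity_pdf(v_grid, mean, cov, n_draws=10_000, seed=1):
    """Superposition pairwise-velocity PDF: each local distribution is
    N(v; mu, sigma^2), with (mu, sigma^2) drawn from a bivariate Gaussian
    with the given mean and covariance. Unphysical draws with sigma^2 <= 0
    are discarded (they are rare for the parameters used here)."""
    rng = np.random.default_rng(seed)
    mu, s2 = rng.multivariate_normal(mean, cov, size=n_draws).T
    keep = s2 > 0
    mu, s2 = mu[keep], s2[keep]
    # Average the normalized Gaussian kernels over the (mu, sigma^2) draws
    return np.mean(
        np.exp(-0.5 * (v_grid[:, None] - mu) ** 2 / s2)
        / np.sqrt(2 * np.pi * s2),
        axis=1,
    )

v = np.linspace(-20.0, 20.0, 401)
# Correlating mu and sigma^2 (illustrative numbers) yields a skewed,
# heavier-than-Gaussian profile; zero correlation keeps it symmetric.
p = pairwise_velocity_pdf(v, mean=[-1.0, 9.0], cov=[[4.0, 3.0], [3.0, 9.0]])
```

Collapsing the covariance to zero recovers a single Gaussian, which is the limiting case mentioned in the abstract.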
NASA Astrophysics Data System (ADS)
Movassaghi, Babak; Rasche, Volker; Viergever, Max A.; Niessen, Wiro J.
2004-05-01
For the diagnosis of ischemic heart disease, accurate quantitative analysis of the coronary arteries is important. In coronary angiography, a number of projections are acquired from which 3D models of the coronaries can be reconstructed. A significant limitation of current 3D modeling procedures is the user interaction required to define the centerlines of the vessel structures in the 2D projections. Currently, the 3D centerlines of the coronary tree are calculated from centerlines determined interactively in two projections: for every selected centerline point in the first projection, the corresponding point in the second projection has to be indicated interactively by the user, with the correspondence constrained by the epipolar geometry. In this paper a method is proposed to retrieve all the information required for the modeling procedure from interactively determined 2D centerline points in only one projection. For every determined 2D centerline point, the corresponding 3D centerline point is calculated by analyzing the 1D gray-value functions along the corresponding epipolar lines in all available 2D projections. This information is then used to build a 3D representation of the coronary arteries using coronary modeling techniques. The approach is illustrated on the analysis of calibrated phantom and calibrated coronary projection data.
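The epipolar constraint that underlies the correspondence step can be sketched with the standard fundamental-matrix relation l₂ = F x₁: a point selected in one projection restricts its match in another projection to a line, along which gray-value profiles can then be searched. The toy two-view geometry below uses hypothetical numbers, not the paper's calibrated system.

```python
import numpy as np

def fundamental_from_Rt(K1, K2, R, t):
    """F = K2^-T [t]_x R K1^-1 for cameras P1 = K1[I|0], P2 = K2[R|t]."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    return np.linalg.inv(K2).T @ tx @ R @ np.linalg.inv(K1)

def epipolar_line(F, x1):
    """Line l2 = F @ x1 in view 2 on which the match for x1 must lie,
    normalized so that l2 @ x2 is the point-to-line distance."""
    l2 = F @ x1
    return l2 / np.linalg.norm(l2[:2])

# Toy two-view setup: unit-focal intrinsics, pure horizontal translation.
K = np.eye(3)
R, t = np.eye(3), np.array([1.0, 0.0, 0.0])
F = fundamental_from_Rt(K, K, R, t)

X = np.array([0.2, -0.1, 2.0])        # a 3D centerline point
x1 = np.append(X[:2] / X[2], 1.0)     # its projection in view 1
X2 = R @ X + t                        # the point in camera-2 coordinates
x2 = np.append(X2[:2] / X2[2], 1.0)   # its projection in view 2
l2 = epipolar_line(F, x1)
print(l2 @ x2)                        # ~0: x2 lies on the epipolar line of x1
```

Searching the 1D intensity profile along l₂ in each additional projection is what lets the method replace the second round of manual point selection.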
A theoretical model to describe progressions and regressions for exercise rehabilitation.
Blanchard, Sam; Glasgow, Phil
2014-08-01
This article describes a new theoretical model to simplify, and aid visualisation of, the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists, but the area lacks theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack visual aids to assist clinicians. The concept of the proposed model is that new stimuli, either intrinsic or extrinsic to the participant, can be added or exchanged with other stimuli in order to progress an exercise gradually whilst remaining safe and effective. The proposed model supports the core skills of physiotherapists by assisting clinical reasoning, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by clinicians of any skill level.
Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model
NASA Technical Reports Server (NTRS)
Goradia, C.; Vaughn, J.; Baraona, C. R.
1980-01-01
A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low-level injection has been derived. This model extends a previously published conceptual model that treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and of the 1 MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency (greater than 15% AM0) and a high radiation tolerance can be achieved only with thin (less than 50 micron) TJCs with high base resistivity (greater than 10 ohm-cm).
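The Ebers-Moll picture invoked above can be sketched in a few lines. The snippet below evaluates the textbook Ebers-Moll npn current equations and derives Isc, Voc and fill factor from a single-diode illuminated I-V curve; all parameter values are hypothetical and the single-diode curve is a simplification for illustration, not the paper's full TJC expressions.

```python
import numpy as np

def ebers_moll(v_be, v_bc, i_es=1e-14, i_cs=1e-14,
               a_f=0.99, a_r=0.5, vt=0.02585):
    """Textbook Ebers-Moll npn currents: forward/reverse diode currents
    i_f, i_r coupled through the transport factors a_f and a_r."""
    i_f = i_es * (np.exp(v_be / vt) - 1.0)
    i_r = i_cs * (np.exp(v_bc / vt) - 1.0)
    i_e = i_f - a_r * i_r            # emitter current
    i_c = a_f * i_f - i_r            # collector current
    return i_e, i_c

def cell_figures(i_l=0.035, i_0=1e-12, vt=0.02585):
    """Isc, Voc and fill factor from an illuminated single-diode I-V curve
    I(V) = I_L - I_0*(exp(V/vt) - 1); hypothetical parameter values."""
    v = np.linspace(0.0, 0.7, 2000)
    i = i_l - i_0 * (np.exp(v / vt) - 1.0)
    v_oc = vt * np.log(i_l / i_0 + 1.0)
    ff = (v * i).max() / (v_oc * i_l)
    return i_l, v_oc, ff

isc, voc, ff = cell_figures()
```

Varying i_l, i_0 or the transport factors mimics, very crudely, the parameter sweeps (geometry, material, electron fluence) described in the abstract.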
ERIC Educational Resources Information Center
Monroe, Scott M.; Mineka, Susan
2008-01-01
Our commentary was intended to stimulate discussion about what we perceive to be shortcomings of the mnemonic model and its research base, in the hope of shedding some light on key questions for understanding posttraumatic stress disorder (PTSD). In our view, Berntsen, Rubin, and Bohni have responded only to what they perceive to be shortcomings…
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which in turn require precise and accurate orbit products. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. A careful analysis of the orbit quality revealed systematics in the orbit products that reflect deficits in the non-gravitational force models. A detailed satellite macro model is therefore introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun; the direct SRP therefore affects the lateral stability of the determined orbit. The indirect effect of solar radiation contributes principally to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight reflected by the illuminated Earth surface in the visible and on the thermal emission of the Earth in the infrared. Both components of ERP require Earth models describing the optical properties of the Earth surface, so the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
A Physically Based Theoretical Model of Spore Deposition for Predicting Spread of Plant Diseases.
Isard, Scott A; Chamecki, Marcelo
2016-03-01
A physically based theory for predicting spore deposition downwind from an area source of inoculum is presented. The modeling framework is based on theories of turbulence dispersion in the atmospheric boundary layer and applies only to spores that escape from plant canopies. A "disease resistance" coefficient is introduced to convert the theoretical spore deposition model into a simple tool for predicting disease spread at the field scale. Results from the model agree well with published measurements of Uromyces phaseoli spore deposition and measurements of wheat leaf rust disease severity. The theoretical model has the advantage over empirical models in that it can be used to assess the influence of source distribution and geometry, spore characteristics, and meteorological conditions on spore deposition and disease spread. The modeling framework is refined to predict the detailed two-dimensional spatial pattern of disease spread from an infection focus. Accounting for the time variations of wind speed and direction in the refined modeling procedure improves predictions, especially near the inoculum source, and enables application of the theoretical modeling framework to field experiment design. PMID:26595112
How parents choose to use CAM: a systematic review of theoretical models
Lorenc, Ava; Ilan-Clarke, Yael; Robinson, Nicola; Blair, Mitch
2009-01-01
Background Complementary and Alternative Medicine (CAM) is widely used throughout the UK and the Western world. CAM is commonly used for children and the decision-making process to use CAM is affected by numerous factors. Most research on CAM use lacks a theoretical framework and is largely based on bivariate statistics. The aim of this review was to identify a conceptual model which could be used to explain the decision-making process in parental choice of CAM. Methods A systematic search of the literature was carried out. A two-stage selection process with predetermined inclusion/exclusion criteria identified studies using a theoretical framework depicting the interaction of psychological factors involved in the CAM decision process. Papers were critically appraised and findings summarised. Results Twenty-two studies using a theoretical model to predict CAM use were included in the final review; only one examined child use. Seven different models were identified. The most commonly used and successful model was Andersen's Sociobehavioural Model (SBM). Two papers proposed modifications to the SBM for CAM use. Six qualitative studies developed their own model. Conclusion The SBM modified for CAM use, which incorporates both psychological and pragmatic determinants, was identified as the best conceptual model of CAM use. This model provides a valuable framework for future research, and could be used to explain child CAM use. An understanding of the decision making process is crucial in promoting shared decision making between healthcare practitioners and parents and could inform service delivery, guidance and policy. PMID:19386106
Cisonni, Julien; Lucey, Anthony D; King, Andrew J C; Islam, Syed Mohammed Shamsul; Lewis, Richard; Goonewardene, Mithran S
2015-11-01
Repetitive brief episodes of soft-tissue collapse within the upper airway during sleep characterize obstructive sleep apnea (OSA), an extremely common and disabling disorder. Failure to maintain the patency of the upper airway is caused by the combination of sleep-related loss of compensatory dilator muscle activity and aerodynamic forces promoting closure. The prediction of soft-tissue movement in patient-specific airway 3D mechanical models is emerging as a useful contribution to clinical understanding and decision making. Such modeling requires reliable estimations of the pharyngeal wall pressure forces. While nasal obstruction has been recognized as a risk factor for OSA, the need to include the nasal cavity in upper-airway models for OSA studies requires consideration, as it is most often omitted because of its complex shape. A quantitative analysis of the flow conditions generated by the nasal cavity and the sinuses during inspiration upstream of the pharynx is presented. Results show that adequate velocity boundary conditions and simple artificial extensions of the flow domain can reproduce the essential effects of the nasal cavity on the pharyngeal flow field. Therefore, the overall complexity and computational cost of accurate flow predictions can be reduced.
Scott, Serena J.; Prakash, Punit; Salgaonkar, Vasant; Jones, Peter D.; Cam, Richard N.; Han, Misung; Rieke, Viola; Burdette, E. Clif; Diederich, Chris J.
2014-01-01
Purpose The objectives of this study were to develop numerical models of interstitial ultrasound ablation of tumors within or adjacent to bone, to evaluate model performance through theoretical analysis, and to validate the models and approximations used through comparison to experiments. Methods 3D transient biothermal and acoustic finite element models were developed, employing four approximations of 7 MHz ultrasound propagation at bone/soft tissue interfaces. The various approximations considered or excluded reflection, refraction, angle-dependence of transmission coefficients, shear mode conversion, and volumetric heat deposition. Simulations were performed for parametric and comparative studies. Experiments within ex vivo tissues and phantoms were performed to validate the models by comparison to simulations. Temperature measurements were conducted using needle thermocouples or MR temperature imaging (MRTI). Finite element models representing heterogeneous tissue geometries were created based on segmented MR images. Results High ultrasound absorption at bone/soft tissue interfaces increased the volumes of target tissue that could be ablated. Models using simplified approximations produced temperature profiles closely matching both more comprehensive models and experimental results, with good agreement between 3D calculations and MRTI. The correlation coefficients between simulated and measured temperature profiles in phantoms ranged from 0.852 to 0.967 (p-value < 0.01) for the four models. Conclusions Models using approximations of interstitial ultrasound energy deposition around bone/soft tissue interfaces produced temperature distributions in close agreement with comprehensive simulations and experimental measurements. These models may be applied to accurately predict temperatures produced by interstitial ultrasound ablation of tumors near and within bone, with applications toward treatment planning. PMID:24102393
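The biothermal side of such models is usually some discretization of the Pennes bioheat equation, ρc ∂T/∂t = k ∇²T − w_b c_b (T − T_blood) + Q, with Q the ultrasound heat deposition. The snippet below is a 1D explicit finite-difference sketch with a Gaussian volumetric source; it uses generic soft-tissue parameter values and is far simpler than the paper's 3D finite element models.

```python
import numpy as np

def pennes_1d(q_src, n_steps=2000, dt=0.01, dx=5e-4,
              k=0.5, rho_c=3.6e6, w_b=5.0, c_blood=3600.0, t_blood=37.0):
    """Explicit finite-difference solution of the 1D Pennes bioheat equation
    rho*c dT/dt = k d2T/dx2 - w_b*c_blood*(T - T_blood) + Q.
    Generic soft-tissue parameters (SI units); dt satisfies the explicit
    stability limit dt < rho_c*dx^2/(2k) for the values used here."""
    t = np.full(q_src.size, t_blood)
    perf = w_b * c_blood                     # perfusion coefficient, W/(m^3 K)
    for _ in range(n_steps):
        lap = np.zeros_like(t)
        lap[1:-1] = (t[2:] - 2.0 * t[1:-1] + t[:-2]) / dx**2
        t = t + dt / rho_c * (k * lap - perf * (t - t_blood) + q_src)
        t[0] = t[-1] = t_blood               # fixed-temperature boundaries
    return t

# Gaussian heat deposition centered on the applicator (illustrative numbers)
x = np.arange(100) * 5e-4
q = 2e6 * np.exp(-((x - x.mean()) / 2e-3) ** 2)   # W/m^3
temps = pennes_1d(q)                              # degrees C after 20 s
```

In the full problem the source term Q carries the acoustic model, including the strong absorption at bone/soft tissue interfaces that the abstract highlights.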
ERIC Educational Resources Information Center
Newman, Tim A.
2012-01-01
This study described the current state of principal salaries in South Carolina and compared the salaries of similar size schools by specific report card performance and demographic variables. Based on the findings, theoretical models were proposed, and comparisons were made with current salary data. School boards, human resource personnel and…
Dreber, Anna; Rand, David G
2012-02-01
Guala argues that there is a mismatch between most laboratory experiments on costly punishment and behavior in the field. In the lab, experimental designs typically suppress retaliation. The same is true for most theoretical models of the co-evolution of costly punishment and cooperation, which a priori exclude the possibility of defectors punishing cooperators.
ERIC Educational Resources Information Center
Chen, Ang; Hancock, Gregory R.
2006-01-01
Adolescent physical inactivity has risen to an alarming rate. Several theoretical frameworks (models) have been proposed and tested in school-based interventions. The results are mixed, indicating a similar weakness as that observed in community-based physical activity interventions (Baranowski, Lin, Wetter, Resnicow, & Hearn, 1997). The…
Unconscious Determinants of Career Choice and Burnout: Theoretical Model and Counseling Strategy.
ERIC Educational Resources Information Center
Malach-Pines, Ayala; Yafe-Yanai, Oreniya
2001-01-01
Proposes a psychodynamic-existential perspective as a theoretical model that explains career burnout and serves as a basis for a counseling strategy. According to existential theory, the root of career burnout lies in people's need to find existential significance in their life and their sense that their work does not provide it. (Contains 40…
A Game-Theoretic Model of Grounding for Referential Communication Tasks
ERIC Educational Resources Information Center
Thompson, William
2009-01-01
Conversational grounding theory proposes that language use is a form of rational joint action, by which dialog participants systematically and collaboratively add to their common ground of shared knowledge and beliefs. Following recent work applying "game theory" to pragmatics, this thesis develops a game-theoretic model of grounding that…
ERIC Educational Resources Information Center
Balmer, Dorene F.; Richards, Boyd F.; Varpio, Lara
2015-01-01
Using Bourdieu's theoretical model as a lens for analysis, we sought to understand how students experience the undergraduate medical education (UME) milieu, focusing on how they navigate transitions from the preclinical phase, to the major clinical year (MCY), and to the preparation for residency phase. Twenty-two medical students participated in…