Sample records for nilsson model

  1. Nilsson's Power Model connecting speed and road trauma: applicability by road type and alternative models for urban roads.

    PubMed

    Cameron, M H; Elvik, R

    2010-11-01

    Nilsson (1981) proposed power relationships connecting changes in traffic speeds with changes in road crashes at various levels of injury severity. Increases in fatal crashes are related to the 4th power of the increase in mean speed, increases in serious casualty crashes (those involving death or serious injury) to the 3rd power, and increases in casualty crashes (those involving death or any injury) to the 2nd power. Increases in numbers of crash victims at cumulative levels of injury severity are related to the crash increases plus higher powers predicting the number of victims per crash. These relationships are frequently applied in OECD countries to estimate road trauma reductions resulting from expected speed reductions. The relationships were empirically derived from speed changes resulting from a large number of rural speed limit changes in Sweden during 1967-1972. Nilsson (2004) noted that very few urban speed limit changes had been studied to test his power model. This paper tests the assumption that the model is equally applicable in all road environments. It was found that the road environment is an important moderator of Nilsson's power model. While Nilsson's model appears satisfactory for rural highways and freeways, it does not appear to be directly applicable to traffic speed changes on urban arterial roads. The evidence for monotonically increasing powers applicable to changes in road trauma at increasing injury severity levels with changes in mean speed is weak. The estimated power applicable to serious casualties on urban arterial roads was significantly less than that on rural highways, which was in turn significantly less than that on freeways. Alternative models linking the parameters of speed distributions with road trauma are reviewed and some conclusions reached for their use on urban roads instead of Nilsson's model. Further research is needed on the relationships between serious road trauma …
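
    As a quick illustration of how the power model quoted above is typically applied, the sketch below computes the predicted change in crash counts for a given change in mean speed. The function name and the speed values are illustrative only; the exponents are the ones stated in the abstract.

        # Minimal sketch of Nilsson's power model: crash_new / crash_old =
        # (v_new / v_old) ** power, with power = 4 (fatal crashes),
        # 3 (serious casualty crashes) and 2 (casualty crashes).

        def predicted_ratio(v_old_kmh: float, v_new_kmh: float, power: int) -> float:
            """Predicted ratio of crash counts after a change in mean speed."""
            return (v_new_kmh / v_old_kmh) ** power

        # Example: mean speed falls from 60 km/h to 50 km/h (illustrative numbers).
        for label, power in [("fatal crashes", 4),
                             ("serious casualty crashes", 3),
                             ("casualty crashes", 2)]:
            ratio = predicted_ratio(60.0, 50.0, power)
            print(f"{label}: predicted change {100 * (ratio - 1):+.0f}%")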

  2. Angular momentum projection for a Nilsson mean-field plus pairing model

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Pan, Feng; Launey, Kristina D.; Luo, Yan-An; Draayer, J. P.

    2016-06-01

    The angular momentum projection for the axially deformed Nilsson mean-field plus a modified standard pairing (MSP) or nearest-level pairing (NLP) model is proposed. Both the exact projection, in which all intrinsic states are taken into account, and the approximate projection, in which only intrinsic states with K = 0 are included, are considered. The analysis shows that the approximate projection with only K = 0 intrinsic states seems reasonable, and the configuration subspace it requires is greatly reduced. As simple examples of the model's application, low-lying spectra and electromagnetic properties of 18O and 18Ne are described using both the exact and approximate angular momentum projection of the MSP or the NLP, while those of 20Ne and 24Mg are described using the approximate angular momentum projection of the MSP or NLP.
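
    For orientation, the projection referred to here is conventionally built from the standard angular-momentum projection operator (a textbook expression, not quoted from this record):

        P^{J}_{MK} = \frac{2J+1}{8\pi^{2}} \int d\Omega \, D^{J*}_{MK}(\Omega) \, \hat{R}(\Omega)

    where R(Ω) is the rotation operator and D^{J}_{MK} a Wigner function; restricting the intrinsic states to K = 0 removes the sum over K and shrinks the configuration space accordingly.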

  3. Spectroscopic factors in the N =20 island of inversion: The Nilsson strong-coupling limit

    NASA Astrophysics Data System (ADS)

    Macchiavelli, A. O.; Crawford, H. L.; Campbell, C. M.; Clark, R. M.; Cromaz, M.; Fallon, P.; Jones, M. D.; Lee, I. Y.; Richard, A. L.; Salathe, M.

    2017-11-01

    Spectroscopic factors, extracted from one-neutron knockout and Coulomb dissociation reactions, for transitions from the ground state of 33Mg to the ground-state rotational band in 32Mg, and from 32Mg to low-lying negative-parity states in 31Mg, are interpreted within the rotational model. Associating the ground state of 33Mg and the negative-parity states in 31Mg with the 3/2[321] Nilsson level, the strong-coupling limit gives simple expressions that relate the amplitudes (Cjℓ) of this wave function to the measured cross sections and derived spectroscopic factors (Sjℓ). To obtain consistent agreement with the data within this framework, we find that one requires a modified 3/2[321] wave function with an increased contribution from the spherical 2p3/2 orbit as compared to a standard Nilsson calculation. This is consistent with the findings of large-scale shell model calculations and can be traced to weak binding effects that lower the energy of low-ℓ orbitals.
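
    Schematically, the strong-coupling limit ties the spectroscopic factors to the Nilsson amplitudes through a Clebsch-Gordan factor; the proportionality below shows the standard rotational-model form for orientation and is not the paper's exact expression:

        S_{j\ell}(I_i \to I_f) \propto C_{j\ell}^{2} \, \langle I_i \, K_i \; j \, (K_f - K_i) \mid I_f \, K_f \rangle^{2}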

  4. Empirical p-n interactions, the synchronized filling of Nilsson orbitals, and emergent collectivity

    NASA Astrophysics Data System (ADS)

    Cakirli, R. B.

    2014-09-01

    The onset of collectivity and deformation, changes to the single-particle energies and magic numbers, and so on are strongly influenced by, for example, proton (p) and neutron (n) interactions inside atomic nuclei. Experimentally, using binding energies (or masses), one can extract an average p-n interaction between the last two protons and the last two neutrons, called δVpn. We have studied δVpn values using calculations of spatial overlaps between p and n Nilsson orbitals, considering different deformations, for the Z = 50-82, N = 82-126 shells, and compared these theoretical results with experimental δVpn values. Our results show that enhanced valence p-n interactions are closely correlated with the development of collectivity, shape changes, and the saturation of deformation in nuclei. We note that, for the nuclei where δVpn is largest for each Z in medium-mass and heavy nuclei, the Nilsson quantum numbers of the last filled p and n orbitals have a special relation, 0[110], in which they differ by only a single quantum in the z-direction. The synchronised filling of such orbital pairs correlates with the emergence of collectivity.
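
    For reference, the empirical δVpn quoted above is conventionally extracted (for even-even nuclei) as a double difference of binding energies B(Z, N); this is the standard definition, not a formula taken from the record itself:

        \delta V_{pn}(Z,N) = \frac{1}{4}\Big[ B(Z,N) - B(Z-2,N) - B(Z,N-2) + B(Z-2,N-2) \Big]

    i.e. one quarter of the change in the two-proton binding difference induced by removing the last two neutrons.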

  5. Nilsson diagrams for light neutron-rich nuclei with weakly-bound neutrons

    NASA Astrophysics Data System (ADS)

    Hamamoto, Ikuko

    2007-11-01

    Using Woods-Saxon potentials and the eigenphase formalism for one-particle resonances, one-particle bound and resonant levels for neutrons as a function of quadrupole deformation are presented, which are expected to be useful for interpreting spectroscopic properties of some light neutron-rich nuclei with weakly bound neutrons. Compared with the Nilsson diagrams in textbooks, which are constructed using modified oscillator potentials, we point out a systematic change of the shell structure in connection with both weakly bound and resonant one-particle levels related to small orbital angular momenta ℓ. It is then seen that weakly bound neutrons in nuclei such as 15-19C and 33-37Mg may prefer being deformed as a result of the Jahn-Teller effect, due to the near degeneracy of the 1d5/2-2s1/2 levels and the 1f7/2-2p3/2 levels, respectively, in the spherical potential. Furthermore, the absence of some one-particle resonant levels compared with the Nilsson diagrams in textbooks is illustrated.
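
    The textbook Nilsson diagrams mentioned here plot, against quadrupole deformation, the eigenvalues of the modified-oscillator Hamiltonian (standard form, quoted for context; this paper replaces it by a Woods-Saxon potential):

        H = \frac{\mathbf{p}^{2}}{2m}
            + \frac{m}{2}\left[\omega_{\perp}^{2}(x^{2}+y^{2}) + \omega_{z}^{2} z^{2}\right]
            - \kappa \hbar \omega_{0} \left[ 2\,\boldsymbol{\ell}\cdot\mathbf{s}
            + \mu \left( \boldsymbol{\ell}^{2} - \langle \boldsymbol{\ell}^{2} \rangle_{N} \right) \right]

    with ω⊥ and ωz fixed by the deformation and the parameters κ, μ adjusted shell by shell.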

  6. Probability theory plus noise: Replies to Crupi and Tentori (2016) and to Nilsson, Juslin, and Winman (2016).

    PubMed

    Costello, Fintan; Watts, Paul

    2016-01-01

    A standard assumption in much of current psychology is that people do not reason about probability using the rules of probability theory but instead use various heuristics or "rules of thumb," which can produce systematic reasoning biases. In Costello and Watts (2014), we showed that a number of these biases can be explained by a model where people reason according to probability theory but are subject to random noise. More importantly, that model also predicted agreement with probability theory for certain expressions that cancel the effects of random noise: experimental results strongly confirmed this prediction, showing that probabilistic reasoning is simultaneously systematically biased and "surprisingly rational." In their commentaries on that paper, both Crupi and Tentori (2016) and Nilsson, Juslin, and Winman (2016) point to various experimental results that, they suggest, our model cannot explain. In this reply, we show that our probability theory plus noise model can in fact explain every one of the results identified by these authors. This gives a degree of additional support to the view that people's probability judgments embody the rational rules of probability theory and that biases in those judgments can be explained simply as effects of random noise. © 2015 APA, all rights reserved.
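
    A minimal simulation of the probability-theory-plus-noise idea, assuming the usual reading of the model in which each sampled memory instance is misread with probability d; the event probabilities and the value of d below are illustrative, not taken from the papers:

        import random

        def noisy_estimate(p: float, d: float, n: int = 100_000) -> float:
            """Mean of n noisy reads: each Bernoulli(p) draw is flipped with
            probability d, so the expected estimate is (1 - 2d) * p + d."""
            hits = 0
            for _ in range(n):
                x = random.random() < p          # true occurrence of the event
                if random.random() < d:          # random read error flips it
                    x = not x
                hits += x
            return hits / n

        # Illustrative probabilities for events A and B.
        pA, pB, pAandB = 0.4, 0.5, 0.2
        pAorB = pA + pB - pAandB
        d = 0.15
        est = {name: noisy_estimate(p, d)
               for name, p in [("A", pA), ("B", pB),
                               ("A and B", pAandB), ("A or B", pAorB)]}
        # Each estimate is individually biased toward 0.5, but the addition-law
        # combination cancels the noise and stays close to zero:
        print(est["A"] + est["B"] - est["A and B"] - est["A or B"])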

  7. Creating a Framework for Holistic Assessment of Aesthetics: A Response to Nilsson and Axelsson (2015) on Attributes of Aesthetic Quality of Textile Quality.

    PubMed

    Carbon, Claus-Christian

    2016-02-01

    Nilsson and Axelsson (2015) made an important contribution by linking recent scientific approaches from the field of empirical aesthetics with the everyday demands on museum conservators of deciding which items to preserve. The authors made an important effort in identifying valuable candidate variables, but focused only on visual properties and on high-expertise aspects of aesthetic quality based on very sophisticated evaluations. The present article responds to the target paper by outlining a more holistic framework for future research, one that supports a multi-modal approach, notably including the tactile sense. © The Author(s) 2016.

  8. Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar

    2014-05-01

    We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in a 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based, second-order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best combines the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirement of a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
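
    The abstract does not give the exact skewed-Gaussian form, so the sketch below uses a plain Gaussian of the normalized elliptical radius, rescaled so that slip equals the peak value at the centre and falls exactly to zero at the patch border. It only illustrates the kind of smooth, border-vanishing slip distribution described; all names and numbers are hypothetical:

        import numpy as np

        def elliptical_slip(x, y, a, b, peak_slip, sigma=0.6):
            """Smooth slip over an elliptical patch with semi-axes a, b:
            Gaussian in r^2 = (x/a)^2 + (y/b)^2, shifted and rescaled so the
            slip is peak_slip at r = 0 and exactly zero at the border r = 1."""
            r2 = (x / a) ** 2 + (y / b) ** 2
            g = np.exp(-r2 / (2.0 * sigma ** 2))
            g_border = np.exp(-1.0 / (2.0 * sigma ** 2))
            slip = peak_slip * (g - g_border) / (1.0 - g_border)
            return np.where(r2 < 1.0, slip, 0.0)

        # 20 km x 10 km patch with 2 m peak slip (illustrative values).
        xx, yy = np.meshgrid(np.linspace(-12, 12, 241), np.linspace(-6, 6, 121))
        slip = elliptical_slip(xx, yy, a=10.0, b=5.0, peak_slip=2.0)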

  9. Psychometric Testing of the Chinese-Version Glover-Nilsson Smoking Behavioral Questionnaire (GN-SBQ-C) for the Identification of Nicotine Dependence in Adult Smokers in Taiwan.

    PubMed

    Chen, Shu-Ching; Chen, Hsiu-Fang; Peng, Hsi-Ling; Lee, Li-Yun; Chiang, Ting-Yu; Chiu, Hui-Chuan

    2017-04-01

    The purposes of this study were to evaluate the psychometric properties, reliability, and validity of the Chinese-version Glover-Nilsson Smoking Behavioral Questionnaire (GN-SBQ-C) and to assess behavioral nicotine dependence among community-dwelling adult smokers in Taiwan. The methods used were survey design, administration, and validation. A total of 202 adult smokers completed a survey assessing behavioral dependence, nicotine dependence, depression, social support, and demographic and smoking characteristics. Data analysis included descriptive statistics, internal consistency reliability, exploratory factor analysis, independent t tests, and Pearson product-moment correlation. The results showed that (1) the GN-SBQ-C has good internal consistency reliability and stability (2-week test-retest reliability); (2) the single extracted factor explained 41.80% of the variance, indicating construct validity; (3) the scale has acceptable concurrent validity, with significant positive correlations between the GN-SBQ-C and nicotine dependence, depression, and time smoking, and negative correlations between the GN-SBQ-C and age and exercise habit; and (4) the instrument has discriminant validity, supported by significant differences between those with high and low-to-moderate nicotine dependence, smokers older than 43 years and those 43 years old and younger, and those who smoked 10 years or less and those smoking more than 10 years. The 11-item GN-SBQ-C has satisfactory psychometric properties when applied in a sample of Taiwanese adult smokers. The scale is feasible and valid for assessing smoking behavioral dependence.
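
    The internal consistency reported above is conventionally quantified with Cronbach's alpha; a minimal sketch of the standard formula follows (the GN-SBQ-C data are not reproduced, so the input below is random placeholder data):

        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """Cronbach's alpha for an (n_respondents, n_items) score matrix:
            alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Illustrative shape only: 202 respondents x 11 items scored 0-4.
        rng = np.random.default_rng(0)
        alpha = cronbach_alpha(rng.integers(0, 5, size=(202, 11)).astype(float))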

  10. Proxy-SU(3) symmetry in heavy deformed nuclei

    NASA Astrophysics Data System (ADS)

    Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.

    2017-06-01

    Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic treatment of the Nilsson model that allows the above vetting and is also transparent in exposing the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description opens up the possibility of predicting many properties of nuclei analytically, and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions are shown in a subsequent paper.

  11. Strange history: the fall of Rome explained in Hereditas.

    PubMed

    Bengtsson, Bengt O

    2014-12-01

    In 1921 Hereditas published an article on the fall of Rome written by the famous classical scholar Martin P:son Nilsson. Why was a paper on this unexpected topic printed in the newly founded journal? To Nilsson, the demise of the Roman Empire was explained by the "bastardization" occurring between "races" from different parts of the realm. Offspring from mixed couples were of a less stable "type" than their parents, due to the breaking up by recombination of the original hereditary dispositions, which led to a general loss of competence to rule and govern. Thus, the "hardness" of human genes, together with their recombination, was, according to Nilsson, the main cause of the fall of Rome. Nilsson's argument is not particularly convincingly presented. Human "races" are taken to have the same genetic structure as inbred crop strains, and Nilsson believes in a metaphysical unity between the individual and the race to which it belongs. However, in my view, Martin P:son Nilsson and his friend Herman Nilsson-Ehle had wider aims with the article than to explain a historical event. The article can be read as indicating strong support from the classical human sciences for the ambitious new science of genetics. Support is also transferred from genetics to the conservative worldview, where the immutability and inflexibility of the Mendelian genes are used to strengthen the wish for greater stability in politics and life. The strange article in Hereditas can, thus, be read as an early instance in the still ongoing tug-of-war between the conservative and the liberal ideological poles over how genetic results are best interpreted socially. © 2015 The Authors.

  12. Revisiting the JDL Model for Information Exploitation

    DTIC Science & Technology

    2013-07-01

    High-Level Information Fusion Management and Systems Design, Artech House, Norwood, MA, 2012. [10] E. Blasch, D. A. Lambert, P. Valin, M. M. Kokar... "Fusion - Fusion2012 Panel Discussion," Int. Conf. on Info Fusion, 2012. [29] E. P. Blasch, P. Valin, A-L. Jousselme, et al., "Top Ten Trends in High... P. Valin, E. Bosse, M. Nilsson, J. Van Laere, et al., "Implication of Culture: User Roles in Information Fusion for Enhanced Situational

  13. Nuclear quadrupole resonance lineshape analysis for different motional models: Stochastic Liouville approach

    NASA Astrophysics Data System (ADS)

    Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.

    2011-12-01

    A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory" [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125], originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.
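
    For context, the stochastic Liouville equation underlying such lineshape treatments has the generic form (standard notation, not quoted from this record; ħ = 1):

        \frac{\partial \rho(\Omega, t)}{\partial t}
            = -\,i \left[ H(\Omega), \rho(\Omega, t) \right] + \Gamma_{\Omega} \, \rho(\Omega, t)

    where Ω collects the stochastic (motional) degrees of freedom modulating the quadrupole interaction and Γ_Ω is the Markov operator generating their dynamics (e.g. Brownian, free, or jump diffusion).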

  14. The Tumor Suppressor Actions of the Vitamin D Receptor in Skin

    DTIC Science & Technology

    2013-08-01

    S. Yu, et al., Basal cell carcinomas in mice overexpressing Gli2 in skin, Nature Genetics 24 (3) (2000) 216-217. [61] M. Nilsson, A.B. Unden, D. Krause ... Meineke V, Gartner BC, Wolfgang T, Holick MF, Reichrath J. Analysis of the vitamin D system in cutaneous malignancies. Recent Results Cancer Res... 63. Nilsson M, Unden AB, Krause D, Malmqwist U, Raza K, Zaphiropoulos PG, Toftgard R. Induction of basal cell carcinomas and trichoepitheliomas

  15. Issues in Adaptive Planning

    DTIC Science & Technology

    1986-06-30

    approach to the application of theorem proving to problem solving, Artificial Intelligence 2 (1971), 189-208. 4. Fikes, R., Hart, P. and Nilsson, N... by emphasizing the structure of knowledge. 1.2. Planning Literature The earliest work in planning in Artificial Intelligence grew out of the work on... References 1. Newell, A., Artificial Intelligence and the concept of mind, in Computer models of thought and language, Schank, R. and Colby, K. (editor

  16. Isomer-delayed gamma-ray spectroscopy of neutron-rich 166Tb

    NASA Astrophysics Data System (ADS)

    Gurgi, L. A.; Regan, P. H.; Söderström, P.-A.; Watanabe, H.; Walker, P. M.; Podolyák, Zs.; Nishimura, S.; Berry, T. A.; Doornenbal, P.; Lorusso, G.; Isobe, T.; Baba, H.; Xu, Z. Y.; Sakurai, H.; Sumikama, T.; Catford, W. N.; Bruce, A. M.; Browne, F.; Lane, G. J.; Kondev, F. G.; Odahara, A.; Wu, J.; Liu, H. L.; Xu, F. R.; Korkulu, Z.; Lee, P.; Liu, J. J.; Phong, V. H.; Yagi, A.; Zhang, G. X.; Alharbi, T.; Carroll, R. J.; Chae, K. Y.; Dombradi, Zs.; Estrade, A.; Fukuda, N.; Griffin, C.; Ideguchi, E.; Inabe, N.; Kanaoka, H.; Kojouharov, I.; Kubo, T.; Kubono, S.; Kurz, N.; Kuti, I.; Lalkovski, S.; Lee, E. J.; Lee, C. S.; Lotay, G.; Moon, C. B.; Nishizuka, I.; Nita, C. R.; Patel, Z.; Roberts, O. J.; Schaffner, H.; Shand, C. M.; Suzuki, H.; Takeda, H.; Terashima, S.; Vajta, Zs.; Kanaya, S.; Valiente-Dobòn, J. J.

    2017-09-01

    This short paper presents the identification of a metastable, isomeric-state decay in the neutron-rich, odd-odd, prolate-deformed nucleus 166Tb. The nucleus of interest was formed using the in-flight fission of a 345 MeV per nucleon 238U primary beam at the RIBF facility, RIKEN, Japan. Gamma-ray transitions decaying from the observed isomeric states in 166Tb were identified using the EURICA gamma-ray spectrometer, positioned at the final focus of the BigRIPS fragment separator. The current work identifies a single discrete gamma-ray transition of energy 119 keV which de-excites an isomeric state in 166Tb with a measured half-life of 3.5(4) μs. The multipolarity assignment for this transition is electric dipole and is made on the basis of internal conversion and decay lifetime arguments. Possible two-quasi-particle Nilsson configurations for the initial and final states linked by this transition in 166Tb are proposed on the basis of comparison with Blocked BCS Nilsson calculations, with the predicted ground-state configuration for this nucleus arising from the coupling of the ν1/2−[521] and π3/2+[411] Nilsson orbitals.

  17. Isomer-delayed gamma-ray spectroscopy of neutron-rich 166Tb

    DOE PAGES

    Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.; ...

    2017-09-13

    Here, this short paper presents the identification of a metastable, isomeric-state decay in the neutron-rich, odd-odd, prolate-deformed nucleus 166Tb. The nucleus of interest was formed using the in-flight fission of a 345 MeV per nucleon 238U primary beam at the RIBF facility, RIKEN, Japan. Gamma-ray transitions decaying from the observed isomeric states in 166Tb were identified using the EURICA gamma-ray spectrometer, positioned at the final focus of the BigRIPS fragment separator. The current work identifies a single discrete gamma-ray transition of energy 119 keV which de-excites an isomeric state in 166Tb with a measured half-life of 3.5(4) μs. The multipolarity assignment for this transition is electric dipole and is made on the basis of internal conversion and decay lifetime arguments. Possible two-quasi-particle Nilsson configurations for the initial and final states linked by this transition in 166Tb are proposed on the basis of comparison with Blocked BCS Nilsson calculations, with the predicted ground-state configuration for this nucleus arising from the coupling of the ν1/2−[521] and π3/2+[411] Nilsson orbitals.

  18. Isomer-delayed gamma-ray spectroscopy of neutron-rich 166Tb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.

    Here, this short paper presents the identification of a metastable, isomeric-state decay in the neutron-rich, odd-odd, prolate-deformed nucleus 166Tb. The nucleus of interest was formed using the in-flight fission of a 345 MeV per nucleon 238U primary beam at the RIBF facility, RIKEN, Japan. Gamma-ray transitions decaying from the observed isomeric states in 166Tb were identified using the EURICA gamma-ray spectrometer, positioned at the final focus of the BigRIPS fragment separator. The current work identifies a single discrete gamma-ray transition of energy 119 keV which de-excites an isomeric state in 166Tb with a measured half-life of 3.5(4) μs. The multipolarity assignment for this transition is electric dipole and is made on the basis of internal conversion and decay lifetime arguments. Possible two-quasi-particle Nilsson configurations for the initial and final states linked by this transition in 166Tb are proposed on the basis of comparison with Blocked BCS Nilsson calculations, with the predicted ground-state configuration for this nucleus arising from the coupling of the ν1/2−[521] and π3/2+[411] Nilsson orbitals.

  19. Genetics Home Reference: ataxia-pancytopenia syndrome

    MedlinePlus

    ... brain that coordinates movement (the cerebellum) and blood-forming cells in the bone marrow. The age when ... J, Gorcenco S, Rundberg Nilsson A, Ripperger T, Kokkonen H, Bryder D, Fioretos T, Henter JI, Möttönen M, ...

  20. CryoSat-2 Processing and Model Interpretation of Greenland Ice Sheet Volume Changes

    NASA Astrophysics Data System (ADS)

    Nilsson, J.; Gardner, A. S.; Sandberg Sorensen, L.

    2015-12-01

    CryoSat-2 was launched in late 2010, tasked with monitoring the changes of the Earth's land and sea ice. It carries a novel radar altimeter allowing the satellite to monitor changes in highly complex terrain, such as smaller ice caps, glaciers, and the marginal areas of the ice sheets. Here we present the development and validation of an independent elevation-retrieval processing chain and the resulting elevation changes based on ESA's L1B data. Overall, we find a large improvement in both accuracy and precision over Greenland relative to ESA's L2 product when comparing against both airborne data and crossover analysis. The seasonal component and spatial sampling of the surface elevation changes were also compared against ICESat-derived changes from 2003-2009. The comparison showed good agreement between the two products on a local scale. However, a global sampling bias was detected in the seasonal signal due to the clustering of CryoSat-2 data in higher-elevation areas. The retrieval processing chain presented here does not correct for changes in surface scattering conditions and appears to be insensitive to the 2012 melt event (Nilsson et al., 2015). This is in contrast to the elevation changes derived from ESA's L2 elevation product, which were found to be sensitive to the effects of the melt event. The positive elevation bias created by the event introduced a discrepancy between the two products with a magnitude of roughly 90 km3/year. This difference can be attributed directly to differences in the retracking procedure, pointing to the importance of retracking the radar waveforms for altimetric volume-change studies. Reference: Nilsson, J., Vallelonga, P. T., Simonsen, S. B., Sørensen, L. S., Forsberg, R., Dahl-Jensen, D., Hirabayashi, M., Goto-Azuma, K., Hvidberg, C. S., Kjær, H. A., and Satow, K. (2015). Greenland 2012 melt event effects on CryoSat-2 radar altimetry.

  1. Training Level, Acculturation, Role Ambiguity, and Multicultural Discussions in Training and Supervising International Counseling Students in the United States

    ERIC Educational Resources Information Center

    Ng, Kok-Mun; Smith, Shannon D.

    2012-01-01

    This research partially replicated Nilsson and Anderson's "Professional Psychology: Research and Practice" (2004) study on training and supervising international students. It investigated the relationships among international counseling students' training level, acculturation, supervisory working alliance (SWA), counseling self-efficacy (COSE),…

  2. 77 FR 43087 - Nomination of an In Vitro Test Method for the Identification of Contact Allergens: Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-23

    ... potential to produce allergic contact dermatitis (ACD). NICEATM also requests data generated using in vivo... ]m MA, Börje A, Luthman K, Nilsson JLG. 2008. Allergic Contact Dermatitis--Formation... Identification of Contact Allergens: Request for Comments and Data AGENCY: Division of the National Toxicology...

  3. Quantifying Relationships between Water Quality and Aquatic Life Use Attainment using Sediment Profile Imagery (SPI) in Pensacola Bay

    EPA Science Inventory

    We present results from a monthly sediment and water quality survey of nine stations along a transect in the Pensacola Bay estuary spanning the salinity gradient from Escambia River to the Gulf of Mexico. We evaluated Benthic Habitat Quality (Nilsson and Rosenberg 1997) derived f...

  4. The EUA Institutional Evaluation Programme: An Account of Institutional Best Practices

    ERIC Educational Resources Information Center

    Rosa, Maria Joao; Cardoso, Sonia; Dias, Diana; Amaral, Alberto

    2011-01-01

    When evaluating the EUA Institutional Evaluation Programme (IEP), Nilsson "et al." emphasised the interest in creating a data bank on good practices derived from its reports that would contribute to disseminating examples of effective quality management practices and to supporting mutual learning among universities. In IEP, evaluated…

  5. Quantifying Relationships between Water Quality and Aquatic Life Use Attainment using Sediment Profile Imagery (SPI)

    EPA Science Inventory

    We present results from a monthly SPI and water quality survey of nine stations along a transect in the Pensacola Bay estuary spanning the salinity gradient from Escambia River to the Gulf of Mexico. We evaluated Benthic Habitat Quality (Nilsson and Rosenberg 1997) derived from s...

  6. The Expansion of the Education Sector in Sweden During the 20th Century.

    ERIC Educational Resources Information Center

    Ohlsson, Rolf

    1985-01-01

    Three investigations on quantitative changes in higher education in Sweden are described. In Anders Nilsson's dissertation, "Study Financing and Social Recruitment to Higher Education (1920-1976)," attention was focused on changes in college recruitment from 1920 until reforms in 1977; the effect of various college financing conditions…

  7. Acoustic Sensor Network Design for Position Estimation

    DTIC Science & Technology

    2009-05-01

    A., Pollock, S., Netter, B., and Low, B. S. 2005. Anisogamy, expenditure of reproductive effort, and the optimality of having two sexes. Operations Research 53, 3, 560-567. Evans, M., Hastings, N., and Peacock, B. 2000. Statistical distributions. Wiley & Sons, New York. Feeney, L. and Nilsson, M

  8. Complexity of the Generalized Mover’s Problem.

    DTIC Science & Technology

    1985-01-01

    problem by workers in the robotics fields and in artificial intelligence (for example [Nilsson, 69], [Paul, 72], [Udupa, 77], [Widdoes, 74], [Lozano-Perez... Nilsson, "A mobile automaton: An application of artificial intelligence techniques," Proceedings IJCAI-69, 509-520, 1969. ... C. O'Dunlaing, M

  9. Microscopic insight into the structure of gallium isotopes

    NASA Astrophysics Data System (ADS)

    Verma, Preeti; Sharma, Chetan; Singh, Suram; Bharti, Arun; Khosa, S. K.

    2012-07-01

    The Projected Shell Model technique has been applied to odd-A 71-81Ga nuclei with deformed single-particle states generated by the standard Nilsson potential. Various nuclear structure quantities have been calculated with this technique and compared with the available experimental data. The known experimental data on the yrast bands in these nuclei are persuasively described, and the band diagrams obtained show that the yrast bands in these odd-A Ga isotopes are not built on a single intrinsic state alone but also contain multi-particle states. The back-bending in the moment of inertia and the electric quadrupole transitions are also calculated.

  10. Helium-induced one-neutron transfer to levels in 162Dy

    NASA Astrophysics Data System (ADS)

    Andersen, E.; Helstrup, H.; Løvhøiden, G.; Thorsteinsen, T. F.; Guttormsen, M.; Messelt, S.; Tveter, T. S.; Hofstee, M. A.; Schippers, J. M.; van der Werf, S. Y.

    1992-12-01

    Levels in 162Dy have been studied in the 161Dy(α, 3He) and 163Dy( 3He, α) reactions with 50 MeV α- and 3He-beams from the KVI cyclotron in Groningen. The reaction products were analyzed in the QMG/2 magnetic spectrograph and registered in a two-dimensional detector system. The observed levels and cross sections are well described by the Nilsson model with the exception of the three levels at 1578, 1759 and 1990 keV. The present data combined with previous results strongly indicate that these levels are the spin-4, -6, and -8 members of the S-band.

  11. Relative properties of smooth terminating bands

    NASA Astrophysics Data System (ADS)

    Afanasjev, A. V.; Ragnarsson, I.

    1998-01-01

    The relative properties of smooth terminating bands observed in the A ∼ 110 mass region are studied within the effective alignment approach. Theoretical values of i_eff are calculated using the configuration-dependent shell-correction model with the cranked Nilsson potential. Reasonable agreement with experiment shows that previous interpretations of these bands are consistent with the present study. Contrary to the case of superdeformed bands, the effective alignments of these bands deviate significantly from the pure single-particle alignments of the corresponding orbitals. This indicates that in the case of smooth terminating bands, the effects associated with changes in equilibrium deformations contribute significantly to the effective alignment.
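
    For context, the effective alignment used here is conventionally defined as the difference in spin between two bands at the same rotational frequency (standard definition, quoted for orientation, not from this record):

        i_{\mathrm{eff}}(\omega) = I_{B}(\omega) - I_{A}(\omega)

    so that for superdeformed bands i_eff is close to the alignment of the orbitals by which the two configurations differ, whereas here changes in equilibrium deformation also contribute.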

  12. Geomorphic classification of rivers

    Treesearch

    J. M. Buffington; D. R. Montgomery

    2013-01-01

    Over the last several decades, environmental legislation and a growing awareness of historical human disturbance to rivers worldwide (Schumm, 1977; Collins et al., 2003; Surian and Rinaldi, 2003; Nilsson et al., 2005; Chin, 2006; Walter and Merritts, 2008) have fostered unprecedented collaboration among scientists, land managers, and stakeholders to better understand,...

  13. Where to cut, where to run : prospects for U.S. South softwood timber supplies and prices

    Treesearch

    Henry Spelter

    1999-01-01

    A review of market history shows that southern pine sawtimber stumpage prices have increased by over 150 percent in this decade (Timber Mart South). Concurrently, some authors (e.g., Cubbage and Abt (1996), Nilsson et al. (1999)) have questioned the adequacy of southern timber supplies to meet demands, which are projected to increase by...

  14. Observation of high-spin bands with large moments of inertia in 124Xe

    DOE PAGES

    Nag, Somnath; Singh, A. K.; Hagemann, G. B.; ...

    2016-09-07

    In this paper, high-spin states in 124Xe have been populated using the 80Se(48Ca, 4n) reaction at a beam energy of 207 MeV, and high-multiplicity γ-ray coincidence events were measured using the Gammasphere spectrometer. Six high-spin rotational bands with moments of inertia similar to those observed in neighboring nuclei have been observed. The experimental results are compared with calculations within the framework of the Cranked Nilsson-Strutinsky model. Finally, it is suggested that the configurations of the bands involve excitations of protons across the Z = 50 shell gap coupled to neutrons within the N = 50 - 82 shell or excited across the N = 82 shell closure.

  15. Observation of high-spin bands with large moments of inertia in 124Xe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nag, Somnath; Singh, A. K.; Hagemann, G. B.

    In this paper, high-spin states in 124Xe have been populated using the 80Se(48Ca, 4n) reaction at a beam energy of 207 MeV, and high-multiplicity γ-ray coincidence events were measured using the Gammasphere spectrometer. Six high-spin rotational bands with moments of inertia similar to those observed in neighboring nuclei have been observed. The experimental results are compared with calculations within the framework of the Cranked Nilsson-Strutinsky model. Finally, it is suggested that the configurations of the bands involve excitations of protons across the Z = 50 shell gap coupled to neutrons within the N = 50 - 82 shell or excited across the N = 82 shell closure.

  16. Simultaneous Planning and Control for Autonomous Ground Vehicles

    DTIC Science & Technology

    2009-02-01

    these applications is called A* (A-star), and it was originally developed by Hart, Nilsson, and Raphael [HAR68]. Their research presented the formal... sequence, rather than a dynamic programming approach. A* search is a technique originally developed for Artificial Intelligence applications ... developed at the Center for Intelligent Machines and Robotics, serves as a platform for the implementation and testing discussed. autonomous
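
    Since the snippet above names the algorithm without showing it, here is a compact sketch of textbook A* (Hart, Nilsson & Raphael, 1968); the grid example and all names are illustrative only:

        import heapq
        import itertools

        def a_star(start, goal, neighbors, heuristic):
            """A*: expand nodes in order of f(n) = g(n) + h(n), where g is the
            cost so far and h an admissible estimate of the remaining cost.
            `neighbors(n)` yields (next_node, step_cost) pairs."""
            tie = itertools.count()  # tie-breaker so the heap never compares nodes
            frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
            best_g = {start: 0.0}
            while frontier:
                _f, _, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path, g
                for nxt, cost in neighbors(node):
                    g2 = g + cost
                    if g2 < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g2
                        heapq.heappush(frontier, (g2 + heuristic(nxt), next(tie),
                                                  g2, nxt, path + [nxt]))
            return None, float("inf")

        # Tiny 4x4 grid demo with unit moves and a Manhattan-distance heuristic.
        goal = (3, 3)
        def grid_neighbors(p):
            x, y = p
            for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= q[0] <= 3 and 0 <= q[1] <= 3:
                    yield q, 1.0
        path, cost = a_star((0, 0), goal, grid_neighbors,
                            lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))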

  17. Northeast Artificial Intelligence Consortium Annual Report. Volume 2. 1988 Discussing, Using, and Recognizing Plans (NLP)

    DTIC Science & Technology

    1989-10-01

    Encontro Portugues de Inteligencia Artificial (EPIA), Oporto, Portugal, September 1985. [15] N. J. Nilsson. Principles of Artificial Intelligence. Tioga... RADC-TR-89-259, Vol II (of twelve), Interim Report, October 1989. AD-A218 154. NORTHEAST ARTIFICIAL INTELLIGENCE CONSORTIUM ANNUAL...

  18. Isomer spectroscopy of neutron-rich 165,167Tb

    DOE PAGES

    Gurgi, L. A.; Regan, P. H.; Soderstrom, P. -A.; ...

    2017-01-01

    We present information on the excited states in the prolate-deformed, neutron-rich nuclei 165,167Tb (N = 100, 102). The nuclei of interest were synthesised following in-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm 9Be target at the Radioactive Ion-Beam Factory (RIBF), RIKEN, Japan. The exotic nuclei were separated and identified event-by-event using the BigRIPS separator, with discrete-energy gamma-ray decays from isomeric states with half-lives in the μs regime measured using the EURICA gamma-ray spectrometer. Metastable-state decays are identified in 165Tb and 167Tb and interpreted as arising from hindered E1 decay from the 7/2−[523] single quasi-proton Nilsson configuration to rotational states built on the 3/2+[411] single quasi-proton ground state. Lastly, these data correspond to the first spectroscopic information on the heaviest odd-A terbium isotopes reported to date and provide information on proton Nilsson configurations which reside close to the Fermi surface as the doubly mid-shell nucleus 170Dy is approached.

  19. Laser System Usage in the Marine Environment: Applications and Environmental Considerations

    DTIC Science & Technology

    2010-12-01

    publications/pubs/index.html. Released by Bart Chadwick, Head, Environmental Sciences Branch, under authority of Martin Machniak, Head, Research... Nilsson and Lindstrom, 1983; Shelton, Gaten, and Chapman, 1985). Data on the effects of laser energy on corals also are lacking, although it can be... L. and M. Lindstrom. 1983. "Retinal Damage and Sensitivity Loss of a Light-Sensitive Crustacean Compound Eye (Cirolana borealis): Electron

  20. Operations Monitoring Assistant System Design

    DTIC Science & Technology

    1986-07-01

    Logic. Artificial Intelligence 25(1):75-94, January. ... Nils J. Nilsson. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill Book... operations monitoring assistant (OMA) system is designed that combines operations research, artificial intelligence, and human reasoning techniques and... KnowledgeCraft (from Carnegie Group), and S.1 (from Teknowledge). These tools incorporate the best methods of applied artificial intelligence, and

  1. Emerging Concepts for Integrating Human and Environmental Water Needs in River Basin Management

    DTIC Science & Technology

    2005-09-01

    SCOWAR concluded (Naiman et al. 2002): "the major challenge to freshwater management is to place water resource development within the context of... rate. It has been suggested (Naiman et al. 2002) that there are three overarching ecological principles for water resources management. These are... been expanded into another six key principles by Bunn and Arthington (2002), Nilsson and Svedmark (2002), and Pinay et al. (2002): a. Flow is a major

  2. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    and Sipser, M. 2000. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106. Fikes, R. E., and Nilsson, N. J. 1972. STRIPS: A new... become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum

  3. Shape evolution with angular momentum in Lu isotopes

    NASA Astrophysics Data System (ADS)

    Kardan, Azam; Sayyah, Sepideh

    2016-06-01

    The nuclear potential energies of Lu isotopes with neutron number N = 90 - 98 up to high spins are computed within the framework of the unpaired cranked Nilsson-Strutinsky method. The potential and the macroscopic Lublin-Strasbourg drop (LSD) energy-surface diagrams are analyzed in terms of quadrupole deformation and triaxiality parameter. The shape evolution of these isotopes with respect to angular momentum, as well as the neutron number is studied.

  4. Absence of paired crossing in the positive parity bands of 124Cs

    NASA Astrophysics Data System (ADS)

    Singh, A. K.; Basu, A.; Nag, Somnath; Hübel, H.; Domscheit, J.; Ragnarsson, I.; Al-Khatib, A.; Hagemann, G. B.; Herskind, B.; Elema, D. R.; Wilson, J. N.; Clark, R. M.; Cromaz, M.; Fallon, P.; Görgen, A.; Lee, I.-Y.; Ward, D.; Ma, W. C.

    2018-02-01

    High-spin states in 124Cs were populated in the 64Ni(64Ni, p3n) reaction, and the Gammasphere detector array was used to measure γ-ray coincidences. Both positive- and negative-parity bands, including bands with chiral configurations, have been extended to higher spin, where a shape change has been observed. The configurations of the bands before and after the alignment are discussed within the framework of the cranked Nilsson-Strutinsky model. The calculations suggest that the nucleus undergoes a shape transition from triaxial to prolate around spin I ≈ 22 of the positive-parity states. The alignment gain of 8ℏ observed in the positive-parity bands is due to the partial alignment of several valence nucleons. This indicates the absence of a band crossing due to paired nucleons in the bands.

  5. Superdeformation in the a Approximately 190 Mass Region and Shape Coexistence in LEAD-194

    NASA Astrophysics Data System (ADS)

    Brinkman, Matthew James

    Near-yrast states in 194Pb have been identified up to a spin of ~35ℏ following the 176Yb(24Mg,6n)194Pb* reaction at a beam energy of 134 MeV, measured with the High Energy-Resolution Array located at the Lawrence Berkeley Laboratory 88-Inch Cyclotron facility. Eighteen new transitions were placed. Examples of non-collective prolate and oblate and collective oblate excitations are seen. In addition, a rotational band consisting of twelve transitions, with energy spacings characteristic of superdeformed shapes, was also seen. These results have been interpreted using both Nilsson model calculations and previously published potential energy surface calculations. The superdeformed bands in the A ~ 190 mass region are discussed with primary emphasis on ten superdeformed bands in 192,193,194Hg and 192,194,196,198Pb discovered or codiscovered by our collaboration. The discussion of superdeformation in these nuclei has been broken into three portions, focusing on the population of, the physics associated with, and the depopulation of these bands, respectively. The population behavior of the superdeformed structures is presented and discussed with respect to theoretical predictions for nuclei near A ~ 190 expected to support superdeformation. A detailed analysis of the population of the 193Hg band 1a is provided, and the results are compared with statistical model calculation predictions. Significant differences were found between the population of the superdeformed bands in the A ~ 150 and 190 mass regions. The systematics of the intraband region are presented. Nilsson model calculations are carried out, with nucleon configurations for the primary superdeformed bands proposed. A discussion of possible mechanisms for reproducing the smooth increase in dynamic moments of inertia observed in all superdeformed bands in this mass region is provided. A number of superdeformed bands in the A ~ 190 mass region have transition energies

  6. A microscopic explanation of the isotonic multiplet at N=90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, J. B., E-mail: jbgupta2011@gmail.com

    2014-08-14

    The shape phase transition from spherical to soft deformed at N=88-90 was observed long ago. After the prediction of the X(5) symmetry, for which an analytical solution of the nuclear Hamiltonian is given [1], good examples of X(5) nuclei were identified in the N=90 isotones of Nd, Sm, Gd and Dy in recent works. The N=90 isotones have almost the same deformed level structure, forming an isotonic multiplet in the Z=50-66, N=82-104 quadrant. This is explained microscopically in terms of the Nilsson level diagram. Using the Dynamic Pairing-Plus-Quadrupole model of Kumar-Baranger, the quadrupole deformation and the occupancies of the neutrons and protons in these nuclei have been calculated, which support the formation of the N=88, 90 isotonic multiplets. The existence of F-spin multiplets in the Z=66-82, N=82-104 quadrant, identified in earlier works on the Interacting Boson Model, is also explained in our study.

  7. An Experimental Characterization of Damping Properties of Thermal Barrier Coatings at Elevated Temperatures

    DTIC Science & Technology

    2011-03-01

    zirconium. For the standard Brayton open-cycle gas turbine, typical of modern aircraft power plants, the thermodynamic efficiency is heavily driven by... linearize the radiation emission term around Ti,j0 from the previous step, Taylor expand, and rearrange Eq. (23) in terms of Ti,j to apply as... York: Wiley. 2004. Nilsson, J. W., and Riedel, S. A. Electric Circuits. Prentice Hall. 2007. Noda, N. Thermal Stresses. Taylor & Francis. 2002

  8. Advanced Methods of Approximate Reasoning

    DTIC Science & Technology

    1990-11-30

    about Knowledge and Action. Technical Note 191, Menlo Park, California: SRI International. 1980. [26] N.J. Nilsson. Probabilistic logic. Artificial... reasoning. Artificial Intelligence, 13:81-132, 1980. [30] R. Reiter. On closed world data bases. In H. Gallaire and J. Minker, editors, Logic and Data... specially grateful to Dr. Abraham Waksman of the Air Force Office of Scientific Research and Dr. David Hislop of the Army Research Office for their

  9. Collective and non-collective structures in nuclei of mass region A ≈ 125

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, A. K.; Collaboration: INGA Collaboration; Gammasphere Collaboration

    Generation of angular momentum in nuclei is a key question in nuclear structure studies. In the single-particle model, it is due to the alignment of the spins of individual nucleons available in the valence space, whereas coherent motion of nucleons is assumed in the collective model. The nuclei near the closed shell at Z = 50 with mass number A ≈ 120-125 represent ideal cases to explore the interplay between these competing mechanisms and the transition from non-collective to collective behavior or vice versa. Recent spectroscopic studies of nuclei in this region reveal several non-collective maximally aligned states representing the first kind of excitation mechanism, where 8-12 particles above the 114Sn core align their spins to generate these states. Deformed rotational bands feeding the non-collective states in the spin range I = 20-25 and at excitation energies around 10 MeV have also been observed. The structure of the collective and non-collective states is discussed in the framework of the Cranked Nilsson-Strutinsky model.

  10. Phase transition at N = 92 in 158Dy

    NASA Astrophysics Data System (ADS)

    Gupta, J. B.

    2016-09-01

    Beyond the shape phase transition from the spherical vibrator to the deformed rotor regime at N = 90, the interplay of the β- and γ-degrees of freedom becomes important, which affects the relative positions of the Kπ = 0+ β- and Kπ = 2+ γ-bands. In the microscopic approach of the dynamic pairing plus quadrupole model, a correlation between the strength of the quadrupole force and the formation of the β- and γ-bands in 158Dy is described. The role of the potential energy surface is illustrated. The E2 transition rates in the lower three K-bands and the multi-phonon bands with Kπ = 0+, 2+ and 4+ are well reproduced. The absolute B(E2; 2i+ → 02+) (i = 2, 3) serves as a good measure of the quadrupole strength. The role of the single-particle Nilsson orbits is also described.

  11. Screening for and Inheritance of Resistance to Barley Leaf Stripe (Drechslera graminea),

    DTIC Science & Technology

    1987-12-01

    JORGENSEN, J.H. (1986). Field assessment of partial resistance to powdery mildew in spring barley. Euphytica 35, 233-243. KRISTIANSSON, B. and NILSSON, B... the Laevigatum powdery mildew resistance via 'Vada' and 'Minerva'. This suggests this resistance occurs in many varieties descending from 'Vada'... kept free from powdery mildew by spraying with Bayleton (25% triadimefon WP) both in the greenhouse and in the field. This fungicide does not affect the

  12. DC Characteristics of InAs/AlSb HEMTs at Cryogenic Temperatures

    DTIC Science & Technology

    2009-05-01

    Molecular Beam Epitaxy - MBE XIV, April 2007, Volumes 301-302, Pages 1025-1029. Fig. 5: SEM image showing the 2x50 μm InAs/AlSb HEMT... started with a heterostructure grown by molecular beam epitaxy on a semi-insulating InP substrate. The heterostructure is shown in Fig. 1. Mesa isolation... DC characteristics of InAs/AlSb HEMTs at cryogenic temperatures. G. Moschetti, P-Å Nilsson, N. Wadefalk, M. Malmkvist, E. Lefebvre, J. Grahn

  13. VizieR Online Data Catalog: SDSS optically selected BL Lac candidates (Kuegler+, 2014)

    NASA Astrophysics Data System (ADS)

    Kuegler, S. D.; Nilsson, K.; Heidt, J.; Esser, J.; Schultz, T.

    2014-11-01

    The data that we use for variability and host galaxy analysis were presented in Paper I (Heidt & Nilsson, 2011A&A...529A.162H, Cat. J/A+A/529/A162). Altogether, 123 targets were observed at the ESO New Technology Telescope (NTT) on La Silla, Chile, during Oct. 2-6, 2008 and Mar. 28-Apr. 1, 2009. The observations were made with the EFOSC2 instrument through a Gunn-r filter (#786). (2 data files).

  14. AFOSR Indo-UK-US Joint Physics Initiative for Study of Angular Optical Mode Fiber Amplification

    DTIC Science & Technology

    2017-02-20

    AFRL-AFOSR-UK-TR-2017-0011. AFOSR Indo-UK-US Joint Physics Initiative for study of angular optical mode fiber amplification. Johan Nilsson, UNIVERSITY... EOARD, Unit 4515, APO AE 09421-4515... During this travel, he had the opportunity to visit the Kirtland Air Force Base and interact with Dr Leanne Henry as well as Dr Iyad Dajani to discuss

  15. Defining ecosystem flow requirements for the Bill Williams River, Arizona

    USGS Publications Warehouse

    Shafroth, Patrick B.; Beauchamp, Vanessa B.

    2006-01-01

    Alteration of natural river flows resulting from the construction and operation of dams can result in substantial changes to downstream aquatic and bottomland ecosystems and undermine the long-term health of native species and communities (for general review, cf. Ward and Stanford, 1995; Baron and others, 2002; Nilsson and Svedmark, 2002). Increasingly, land and water managers are seeking ways to manage reservoir releases to produce flow regimes that simultaneously meet human needs and maintain the health and sustainability of downstream biota.

  16. Hearing loss is negatively related to episodic and semantic long-term memory but not to short-term memory.

    PubMed

    Rönnberg, Jerker; Danielsson, Henrik; Rudner, Mary; Arlinger, Stig; Sternäng, Ola; Wahlin, Ake; Nilsson, Lars-Göran

    2011-04-01

    To test the relationship between degree of hearing loss and different memory systems in hearing aid users. Structural equation modeling (SEM) was used to study the relationship between auditory and visual acuity and different cognitive and memory functions in an age-heterogeneous subsample of 160 hearing aid users without dementia, drawn from the Swedish prospective cohort aging study known as Betula (L.-G. Nilsson et al., 1997). Hearing loss was selectively and negatively related to episodic and semantic long-term memory (LTM) but not short-term memory (STM) performance. This held true for both ears, even when age was accounted for. Visual acuity alone, or in combination with auditory acuity, did not contribute to any acceptable SEM solution. The overall relationships between hearing loss and memory systems were predicted by the ease of language understanding model (J. Rönnberg, 2003), but the exact mechanisms of episodic memory decline in hearing aid users (i.e., mismatch/disuse, attentional resources, or information degradation) remain open for further experiments. The hearing aid industry should strive to design signal processing algorithms that are cognition friendly.

  17. Heuristics can produce surprisingly rational probability estimates: Comment on Costello and Watts (2014).

    PubMed

    Nilsson, Håkan; Juslin, Peter; Winman, Anders

    2016-01-01

    Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments, were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions of the model suggested by Costello and Watts (2014). © 2015 APA, all rights reserved.

  18. Isomer spectroscopy of neutron-rich 168Tb103

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.

    In-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm thick 9Be target has been used to produce and study the decays of a range of neutron-rich nuclei centred around the doubly mid-shell nucleus 170Dy at the RIBF Facility, RIKEN, Japan. The produced secondary fragments of interest were identified event-by-event using the BigRIPS separator. The fragments were implanted into the WAS3ABI position-sensitive silicon active stopper, which allowed pixelated correlations between implants and their subsequent β-decay. Discrete γ-ray transitions emitted following decays from either metastable states or excited states populated following beta decay were identified using the 84 coaxial high-purity germanium (HPGe) detectors of the EURICA spectrometer, which was complemented by 18 additional cerium-doped lanthanum bromide (LaBr3) fast-timing scintillation detectors from the FATIMA collaboration. This paper presents the internal decay of a metastable isomeric excited state in the odd-odd nucleus 168Tb, which corresponds to a single proton-neutron hole configuration in the valence maximum nucleus 170Dy. These data represent the first information on excited states in this nucleus, which is the most neutron-rich odd-odd isotope of terbium (Z = 65) studied to date. Nilsson configurations associated with an axially symmetric, prolate-deformed nucleus are proposed for the 168Tb ground state and the observed isomeric state by comparison with Blocked BCS-Nilsson calculations.

  20. Isomer spectroscopy of neutron-rich 168Tb103

    DOE PAGES

    Gurgi, L. A.; Regan, P. H.; Söderström, P.-A.; ...

    2016-12-29

    In-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm thick 9Be target has been used to produce and study the decays of a range of neutron-rich nuclei centred around the doubly mid-shell nucleus 170Dy at the RIBF Facility, RIKEN, Japan. The produced secondary fragments of interest were identified event-by-event using the BigRIPS separator. The fragments were implanted into the WAS3ABI position sensitive silicon active stopper which allowed pixelated correlations between implants and their subsequent β-decay. Discrete γ-ray transitions emitted following decays from either metastable states or excited states populated following beta decay were identified using the 84 coaxial high-purity germanium (HPGe) detectors of the EURICA spectrometer, which was complemented by 18 additional cerium-doped lanthanum bromide (LaBr3) fast-timing scintillation detectors from the FATIMA collaboration. This paper presents the internal decay of a metastable isomeric excited state in the odd-odd nucleus 168Tb, which corresponds to a single proton-neutron hole configuration in the valence maximum nucleus 170Dy. These data represent the first information on excited states in this nucleus, which is the most neutron-rich odd-odd isotope of terbium (Z = 65) studied to date. Here, Nilsson configurations associated with an axially symmetric, prolate-deformed nucleus are proposed for the 168Tb ground state and the observed isomeric state by comparison with Blocked BCS-Nilsson calculations.

  1. Isomer spectroscopy of neutron-rich 168Tb103

    NASA Astrophysics Data System (ADS)

    Gurgi, L. A.; Regan, P. H.; Söderström, P.-A.; Watanabe, H.; Walker, P. M.; Podolyák, Zs.; Nishimura, S.; Berry, T. A.; Doornenbal, P.; Lorusso, G.; Isobe, T.; Baba, H.; Xu, Z. Y.; Sakurai, H.; Sumikama, T.; Catford, W. N.; Bruce, A. M.; Browne, F.; Lane, G. J.; Kondev, F. G.; Odahara, A.; Wu, J.; Liu, H. L.; Xu, F. R.; Korkulu, Z.; Lee, P.; Liu, J. J.; Phong, V. H.; Yagi, A.; Zhang, G. X.; Alharbi, T.; Carroll, R. J.; Chae, K. Y.; Dombradi, Zs.; Estrade, A.; Fukuda, N.; Griffin, C.; Ideguchi, E.; Inabe, N.; Kanaoka, H.; Kojouharov, I.; Kubo, T.; Kubono, S.; Kurz, N.; Kuti, I.; Lalkovski, S.; Lee, E. J.; Lee, C. S.; Lotay, G.; Moon, C.-B.; Nishizuka, I.; Nita, C. R.; Patel, Z.; Roberts, O. J.; Schaffner, H.; Shand, C. M.; Suzuki, H.; Takeda, H.; Terashima, S.; Vajta, Zs.; Yoshida, S.; Valiente-Dòbon, J. J.

    2017-11-01

    In-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm thick 9Be target has been used to produce and study the decays of a range of neutron-rich nuclei centred around the doubly mid-shell nucleus 170Dy at the RIBF Facility, RIKEN, Japan. The produced secondary fragments of interest were identified event-by-event using the BigRIPS separator. The fragments were implanted into the WAS3ABI position sensitive silicon active stopper which allowed pixelated correlations between implants and their subsequent β-decay. Discrete γ-ray transitions emitted following decays from either metastable states or excited states populated following beta decay were identified using the 84 coaxial high-purity germanium (HPGe) detectors of the EURICA spectrometer, which was complemented by 18 additional cerium-doped lanthanum bromide (LaBr3) fast-timing scintillation detectors from the FATIMA collaboration. This paper presents the internal decay of a metastable isomeric excited state in the odd-odd nucleus 168Tb, which corresponds to a single proton-neutron hole configuration in the valence maximum nucleus 170Dy. These data represent the first information on excited states in this nucleus, which is the most neutron-rich odd-odd isotope of terbium (Z=65) studied to date. Nilsson configurations associated with an axially symmetric, prolate-deformed nucleus are proposed for the 168Tb ground state and the observed isomeric state by comparison with Blocked BCS-Nilsson calculations.

  2. Decay properties of 243Bk and 244Bk

    NASA Astrophysics Data System (ADS)

    Ahmad, I.; Kondev, F. G.; Greene, J. P.; Zhu, S.

    2018-01-01

    Electron capture decays of 243Bk and 244Bk have been studied by measuring the γ-ray spectra of mass-separated sources, and level structures of 243Cm and 244Cm have been deduced. In 243Cm, electron-capture population of the ground state and of the 1/2+[631] and 1/2+[620] Nilsson states has been observed. The octupole Kπ=2− band was identified in 244Cm at 933.6 keV. In addition, spins and parities were deduced for several other states and two-quasiparticle configurations have been tentatively assigned to them.

  3. An Expert System Framework for Adaptive Evidential Reasoning: Application to In-Flight Route Re-Planning

    DTIC Science & Technology

    1986-03-21

    ...itative frameworks (e.g., Doyle, Toulmin, P. Cohen), and efforts to synthesize logic and probability (Nilsson...logic allows for provisional acceptance of uncertain premises, which may later be retracted when they lead to contradictory conclusions. Toulmin (1958...AI researchers] have accepted without hesitation as impeccable." * The basic framework of an argument, according to Toulmin, is as follows (Toulmin

  4. Decay properties of 243Bk and 244Bk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, I.; Kondev, F. G.; Greene, J. P.

    2018-01-01

    Electron capture decays of Bk-243 and Bk-244 have been studied by measuring the gamma-ray spectra of mass-separated sources, and level structures of Cm-243 and Cm-244 have been deduced. In Cm-243, electron-capture population of the ground state and of the 1/2(+)[631] and 1/2(+)[620] Nilsson states has been observed. The octupole K-pi = 2(-) band was identified in Cm-244 at 933.6 keV. In addition, spins and parities were deduced for several other states and two-quasiparticle configurations have been tentatively assigned to them.

  5. Joint analysis of ESR lineshapes and 1H NMRD profiles of DOTA-Gd derivatives by means of the slow motion theory

    NASA Astrophysics Data System (ADS)

    Kruk, D.; Kowalewski, J.; Tipikin, D. S.; Freed, J. H.; Mościcki, M.; Mielczarek, A.; Port, M.

    2011-01-01

    The "Swedish slow motion theory" [Nilsson and Kowalewski, J. Magn. Reson. 146, 345 (2000)] applied so far to Nuclear Magnetic Relaxation Dispersion (NMRD) profiles for solutions of transition metal ion complexes has been extended to ESR spectral analysis, including in addition g-tensor anisotropy effects. The extended theory has been applied to interpret in a consistent way (within one set of parameters) NMRD profiles and ESR spectra at 95 and 237 GHz for two Gd(III) complexes denoted as P760 and P792 (hydrophilic derivatives of DOTA-Gd, with molecular masses of 5.6 and 6.5 kDa, respectively). The goal is to verify the applicability of the commonly used pseudorotational model of the transient zero field splitting (ZFS). According to this model the transient ZFS is described by a tensor of a constant amplitude, defined in its own principal axes system, which changes its orientation with respect to the laboratory frame according to the isotropic diffusion equation with a characteristic time constant (correlation time) reflecting the time scale of the distortional motion. This unified interpretation of the ESR and NMRD leads to reasonable agreement with the experimental data, indicating that the pseudorotational model indeed captures the essential features of the electron spin dynamics.

  6. Skyrme RPA description of γ-vibrational states in rare-earth nuclei

    NASA Astrophysics Data System (ADS)

    Nesterenko, V. O.; Kartavenko, V. G.; Kleinig, W.; Kvasil, J.; Repko, A.; Jolos, R. V.; Reinhard, P.-G.

    2016-01-01

    The lowest γ-vibrational states with Kπ = 2+ in well-deformed Dy, Er and Yb isotopes are investigated within the self-consistent separable quasiparticle random-phase-approximation (QRPA) approach based on the Skyrme functional. The energies Eγ and reduced transition probabilities B(E2)γ of the states are calculated with the Skyrme force SV-mas10. We demonstrate the strong effect of the pairing blocking on the energies of γ-vibrational states. It is also shown that the collectivity of γ-vibrational states is strictly determined by the Nilsson selection rules obeyed by the corresponding lowest 2qp configurations.

  7. Body mass index and its relation to neuropsychological functioning and brain volume in healthy older adults.

    PubMed

    Gogniat, Marissa Ann; Robinson, Talia Loren; Mewborn, Catherine Mattocks; Jean, Kharine Renee; Miller, L Stephen

    2018-04-22

    Obesity is a growing concern worldwide because of its adverse health effects, including its negative impact on cognitive functioning. This concern is especially relevant for older adults, who are already likely to experience some cognitive decline and loss of brain volume due to aging (Gea et al., 2002). However, there is some evidence that higher body mass index (BMI) may actually be protective in later life (Hughes et al., 2009; Luchsinger et al., 2007; Nilsson and Nilsson, 2009; Sturman et al., 2008). Therefore, the purpose of the current study was to assess the relationship between BMI and neuropsychological functioning in older adults, and concurrently the relationship between BMI and brain volume. Older adults (N = 88) reported height and weight to determine BMI (M = 26.5) based on Centers for Disease Control and Prevention (CDC) guidelines. Cognitive function was assessed with the Repeatable Battery for Assessment of Neuropsychological Status (RBANS). Brain volume measurements were evaluated via structural MRI. Results indicated no association between BMI and neuropsychological functioning. There was a significant association between BMI and total grey matter volume while controlling for age and years of education (β = 0.208, p = .026, ΔR² = 0.043), indicating that as BMI increased, brain volume in these areas modestly increased. However, these results did not survive multiple comparison corrections and were further attenuated to near significance when sex was explicitly added as an additional covariate. Nevertheless, while replication is clearly needed, these results suggest that moderately greater BMI in later life may modestly attenuate concomitant grey matter volume decline.
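
    The ΔR² statistic reported above is the R² increment when BMI enters a regression already containing age and education; a synthetic sketch of that bookkeeping (the sample size and mean BMI follow the abstract, everything else is invented):

      import numpy as np

      def r2(X, y):
          # R-squared of an ordinary least-squares fit with intercept.
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1 - resid.var() / y.var()

      rng = np.random.default_rng(2)
      n = 88                                    # sample size from the study
      age = rng.uniform(60, 85, n)
      edu = rng.uniform(8, 20, n)
      bmi = rng.normal(26.5, 4, n)              # mean BMI of 26.5, as reported
      grey = 600 - 1.5 * age + 0.8 * bmi + rng.normal(0, 10, n)  # synthetic volumes

      base = r2(np.column_stack([age, edu]), grey)
      full = r2(np.column_stack([age, edu, bmi]), grey)
      print(f"delta R^2 for BMI = {full - base:.3f}")   # a synthetic analogue of 0.043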

  8. Inorganic Arsenic–Related Changes in the Stromal Tumor Microenvironment in a Prostate Cancer Cell–Conditioned Media Model

    PubMed Central

    Shearer, Joseph J.; Wold, Eric A.; Umbaugh, Charles S.; Lichti, Cheryl F.; Nilsson, Carol L.; Figueiredo, Marxa L.

    2015-01-01

    Background: The tumor microenvironment plays an important role in the progression of cancer by mediating stromal–epithelial paracrine signaling, which can aberrantly modulate cellular proliferation and tumorigenesis. Exposure to environmental toxicants, such as inorganic arsenic (iAs), has also been implicated in the progression of prostate cancer. Objective: The role of iAs exposure in stromal signaling in the tumor microenvironment has been largely unexplored. Our objective was to elucidate molecular mechanisms of iAs-induced changes to stromal signaling by an enriched prostate tumor microenvironment cell population, adipose-derived mesenchymal stem/stromal cells (ASCs). Results: ASC-conditioned media (CM) collected after 1 week of iAs exposure increased prostate cancer cell viability, whereas CM from ASCs that received no iAs exposure decreased cell viability. Cytokine array analysis suggested changes to cytokine signaling associated with iAs exposure. Subsequent proteomic analysis suggested a concentration-dependent alteration to the HMOX1/THBS1/TGFβ signaling pathway by iAs. These results were validated by quantitative reverse transcriptase–polymerase chain reaction (RT-PCR) and Western blotting, confirming a concentration-dependent increase in HMOX1 and a decrease in THBS1 expression in ASC following iAs exposure. Subsequently, we used a TGFβ pathway reporter construct to confirm a decrease in stromal TGFβ signaling in ASC following iAs exposure. Conclusions: Our results suggest a concentration-dependent alteration of stromal signaling: specifically, attenuation of stromal-mediated TGFβ signaling following exposure to iAs. Our results indicate iAs may enhance prostate cancer cell viability through a previously unreported stromal-based mechanism. These findings indicate that the stroma may mediate the effects of iAs in tumor progression, which may have future therapeutic implications.

  9. Evidence of nontermination of collective rotation near the maximum angular momentum in Rb75

    NASA Astrophysics Data System (ADS)

    Davies, P. J.; Afanasjev, A. V.; Wadsworth, R.; Andreoiu, C.; Austin, R. A. E.; Carpenter, M. P.; Dashdorj, D.; Finlay, P.; Freeman, S. J.; Garrett, P. E.; Görgen, A.; Greene, J.; Grinyer, G. F.; Hyland, B.; Jenkins, D. G.; Johnston-Theasby, F. L.; Joshi, P.; Macchiavelli, A. O.; Moore, F.; Mukherjee, G.; Phillips, A. A.; Reviol, W.; Sarantites, D.; Schumaker, M. A.; Seweryniak, D.; Smith, M. B.; Svensson, C. E.; Valiente-Dobon, J. J.; Ward, D.

    2010-12-01

    Two of the four known rotational bands in Rb75 were studied via the Ca40(Ca40,αp)Rb75 reaction at a beam energy of 165 MeV. Transitions were observed up to the maximum spin Imax of the assigned configuration in one case and one transition short of Imax in the other. Lifetimes were determined using the residual Doppler shift attenuation method. The deduced transition quadrupole moments show a small decrease with increasing spin, but remain large at the highest spins. The results obtained are in good agreement with cranked Nilsson-Strutinsky calculations, which indicate that these rotational bands do not terminate, but remain collective at Imax.

  10. N = 151 Pu, Cm and Cf nuclei under rotational stress: Role of higher-order deformations

    DOE PAGES

    Hota, S. S.; Chowdhury, P.; Khoo, T. L.; ...

    2014-10-18

    The fast-rotating N = 151 isotones 245Pu, 247Cm and 249Cf have been studied through inelastic excitation and transfer reactions with radioactive targets. While all have a ground-state band built on a νj15/2[734]9/2− Nilsson configuration, new excited bands have also been observed in each isotone. These odd-N excited bands allow a comparison of the alignment behavior for two different configurations, where the νj15/2 alignment is either blocked or allowed. The effect of higher order deformations is explored through cranking calculations, which help clarify the elusive nature of νj15/2 alignments.

  11. Potential responses of riparian vegetation to dam removal

    USGS Publications Warehouse

    Shafroth, P.B.; Friedman, J.M.; Auble, G.T.; Scott, M.L.; Braatne, J.H.

    2002-01-01

    Throughout the world, riparian habitats have been dramatically modified from their natural condition. Dams are one of the principal causes of these changes, because of their alteration of water and sediment regimes (Nilsson and Berggren 2000). Because of the array of ecological goods and services provided by natural riparian ecosystems (Naiman and Decamps 1997), their conservation and restoration have become the focus of many land and water managers. Efforts to restore riparian habitats and other riverine ecosystems have included the management of flow releases downstream of dams to more closely mimic natural flows (Poff et al. 1997), but dam removal has received little attention as a possible approach to riparian restoration.

  12. CMCSN: Structure and dynamics of water and aqueous solutions in materials science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Car, Roberto; Galli, Giulia; Rehr, John J.

    This award has contributed to building a network of scientists interested in the structure and dynamics of water. The network extends well beyond the PI and the co-PIs and includes both theoreticians and experimentalists. Scientific interactions within this community have been fostered by three workshops supported by the grant. The first workshop was held at Princeton University on December 6-8, 2010. The second workshop was held at the Talaris Conference Center in Seattle on February 10-12, 2012. The third workshop was held at UC Davis on June 19-22, 2013. Each workshop had 40-50 participants and about 20 speakers. The workshops have been very successful and stimulated ongoing discussions within the water community. This debate is lasting beyond the time frame set by the grant. The following events are just a few examples: (i) the month-long activity on "Water: the most anomalous liquid" organized at NORDITA (Stockholm) in October-November 2014 by A. Nilsson and L. Pettersson, who participated in all three CMCSN-sponsored workshops; (ii) the workshop on "ice nucleation" organized by R. Car, P. Debenedetti and F. Stillinger at the Princeton Center for Theoretical Science on April 23-24, 2015; (iii) the 10-day workshop on water organized by R. Car and F. Mallamace at the E. Maiorana Centre in Erice (Sicily) in July 2016, an activity that will morph into a regular summer school of the E. Maiorana Centre in the years to come under the directorship of R. Car, F. Mallamace (U. Messina), A. Nilsson (U. Stockholm) and L. Xu (Beijing U.). All these activities were stimulated by the scientific discussions within the network initiated by this CMCSN grant.

  13. High-spin terminating states in the N = 88 155Ho and 156Er isotones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rees, J. M.; Paul, E. S.; Simpson, J.

    2015-05-01

    The 124Sn(37Cl, 6nγ) fusion-evaporation reaction at a bombarding energy of 180 MeV has been used to significantly extend the excitation level scheme of 155Ho (Z = 67, N = 88). The collective rotational behavior of this nucleus breaks down above spin I ~ 30 and a fully aligned noncollective (band terminating) state has been identified at Iπ = 79/2−. Comparison with cranked Nilsson-Strutinsky calculations also provides evidence for core-excited noncollective states at Iπ = 87/2− and (89/2+) involving particle-hole excitations across the Z = 64 shell gap. A similar core-excited state in 156Er (N = 88) at Iπ = (46+) is also presented.

  14. Triaxial-band structures, chirality, and magnetic rotation in 133La

    DOE PAGES

    Petrache, C. M.; Chen, Q. B.; Guo, S.; ...

    2016-12-05

    The structure of 133La has been investigated using the 116Cd(22Ne,4pn) reaction and the Gammasphere array. Three new bands of quadrupole transitions and one band of dipole transitions are identified and the previously reported level scheme is revised and extended to higher spins. The observed structures are discussed using the cranked Nilsson-Strutinsky formalism, covariant density functional theory, and the particle-rotor model. Triaxial configurations are assigned to all observed bands. For the high-spin bands it is found that rotations around different axes can occur, depending on the configuration. The orientation of the angular momenta of the core and of the active particles is investigated, suggesting chiral rotation for two nearly degenerate dipole bands and magnetic rotation for one dipole band. As a result, it is shown that the h11/2 neutron holes present in the configuration of the nearly degenerate dipole bands have significant angular momentum components not only along the long axis but also along the short axis, contributing to the balance of the angular momentum components along the short and long axes and thus giving rise to a chiral geometry.

  15. Triaxiality and Exotic Rotations at High Spins in 134Ce

    DOE PAGES

    Petrache, C. M.; Guo, S.; Ayangeakaa, A. D.; ...

    2016-06-06

    High-spin states in 134Ce have been investigated using the 116Cd(22Ne,4n) reaction and the Gammasphere array. The level scheme has been extended to an excitation energy of ~30 MeV and spin ~54ℏ. Two new dipole bands and four new sequences of quadrupole transitions were identified. Several new transitions have been added to a number of known bands. One of the strongly populated dipole bands was revised and placed differently in the level scheme, resolving a discrepancy between experiment and model calculations reported previously. Configurations are assigned to the observed bands based on cranked Nilsson-Strutinsky calculations. A coherent understanding of the various excitations, both at low and high spins, is thus obtained, supporting an interpretation in terms of coexistence of stable triaxial, highly deformed, and superdeformed shapes up to very high spins. Rotations around different axes of the triaxial nucleus, and sudden changes of the rotation axis in specific configurations, are identified, further elucidating the nature of high-spin collective excitations in the A = 130 mass region.

  17. Seven-quasiparticle bands in Ce139

    NASA Astrophysics Data System (ADS)

    Chanda, Somen; Bhattacharjee, Tumpa; Bhattacharyya, Sarmishtha; Mukherjee, Anjali; Basu, Swapan Kumar; Ragnarsson, I.; Bhowmik, R. K.; Muralithar, S.; Singh, R. P.; Ghugre, S. S.; Pramanik, U. Datta

    2009-05-01

    The high spin states in the Ce139 nucleus have been studied by in-beam γ-spectroscopic techniques using the reaction Te130(C12,3n)Ce139 at Ebeam=65 MeV. A gamma detector array, consisting of five Compton-suppressed Clover detectors, was used for coincidence measurements. 15 new levels have been proposed and 28 new γ transitions have been assigned to Ce139 on the basis of γγ coincidence data. The level scheme of Ce139 has been extended above the known 70 ns 19/2− isomer up to ~6.1 MeV in excitation energy and 35/2ℏ in spin. The spin-parity assignments for most of the newly proposed levels have been made using the deduced Directional Correlation from Oriented states of nuclei (DCO ratio) and the Polarization Directional Correlation from Oriented states (PDCO ratio) for the de-exciting transitions. The observed level structure has been compared with a large basis shell model calculation and also with the predictions from cranked Nilsson-Strutinsky (CNS) calculations. A general consistency has been observed between these two different theoretical approaches.

  18. A High-resolution Palaeomagnetic Secular Variation Record from the Chukchi Sea, Arctic Ocean for the Last 4200 Years

    NASA Astrophysics Data System (ADS)

    West, G.; O'Regan, M.; Jakobsson, M.; Nilsson, A.; Pearce, C.; Snowball, I.; Wiers, S.

    2017-12-01

    The lack of high-temporal resolution and well-dated palaeomagnetic records from the Arctic Ocean hinders our understanding of geomagnetic field behaviour in the region, and limits the applicability of these records in the development of accurate age models for Arctic Ocean sediments. We present a palaeomagnetic secular variation (PSV) record from a sediment core recovered from the Chukchi Sea, Arctic Ocean during the SWERUS-C3 Leg 2 Expedition. The 8.24-metre-long core was collected at 57 m water depth in the Herald Canyon (72.52° N 175.32° W), and extends to 4200 years BP based on 14 AMS 14C dates and a tephra layer associated with the 3.6 cal ka BP Aniakchak eruption. Palaeomagnetic measurements and magnetic analyses of discrete samples reveal stable characteristic remanent magnetisation directions, and a magnetic mineralogy dominated by magnetite. Centennial to millennial scale declination and inclination features, which correlate well to other Western Arctic records, can be readily identified. The relative palaeointensity record of the core matches well with spherical harmonic field model outputs of pfm9k (Nilsson et al., 2014) and CALS10k.2 (Constable et al. 2016) for the site location. Supported by a robust chronology, the presented high-resolution PSV record can potentially play a key role in constructing a well-dated master chronology for the region.

  19. New method to assess the water vapour permeance of wound coverings.

    PubMed

    Jonkman, M F; Molenaar, I; Nieuwenhuis, P; Bruin, P; Pennings, A J

    1988-05-01

    A new method for assessing the permeability to water vapour of wound coverings is presented, using the evaporimeter developed by Nilsson. This new method combines the water vapour transmission rate (WVTR) and the vapour pressure difference across a wound covering in one absolute measure: the water vapour permeance (WVP). The WVP of a wound covering is the steady flow (g) of water vapour per unit (m2) area of surface in unit (h) time induced by unit (kPa) vapour pressure difference, g.m-2.h-1.kPa-1. Since the WVP of a wound covering is a more accurate measure for the permeability than the WVTR is, it facilitates the prediction of the water exchange of a wound covering in clinical situations.
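
    The definition above reduces to a one-line computation, WVP = WVTR / Δp; a worked example with assumed numbers (the vapour pressures and transmission rate below are illustrative, not measurements from the paper):

      # Water vapour permeance from the definition in the abstract:
      #   WVP [g m^-2 h^-1 kPa^-1] = WVTR [g m^-2 h^-1] / dp [kPa]
      wvtr = 500.0                   # assumed transmission rate, g per m^2 per h
      p_wound, p_ambient = 5.6, 1.4  # assumed vapour pressures, kPa
      wvp = wvtr / (p_wound - p_ambient)
      print(f"WVP = {wvp:.1f} g.m-2.h-1.kPa-1")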

  20. Consistent Pauli reduction on group manifolds

    DOE PAGES

    Baguet, A.; Pope, Christopher N.; Samtleben, H.

    2016-01-01

    We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NSNS sector of supergravity (and more generally the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G×G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk–Schwarz reduction ansatz in double field theory which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S3×S3 and on similar product spaces. The construction is another example of globally geometric non-toroidal compactifications inducing non-geometric fluxes.

  2. Description of rotating N=Z nuclei in terms of isovector pairing

    NASA Astrophysics Data System (ADS)

    Afanasjev, A. V.; Frauendorf, S.

    2005-06-01

    A systematic investigation of the rotating N=Z even-even nuclei in the mass A=68-80 region has been performed within the frameworks of the cranked relativistic mean field, cranked relativistic Hartree-Bogoliubov theories, and cranked Nilsson-Strutinsky approach. Most of the experimental data are well accounted for in the calculations. The present study suggests the presence of strong isovector np pair field at low spin, whose strength is defined by the isospin symmetry. At high spin, the isovector pair field is destroyed and the data are well described by the calculations assuming zero pairing. No clear evidence for the existence of the isoscalar t=0 np pairing has been obtained in the present investigation performed at the mean field level.

  3. Bioreactors for Tissue Engineering of Cartilage

    NASA Astrophysics Data System (ADS)

    Concaro, S.; Gustavson, F.; Gatenholm, P.

    The cartilage regenerative medicine field has evolved during the last decades. The first-generation technology, autologous chondrocyte transplantation (ACT), involved the transplantation of in vitro expanded chondrocytes to cartilage defects. The second generation involves the seeding of chondrocytes in a three-dimensional scaffold. The technique has several potential advantages, such as the possibility of arthroscopic implantation, in vitro pre-differentiation of cells and implant stability among others (Brittberg M, Lindahl A, Nilsson A, Ohlsson C, Isaksson O, Peterson L, N Engl J Med 331(14):889-895, 1994; Henderson I, Francisco R, Oakes B, Cameron J, Knee 12(3):209-216, 2005; Peterson L, Minas T, Brittberg M, Nilsson A, Sjogren-Jansson E, Lindahl A, Clin Orthop (374):212-234, 2000; Nagel-Heyer S, Goepfert C, Feyerabend F, Petersen JP, Adamietz P, Meenen NM, et al. Bioprocess Biosyst Eng 27(4):273-280, 2005; Portner R, Nagel-Heyer S, Goepfert C, Adamietz P, Meenen NM, J Biosci Bioeng 100(3):235-245, 2005; Nagel-Heyer S, Goepfert C, Adamietz P, Meenen NM, Portner R, J Biotechnol 121(4):486-497, 2006; Heyland J, Wiegandt K, Goepfert C, Nagel-Heyer S, Ilinich E, Schumacher U, et al. Biotechnol Lett 28(20):1641-1648, 2006). The nutritional requirements of cells that are synthesizing extra-cellular matrix increase along the differentiation process. The mass transfer must be increased according to the tissue properties. Bioreactors represent an attractive tool to enhance the biochemical and mechanical properties of the engineered tissues by providing adequate mass transfer and physical stimuli. Different reactor systems have been developed during the last decades based on different physical stimulation concepts. Static and dynamic compression, confined and nonconfined compression-based reactors have been described in this review. Perfusion systems represent an attractive way of culturing constructs under dynamic conditions. Several groups showed increased matrix

  4. Photodisintegration cross section of the reaction 4He(γ,n)3He at the giant dipole resonance peak

    NASA Astrophysics Data System (ADS)

    Tornow, W.; Kelley, J. H.; Raut, R.; Rusev, G.; Tonchev, A. P.; Ahmed, M. W.; Crowell, A. S.; Stave, S. C.

    2012-06-01

    The photodisintegration cross section of 4He into a neutron and helion was measured at incident photon energies of 27.0, 27.5, and 28.0 MeV. A high-pressure 4He-Xe gas scintillator served as target and detector while a pure Xe gas scintillator was used for background measurements. A NaI detector in combination with the standard HIγS scintillator paddle system was employed for absolute photon-flux determination. Our data are in good agreement with the theoretical prediction of the Trento group and the recent data of Nilsson [Phys. Rev. C 75, 014007 (2007)] but deviate considerably from the high-precision data of Shima [Phys. Rev. C 72, 044004 (2005)].

  5. Isomer-delayed γ-ray spectroscopy of A = 159-164 midshell nuclei and the variation of K-forbidden E1 transition hindrance factors

    NASA Astrophysics Data System (ADS)

    Patel, Z.; Walker, P. M.; Podolyák, Zs.; Regan, P. H.; Berry, T. A.; Söderström, P.-A.; Watanabe, H.; Ideguchi, E.; Simpson, G. S.; Nishimura, S.; Wu, Q.; Xu, F. R.; Browne, F.; Doornenbal, P.; Lorusso, G.; Rice, S.; Sinclair, L.; Sumikama, T.; Wu, J.; Xu, Z. Y.; Aoi, N.; Baba, H.; Bello Garrote, F. L.; Benzoni, G.; Daido, R.; Dombrádi, Zs.; Fang, Y.; Fukuda, N.; Gey, G.; Go, S.; Gottardo, A.; Inabe, N.; Isobe, T.; Kameda, D.; Kobayashi, K.; Kobayashi, M.; Komatsubara, T.; Kojouharov, I.; Kubo, T.; Kurz, N.; Kuti, I.; Li, Z.; Matsushita, M.; Michimasa, S.; Moon, C.-B.; Nishibata, H.; Nishizuka, I.; Odahara, A.; Şahin, E.; Sakurai, H.; Schaffner, H.; Suzuki, H.; Takeda, H.; Tanaka, M.; Taprogge, J.; Vajta, Zs.; Yagi, A.; Yokoyama, R.

    2017-09-01

    Excited states have been studied in 159Sm, 161Sm, 162Sm (Z = 62), 163Eu (Z = 63), and 164Gd (Z = 64), populated by isomeric decay following 238U projectile fission at RIBF, RIKEN. The isomer half-lives range from 50 ns to 2.6 μs. In comparison with other published data, revised interpretations are proposed for 159Sm and 163Eu. The first data for excited states in 161Sm are presented, where a 2.6-μs isomer is assigned a three-quasiparticle, Kπ = 17/2− structure. The interpretation is supported by multi-quasiparticle Nilsson-BCS calculations, including the blocking of pairing correlations. A consistent set of reduced E1 hindrance factors is obtained. Limited evidence is also reported for isomeric decay in 163Sm, 164Eu, and 165Eu.
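
    The "reduced E1 hindrance factors" quoted above follow the standard K-isomer definitions: the hindrance F is the partial γ half-life divided by the Weisskopf estimate, and the reduced hindrance is f_nu = F**(1/nu) with nu = ΔK − λ degrees of K forbiddenness. A minimal sketch, with all numbers assumed for illustration rather than taken from the paper:

      def reduced_hindrance(t_partial, t_weisskopf, delta_k, multipole_order):
          # F = T1/2(partial) / T1/2(Weisskopf); f_nu = F**(1/nu), nu = dK - lambda.
          F = t_partial / t_weisskopf
          nu = delta_k - multipole_order
          return F ** (1.0 / nu)

      # Assumed values: a 2.6 us partial half-life against a 1e-14 s Weisskopf
      # estimate, with dK = 7 for an E1 (lambda = 1) transition.
      print(reduced_hindrance(2.6e-6, 1.0e-14, 7, 1))   # ~25, a typical f_nu scale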

  6. Fission barriers at the end of the chart of the nuclides

    NASA Astrophysics Data System (ADS)

    Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; Iwamoto, Akira; Mumpower, Matthew

    2015-02-01

    We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than 5 000 000 different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ɛ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ɛ, γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about 1 MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. These studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
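
    To make the saddle-point determination concrete, here is a toy sketch of the minimax ("flooding") idea on a two-parameter grid, a stand-in for the five-dimensional surfaces described above; the potential, grid, and watershed search below are invented for illustration and are not the authors' code:

      import numpy as np
      from collections import deque

      def connected(open_mask, start, target):
          # Breadth-first search over currently flooded grid cells.
          q, seen = deque([start]), {start}
          while q:
              i, j = q.popleft()
              if (i, j) == target:
                  return True
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nb = (i + di, j + dj)
                  if (0 <= nb[0] < open_mask.shape[0] and 0 <= nb[1] < open_mask.shape[1]
                          and open_mask[nb] and nb not in seen):
                      seen.add(nb)
                      q.append(nb)
          return False

      def saddle_energy(V, start, target):
          # Flood cells in order of increasing energy; the level at which the
          # two minima first join is the minimax (saddle) energy.
          open_mask = np.zeros(V.shape, bool)
          for flat in np.argsort(V, axis=None):
              ij = np.unravel_index(flat, V.shape)
              open_mask[ij] = True
              if open_mask[start] and open_mask[target] and connected(open_mask, start, target):
                  return V[ij]

      # Toy surface: elongation x with two minima, stiff in asymmetry y.
      x = np.linspace(0.0, 2.0, 81)
      y = np.linspace(-1.0, 1.0, 41)
      X, Y = np.meshgrid(x, y, indexing="ij")
      V = X**2 * (X - 1.6) ** 2 + 2.0 * Y**2

      gs, valley = (0, 20), (64, 20)        # cells at (x=0, y=0) and (x=1.6, y=0)
      print(saddle_energy(V, gs, valley) - V[gs])   # ~0.41 in toy units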

  8. In-beam spectroscopy of medium- and high-spin states in 133Ce

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayangeakaa, A. D.; Garg, U.; Petrache, C. M.

    2016-05-01

    Medium- and high-spin states in 133Ce were investigated using the 116Cd(22Ne, 5n) reaction and the Gammasphere array. The level scheme was extended up to an excitation energy of ~22.8 MeV and spin 93/2ℏ. Eleven bands of quadrupole transitions and two new dipole bands are identified. The connections to low-lying states of the previously known, high-spin triaxial bands were firmly established, thus fixing the excitation energy and, in many cases, the spin parity of the levels. Based on comparisons with cranked Nilsson-Strutinsky calculations and tilted axis cranking covariant density functional theory, it is shown that all observed bands are characterized by pronounced triaxiality. Competing multiquasiparticle configurations are found to contribute to a rich variety of collective phenomena in this nucleus.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, Kristin H.; Wahl, Karen L.

    The concept of rapid microorganism identification using matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) dates back to the mid-1990s. Prior to 1998, researchers relied on visual inspection in an effort to demonstrate feasibility of MALDI-MS for bacterial identification (Holland, Wilkes et al. 1996; Krishnamurthy and Ross 1996; Claydon, Davey et al. 1996). In general, researchers in these early studies visually compared the biomarker intensity profiles between different organisms and between replicates of the same organism to show that MALDI signatures are unique and reproducible. Manual tabulation and comparison of potential biomarker mass values observed for different organisms was used by numerous researchers to qualitatively characterize microorganisms using MALDI-MS spectra (e.g., Lynn, Chung et al. 1999; Winkler, Uher et al. 1999; Ryzhov, Hathout et al. 2000; Nilsson 1999).
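
    The manual tabulation described above amounts to matching peak mass lists within a tolerance; a minimal sketch with hypothetical biomarker masses (the tolerance and both lists are invented):

      def match_fraction(masses_a, masses_b, tol_da=2.0):
          # Fraction of biomarker masses in list A that find a partner in
          # list B within +/- tol_da Daltons.
          hits = sum(any(abs(a - b) <= tol_da for b in masses_b) for a in masses_a)
          return hits / len(masses_a)

      # Hypothetical biomarker mass lists (Da) for two organisms:
      org1 = [4365.0, 5380.5, 6255.0, 7274.0, 9742.0]
      org2 = [4364.2, 5382.0, 6410.0, 7273.1, 10300.0]
      print(match_fraction(org1, org2))   # 0.6 -> three shared biomarkers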

  10. [Characteristics of the genetic structure of parasite and host populations exemplified by helminths from the moor frog Rana arvalis Nilsson].

    PubMed

    Zhigalev, O N

    2010-01-01

    The genetic structure of populations of four helminth species from the moor frog Rana arvalis has been studied by gel electrophoresis and compared with the population-genetic structure of the host. Compared with the host, the parasites show a more pronounced deviation from equilibrium genotypic frequencies and a higher level of interpopulation genetic differences. The genetic variability indices in three of the four frog helminths examined are lower than those in the host; moreover, they are lower than the averages typical of free-living invertebrates, which contradicts the view that these helminths are polyhostal and widely distributed.
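
    The "equilibrium genotypic frequencies" referred to above are Hardy-Weinberg expectations; a minimal chi-square sketch with hypothetical genotype counts (none of these numbers come from the paper):

      def hardy_weinberg_chi2(n_aa, n_ab, n_bb):
          # Chi-square deviation of observed genotype counts from
          # Hardy-Weinberg expectations at one two-allele locus.
          n = n_aa + n_ab + n_bb
          p = (2 * n_aa + n_ab) / (2 * n)
          expected = [p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n]
          observed = [n_aa, n_ab, n_bb]
          return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

      # Hypothetical counts for a parasite and for the frog host:
      print(hardy_weinberg_chi2(30, 20, 50))   # ~34: strong deviation
      print(hardy_weinberg_chi2(36, 48, 16))   # ~0: at equilibrium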

  11. Fission barriers at the end of the chart of the nuclides

    DOE PAGES

    Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; ...

    2015-02-12

    We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model (FRLDM) with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than five million different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ϵ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ϵ, γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about one MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. These studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.

  13. Evolution of the ATLAS PanDA Production and Distributed Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maeno, T.; De, K.; Wenaus, T.

    2012-12-13

    The PanDA (Production and Distributed Analysis) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at LHC data processing scale. PanDA has performed well with high reliability and robustness during the two years of LHC data-taking, while being actively evolved to meet the rapidly changing requirements for analysis use cases. We will present an overview of system evolution including automatic rebrokerage and reattempt for analysis jobs, adaptation for the CernVM File System, support for the multi-cloud model through which Tier-2 sites act as members of multiple clouds, pledged resource management and preferential brokerage, and monitoring improvements. We will also describe results from the analysis of two years of PanDA usage statistics, current issues, and plans for the future.

  14. Simple Interpretation of Proton-Neutron Interactions in Rare Earth Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oktem, Y.; Cakirli, R. B.

    2007-04-23

    Empirical values of the average interactions of the last two protons and last two neutrons, δVpn, which can be obtained from double differences of binding energies, provide significant information about nuclear structure. Studies of δVpn showed striking behavior across major shell gaps and the relation of proton-neutron (p-n) interaction strengths to the increasing collectivity and onset of deformation in nuclei. Here we focus on the strong regularity of the δVpn values in the A ≈ 150-180 mass region. Experimentally, for each nucleus the valence p-n interaction strengths increase systematically with neutron number and decrease at the last observed neutron number. These experimental results give nearly perfect parallel trajectories. A microscopic interpretation with a zero-range δ-interaction in a Nilsson basis gives reasonable agreement for Er-W, but more significant discrepancies appear for Gd and Dy.
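
    The double difference mentioned above has a standard even-even form, δVpn(Z,N) = 1/4 [B(Z,N) − B(Z,N−2) − B(Z−2,N) + B(Z−2,N−2)]; a sketch with placeholder binding energies (the values below are invented, not AME data):

      def delta_v_pn(B, Z, N):
          # Average interaction of the last two protons with the last two
          # neutrons, from the double difference of binding energies:
          # dVpn = 1/4 [B(Z,N) - B(Z,N-2) - B(Z-2,N) + B(Z-2,N-2)]
          return 0.25 * (B[(Z, N)] - B[(Z, N - 2)] - B[(Z - 2, N)] + B[(Z - 2, N - 2)])

      # Hypothetical binding energies in MeV (placeholders only):
      B = {(68, 98): 1350.0, (68, 96): 1333.0, (66, 98): 1337.0, (66, 96): 1321.2}
      print(f"dVpn = {delta_v_pn(B, 68, 98) * 1000:.0f} keV")   # 300 keV here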

  15. Nuclear Structure in China 2010

    NASA Astrophysics Data System (ADS)

    Bai, Hong-Bo; Meng, Jie; Zhao, En-Guang; Zhou, Shan-Gui

    2011-08-01

    Personal view on nuclear physics research / Jie Meng -- High-spin level structures in [symbol]Zr / X. P. Cao ... [et al.] -- Constraining the symmetry energy from the neutron skin thickness of tin isotopes / Lie-Wen Chen ... [et al.] -- Wobbling rotation in atomic nuclei / Y. S. Chen and Zao-Chun Gao -- The mixing of scalar mesons and the possible nonstrange dibaryons / L. R. Dai ... [et al.] -- Net baryon productions and gluon saturation in the SPS, RHIC and LHC energy regions / Sheng-Qin Feng -- Production of heavy isotopes with collisions between two actinide nuclides / Z. Q. Feng ... [et al.] -- The projected configuration interaction method / Zao-Chun Gao and Yong-Shou Chen -- Applications of Nilsson mean-field plus extended pairing model to rare-earth nuclei / Xin Guan ... [et al.] -- Complex scaling method and the resonant states / Jian-You Guo ... [et al.] -- Probing the equation of state by deep sub-barrier fusion reactions / Hong-Jun Hao and Jun-Long Tian -- Doublet structure study in A[symbol]105 mass region / C. Y. He ... [et al.] -- Rotational bands in transfermium nuclei / X. T. He -- Shape coexistence and shape evolution [symbol]Yb / H. Hua ... [et al.] -- Multistep shell model method in the complex energy plane / R. J. Liotta -- The evolution of protoneutron stars with kaon condensate / Ang Li -- High spin structures in the [symbol]Lu nucleus / Li Cong-Bo ... [et al.] -- Nuclear stopping and equation of state / QingFeng Li and Ying Yuan -- Covariant description of the low-lying states in neutron-deficient Kr isotopes / Z. X. Li ... [et al.] -- Isospin corrections for superallowed [symbol] transitions / HaoZhao Liang ... [et al.] -- The positive-parity band structures in [symbol]Ag / C. Liu ... [et al.] -- New band structures in odd-odd [symbol]I and [symbol]I / Liu GongYe ... [et al.] -- The sd-pair shell model and interacting boson model / Yan-An Luo ... [et al.] -- Cross-section distributions of fragments in the calcium isotopes projectile

  16. Single-particle and collective motion in unbound deformed 39Mg

    NASA Astrophysics Data System (ADS)

    Fossez, K.; Rotureau, J.; Michel, N.; Liu, Quan; Nazarewicz, W.

    2016-11-01

    Background: Deformed neutron-rich magnesium isotopes constitute a fascinating territory where the interplay between collective rotation and single-particle motion is strongly affected by the neutron continuum. The unbound fp-shell nucleus 39Mg is an ideal candidate to study this interplay. Purpose: In this work, we predict the properties of low-lying resonant states of 39Mg, using a suite of realistic theoretical approaches rooted in the open quantum system framework. Method: To describe the spectrum and decay modes of 39Mg we use the conventional shell model, Gamow shell model, resonating group method, density matrix renormalization group method, and the nonadiabatic particle-plus-rotor model formulated in the Berggren basis. Results: The unbound ground state of 39Mg is predicted to be either a Jπ = 7/2− state or a 3/2− state. A narrow Jπ = 7/2− ground-state candidate exhibits a resonant structure reminiscent of that of its one-neutron halo neighbor 37Mg, which is dominated by the f7/2 partial wave at short distances and a p3/2 component at large distances. A Jπ = 3/2− ground-state candidate is favored by the large deformation of the system. It can be associated with the 1/2−[321] Nilsson orbital dominated by the ℓ = 1 wave; hence its predicted width is large. The excited Jπ = 1/2− and 5/2− states are expected to be broad resonances, while the Jπ = 9/2− and 11/2− members of the ground-state rotational band are predicted to have very small neutron decay widths. Conclusion: We demonstrate that the subtle interplay between deformation, shell structure, and continuum coupling can result in a variety of excitations in an unbound nucleus just outside the neutron drip line.
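
    The "ℓ = 1, hence large width" argument above rests on centrifugal-barrier penetrability; a sketch comparing p- and f-wave penetrabilities for a neutral particle (the decay energy, channel radius, and use of the bare neutron mass are illustrative assumptions, not values from the paper):

      import numpy as np
      from scipy.special import spherical_jn, spherical_yn

      def penetrability(l, k, R):
          # Neutron (no Coulomb) penetrability P_l = x / (F_l^2 + G_l^2),
          # x = kR, with Riccati-Bessel F_l = x j_l(x), G_l = -x y_l(x);
          # for l = 0 this reduces to P_0 = kR.
          x = k * R
          F = x * spherical_jn(l, x)
          G = -x * spherical_yn(l, x)
          return x / (F**2 + G**2)

      E = 0.5e6 * 1.602e-19            # assumed 0.5 MeV decay energy, in J
      R = 4.0e-15                      # assumed 4 fm channel radius
      hbar, m_n = 1.0546e-34, 1.6749e-27
      k = np.sqrt(2 * m_n * E) / hbar
      for l in (1, 3):
          print(l, penetrability(l, k, R))   # p-wave exceeds f-wave by ~10^3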

  17. New species and host plants of Anastrepha (Diptera: Tephritidae) primarily from Peru and Bolivia.

    PubMed

    Norrbom, Allen L; Rodriguez, Erick J; Steck, Gary J; Sutton, Bruce A; Nolazco, Norma

    2015-11-16

    Twenty-eight new species of Anastrepha are described and illustrated: A. acca (Bolivia, Peru), A. adami (Peru), A. amplidentata (Bolivia, Peru), A. annonae (Peru), A. breviapex (Peru), A. caballeroi (Peru), A. camba (Bolivia, Peru), A. cicra (Bolivia, Peru), A. disjuncta (Peru), A. durantae (Peru), A. echaratiensis (Peru), A. eminens (Peru), A. ericki (Peru), A. gonzalezi (Bolivia, Peru), A. guevarai (Peru), A. gusi (Peru), A. kimi (Colombia, Peru), A. korytkowskii (Bolivia, Peru), A. latilanceola (Bolivia, Peru), A. melanoptera (Peru), A. mollyae (Bolivia, Peru), A. perezi (Peru), A. psidivora (Peru), A. robynae (Peru), A. rondoniensis (Brazil, Peru), A. tunariensis (Bolivia, Peru), A. villosa (Bolivia), and A. zacharyi (Peru). The following host plant records are reported: A. amplidentata from Spondias mombin L. (Anacardiaceae); A. caballeroi from Quararibea malacocalyx A. Robyns & S. Nilsson (Malvaceae); A. annonae from Annona mucosa Jacq. and Annona sp. (Annonaceae); A. durantae from Duranta peruviana Moldenke (Verbenaceae); and A. psidivora from Psidium guajava L. (Myrtaceae).

  18. New isomer and decay half-life of 115Ru

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurpeta, J.; Plochocki, A.; Rissanen, J.

    2010-12-15

    Exotic, neutron-rich nuclei of mass A=115 produced in proton-induced fission of 238U were extracted using the IGISOL mass separator. The beam of isobars was transferred to the JYFLTRAP Penning trap system for further separation to the isotopic level. Monoisotopic samples of 115Ru nuclei were used for γ and β coincidence spectroscopy. In 115Ru we have observed excited levels, including an isomer with a half-life of 76(6) ms and (7/2−) spin and parity. The first excited 61.7-keV level in 115Ru, with spin and parity (3/2+), may correspond to an oblate 3/2+[431] Nilsson orbital. A half-life of 318(19) ms for the β− decay of the (1/2+) ground state in 115Ru has been firmly established in two independent measurements, a value which is significantly shorter than that previously reported.

  19. Levels in 227Ac populated in the 230Th(p, α) reaction

    NASA Astrophysics Data System (ADS)

    Burke, D. G.; Garrett, P. E.; Qu, Tao

    2003-09-01

    The 230,232Th(p, α)227,229Ac reactions were studied using 20 MeV protons and a magnetic spectrograph to analyze the reaction products. Relative populations of levels in 229Ac correlated well with previously published (t, α) results for the same final levels, showing that the similarity of the two reactions observed empirically in the deformed rare-earth region extends to the actinides. The most strongly populated level in 227Ac is at 639 keV, and is assigned as the 1/2+[400] bandhead. The 435 keV level, previously adopted as the 1/2+[660] bandhead, also has a significant intensity that is attributed to ΔN = 2 mixing between these two K = 1/2 proton orbitals. The ΔN = 2 matrix element estimated from these data is ~80 keV, similar to values observed for the same two Nilsson states as neutron orbitals in the dysprosium isotopes.
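
    The quoted matrix element follows from two-state mixing; in the weak-mixing limit V ≈ a·ΔE, where a² is the admixed fraction of 1/2+[400] strength in the nominal 1/2+[660] level. Only the 639 − 435 = 204 keV spacing below comes from the abstract; the admixed fraction is assumed purely for illustration:

      import math

      def mixing_matrix_element(delta_e_kev, admixed_fraction):
          # Perturbative two-state mixing: V ~ a * dE for weak mixing,
          # where a**2 is the admixed intensity fraction.
          return math.sqrt(admixed_fraction) * delta_e_kev

      # Assumed admixed fraction of 0.15, with the 204 keV level spacing:
      print(mixing_matrix_element(639 - 435, 0.15))   # ~79 keV, the quoted scale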

  20. Triplet correlation functions in liquid water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in; Singh, Murari

    Triplet correlations have been shown to play a crucial role in the transformation of simple liquids to anomalous tetrahedral fluids [M. Singh, D. Dhabal, A. H. Nguyen, V. Molinero, and C. Chakravarty, Phys. Rev. Lett. 112, 147801 (2014)]. Here we examine triplet correlation functions for water, arguably the most important tetrahedral liquid, under ambient conditions, using configurational ensembles derived from molecular dynamics (MD) simulations and reverse Monte Carlo (RMC) datasets fitted to experimental scattering data. Four different RMC data sets with widely varying hydrogen-bond topologies fitted to neutron and x-ray scattering data are considered [K. T. Wikfeldt, M. Leetmaa, M. P. Ljungberg, A. Nilsson, and L. G. M. Pettersson, J. Phys. Chem. B 113, 6246 (2009)]. Molecular dynamics simulations are performed for two rigid-body effective pair potentials (SPC/E and TIP4P/2005) and the monatomic water (mW) model. Triplet correlation functions are compared with other structural measures for tetrahedrality, such as the O–O–O angular distribution function and the local tetrahedral order distributions. In contrast to the pair correlation functions, which are identical for all the RMC ensembles, the O–O–O triplet correlation function can discriminate between ensembles with different degrees of tetrahedral network formation, with the maximally symmetric, tetrahedral SYM dataset displaying distinct signatures of tetrahedrality similar to those obtained from atomistic simulations of the SPC/E model. Triplet correlations from the RMC datasets conform closely to the Kirkwood superposition approximation, while those from MD simulations show deviations within the first two neighbour shells. The possibilities for experimental estimation of triplet correlations of water and other tetrahedral liquids are discussed.
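
    The Kirkwood superposition approximation tested above writes the triplet function as a product of pair functions, g3(r12, r13, r23) ≈ g2(r12) g2(r13) g2(r23); a sketch evaluating it with a toy O-O pair correlation (the Gaussian-peak g2 below is a stand-in, not the experimental g_OO of water):

      import numpy as np

      def g3_kirkwood(g2, r12, r13, r23):
          # Kirkwood superposition: g3 ~ product of the three pair values.
          return g2(r12) * g2(r13) * g2(r23)

      # Toy O-O pair correlation with a first peak near 2.8 Angstrom:
      r_grid = np.linspace(2.0, 8.0, 601)
      g_tab = 1.0 + 1.5 * np.exp(-((r_grid - 2.8) / 0.25) ** 2)
      g2 = lambda r: np.interp(r, r_grid, g_tab)

      # Equilateral O-O-O triplet at the hydrogen-bond distance:
      print(g3_kirkwood(g2, 2.8, 2.8, 2.8))   # (1 + 1.5)**3 ~ 15.6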

  1. β-decay scheme of 140Te to 140I: Suppression of Gamow-Teller transitions between the neutron h9/2 and proton h11/2 partner orbitals

    NASA Astrophysics Data System (ADS)

    Moon, B.; Moon, C.-B.; Odahara, A.; Lozeva, R.; Söderström, P.-A.; Browne, F.; Yuan, C.; Yagi, A.; Hong, B.; Jung, H. S.; Lee, P.; Lee, C. S.; Nishimura, S.; Doornenbal, P.; Lorusso, G.; Sumikama, T.; Watanabe, H.; Kojouharov, I.; Isobe, T.; Baba, H.; Sakurai, H.; Daido, R.; Fang, Y.; Nishibata, H.; Patel, Z.; Rice, S.; Sinclair, L.; Wu, J.; Xu, Z. Y.; Yokoyama, R.; Kubo, T.; Inabe, N.; Suzuki, H.; Fukuda, N.; Kameda, D.; Takeda, H.; Ahn, D. S.; Shimizu, Y.; Murai, D.; Bello Garrote, F. L.; Daugas, J. M.; Didierjean, F.; Ideguchi, E.; Ishigaki, T.; Morimoto, S.; Niikura, M.; Nishizuka, I.; Komatsubara, T.; Kwon, Y. K.; Tshoo, K.

    2017-07-01

    We report for the first time the β-decay scheme of 140Te (Z = 52) to 140I (Z = 53), with a specific focus on the Gamow-Teller strength along the N = 87 isotones. These results were obtained in an experiment performed at the Radioactive Ion Beam Factory (RIBF), RIKEN, where the parent nuclide, 140Te, was produced through the in-flight fission of a 238U beam at 345 MeV per nucleon impinging on a 9Be target. Based on data from the high-efficiency γ-ray spectrometer, EUROBALL-RIKEN Cluster Array (EURICA), we constructed a decay scheme of 140I. The half-life of 140Te has been determined to be 350(5) ms. A level at 926 keV has been assigned as a (1+) state based on the log ft value of 4.89(6). This (1+) state, commonly observed in odd-odd nuclei, can be interpreted in terms of the πh11/2νh9/2 configuration formed by the Gamow-Teller transition between a neutron in the h9/2 orbital and a proton in the h11/2 orbital. We observe a sharp contrast in this type of β-decay branching to the lower-lying 1+ states between 140I and 136I, with a large reduction as the number of neutrons increases. This is in contrast to the prediction of large-scale shell-model calculations. To investigate this type of suppression, results of Nilsson model calculations will be discussed. Along the isotones with N = 87, we discuss a characteristic feature of the Gamow-Teller distributions to 1+ states with respect to the isospin difference.
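
    For orientation, a log ft value such as the 4.89(6) quoted for the 926 keV level combines the partial half-life of the branch, t = T1/2/BR, with the phase-space (Fermi) integral f. A schematic computation in which the branching ratio and f value are assumed purely for illustration:

      import math

      half_life = 0.350    # s, half-life of 140Te from the abstract
      branching = 0.10     # assumed branching ratio to the 926 keV level
      f_value = 2.2e4      # assumed Fermi integral for this transition

      t_partial = half_life / branching
      log_ft = math.log10(f_value * t_partial)
      print(f"log ft = {log_ft:.2f}")   # ~4.89 with these assumed inputs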

  2. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zucca, J J; Walter, W R; Rodgers, A J

    2008-11-19

    finite difference methods (e.g. Pitarka, 1999; Nilsson et al., 2007). The ability to compute seismic observables using a 3D model is only half of the challenge; models must be developed that accurately represent true Earth structure. Indeed, advances in seismic imaging have followed improvements in 3D computing capability (e.g. Tromp et al., 2005; Rawlinson and Urvoy, 2006). Advances in seismic imaging methods have been fueled in part by theoretical developments and the introduction of novel approaches for combining different seismological observables, both of which can increase the sensitivity of observations to Earth structure. Examples of such developments are finite-frequency sensitivity kernels for body-wave tomography (e.g. Marquering et al., 1998; Montelli et al., 2004) and joint inversion of receiver functions and surface wave group velocities (e.g. Julia et al., 2000).

  3. Configuration-constrained cranking Hartree-Fock pairing calculations for sidebands of nuclei

    NASA Astrophysics Data System (ADS)

    Liang, W. Y.; Jiao, C. F.; Wu, Q.; Fu, X. M.; Xu, F. R.

    2015-12-01

    Background: Nuclear collective rotations have been successfully described by the cranking Hartree-Fock-Bogoliubov (HFB) model. However, for rotational sidebands, which are built on intrinsic excited configurations, it may not be easy to find converged cranking HFB solutions. The nonconservation of the particle number in the BCS pairing is another shortcoming. To improve the pairing treatment, a particle-number-conserving (PNC) pairing method was suggested, but the existing PNC calculations were performed within a phenomenological one-body potential (e.g., Nilsson or Woods-Saxon) in which one has to deal with the double-counting problem. Purpose: The present work aims at an improved description of nuclear rotations, particularly for the rotations of excited configurations, i.e., sidebands. Methods: We developed a configuration-constrained cranking Skyrme Hartree-Fock (SHF) calculation with the pairing correlation treated by the PNC method. The PNC pairing takes the philosophy of the shell model, which diagonalizes the Hamiltonian in a truncated model space. The cranked deformed SHF basis provides a small but efficient model space for the PNC diagonalization. Results: We have applied the present method to calculations of the collective rotations of hafnium isotopes for both ground-state bands and sidebands, reproducing well the experimental observations. The first up-bendings observed in the yrast bands of the hafnium isotopes are reproduced, and second up-bendings are predicted. Calculations for rotational bands built on broken-pair excited configurations agree well with experimental data. The band mixing between the two Kπ=6+ bands observed in 176Hf and the K purity of the 178Hf rotational band built on the famous 31 yr Kπ=16+ isomer are discussed. Conclusions: The developed configuration-constrained cranking calculation has proved to be a powerful tool for describing both the yrast bands and sidebands of deformed nuclei. The analyses of rotational moments of inertia
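
    The PNC philosophy described here, diagonalizing the pairing Hamiltonian in a truncated many-body space with the particle number fixed, can be illustrated with a constant-G pairing force in a seniority-zero space. A generic toy sketch (level energies and G are hypothetical; this is not the authors' code):

      import itertools
      import numpy as np

      eps = np.array([0.0, 0.5, 1.1, 1.8, 2.6])  # hypothetical doubly degenerate levels (MeV)
      G = 0.4                                    # hypothetical pairing strength (MeV)
      n_pairs = 2

      # Basis: all placements of n_pairs pairs on distinct levels (particle number fixed).
      basis = list(itertools.combinations(range(len(eps)), n_pairs))
      dim = len(basis)
      H = np.zeros((dim, dim))
      for i, occ_i in enumerate(basis):
          H[i, i] = 2.0 * eps[list(occ_i)].sum() - G * n_pairs
          for j, occ_j in enumerate(basis):
              # Pair scattering connects configurations differing by one pair.
              if i != j and len(set(occ_i) ^ set(occ_j)) == 2:
                  H[i, j] = -G

      E = np.linalg.eigvalsh(H)
      print("ground-state energy (MeV):", E[0])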

  4. Monte Carlo Simulations for VLBI2010

    NASA Astrophysics Data System (ADS)

    Wresnik, J.; Böhm, J.; Schuh, H.

    2007-07-01

    Monte Carlo simulations are carried out at the Institute of Geodesy and Geophysics (IGG), Vienna, and at Goddard Space Flight Center (GSFC), Greenbelt (USA), with the goal to design a new geodetic Very Long Baseline Interferometry (VLBI) system. Influences of the schedule, the network geometry and the main stochastic processes on the geodetic results are investigated. Therefore schedules are prepared with the software package SKED (Vandenberg 1999), and different strategies are applied to produce temporally very dense schedules which are compared in terms of baseline length repeatabilities. For the simulation of VLBI observations a Monte Carlo Simulator was set up which creates artificial observations by randomly simulating wet zenith delay and clock values as well as additive white noise representing the antenna errors. For the simulation at IGG the VLBI analysis software OCCAM (Titov et al. 2004) was adapted. Random walk processes with power spectrum densities of 0.7 and 0.1 psec²/sec are used for the simulation of wet zenith delays. The clocks are simulated with Allan Standard Deviations of 1*10^-14 @ 50 min and 2*10^-15 @ 15 min, and three levels of white noise (4 psec, 8 psec, and 16 psec) are added to the artificial observations. The variations of the power spectrum densities of the clocks and wet zenith delays, and the application of different white noise levels, show clearly that the wet delay is the critical factor for the improvement of the geodetic VLBI system. At GSFC the software CalcSolve is used for the VLBI analysis; therefore a comparison between the software packages OCCAM and CalcSolve was done with simulated data. For further simulations the wet zenith delay was modeled by a turbulence model; these data were provided by T. Nilsson and added to the simulation work. Different schedules have been run.
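
    The stochastic ingredients are straightforward to reproduce in outline: a random walk whose increments have variance PSD·Δt for the wet zenith delay, plus additive white noise for the antenna errors. A minimal sketch using the quoted numbers (clock simulation from Allan deviations is omitted for brevity):

      import numpy as np

      rng = np.random.default_rng(42)

      dt = 60.0            # observation interval (s)
      n = 1440             # one day of epochs
      psd = 0.7            # random-walk PSD of the wet zenith delay (psec^2/sec)
      white = 8.0          # white-noise level (psec)

      # Random walk: cumulative sum of increments with variance psd * dt.
      wet_delay = np.cumsum(rng.normal(0.0, np.sqrt(psd * dt), n))

      # Artificial observations: wet delay plus additive white noise.
      obs = wet_delay + rng.normal(0.0, white, n)
      print("simulated delay spread (psec):", round(obs.std(), 1))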

  5. Plasma 1-carbon metabolites and academic achievement in 15-yr-old adolescents

    PubMed Central

    Nilsson, Torbjörn K.; Hurtig-Wennlöf, Anita; Sjöström, Michael; Herrmann, Wolfgang; Obeid, Rima; Owen, Jennifer R.; Zeisel, Steven

    2015-01-01

    Academic achievement in adolescents is correlated with 1-carbon metabolism (1-CM), as folate intake is positively related and total plasma homocysteine (tHcy) negatively related to academic success. Because another 1-CM nutrient, choline, is essential for fetal neurocognitive development, we hypothesized that choline and betaine could also be positively related to academic achievement in adolescents. In a sample of 15-yr-old children (n = 324), we measured plasma concentrations of homocysteine, choline, and betaine and genotyped them for 2 polymorphisms with effects on 1-CM: methylenetetrahydrofolate reductase (MTHFR) 677C>T, rs1801133, and phosphatidylethanolamine N-methyltransferase (PEMT), rs12325817 (G>C). The sum of school grades in 17 major subjects was used as an outcome measure for academic achievement. Lifestyle and family socioeconomic status (SES) data were obtained from questionnaires. Plasma choline was significantly and positively associated with academic achievement independent of SES factors (paternal education and income, maternal education and income, smoking, school) and of folate intake (P = 0.009, R2 = 0.285). With the addition of the PEMT rs12325817 polymorphism, the association was only marginally changed. Plasma betaine concentration, tHcy, and the MTHFR 677C>T polymorphism did not affect academic achievement in any tested model involving choline. Dietary intake of choline is marginal in many adolescents and may be a public health concern.—Nilsson, T. K., Hurtig-Wennlöf, A., Sjöström, M., Herrmann, W., Obeid, R., Owen, J. R., Zeisel, S. Plasma 1-carbon metabolites and academic achievement in 15-yr-old adolescents. PMID:26728177

  6. Flynn effects on sub-factors of episodic and semantic memory: parallel gains over time and the same set of determining factors.

    PubMed

    Rönnlund, Michael; Nilsson, Lars-Göran

    2009-09-01

    The study examined the extent to which time-related gains in cognitive performance, so-called Flynn effects, generalize across sub-factors of episodic memory (recall and recognition) and semantic memory (knowledge and fluency). We conducted time-sequential analyses of data drawn from the Betula prospective cohort study, involving four age-matched samples (35-80 years; N=2996) tested on the same battery of memory tasks on one of four occasions (1989, 1995, 1999, and 2004). The results demonstrate substantial time-related improvements on recall and recognition as well as on fluency and knowledge, with a trend of larger gains on semantic as compared with episodic memory [Rönnlund, M., & Nilsson, L.-G. (2008). The magnitude, generality, and determinants of Flynn effects on forms of declarative memory: Time-sequential analyses of data from a Swedish cohort study. Intelligence], but highly similar gains across the sub-factors. Finally, the association with markers of environmental change was similar, with evidence that historical increases in quantity of schooling were a main driving force behind the gains, on both the episodic and semantic sub-factors. The results obtained are discussed in terms of the brain regions involved.

  7. Patient positioning using artificial intelligence neural networks, trained magnetic field sensors and magnetic implants.

    PubMed

    Lennernäs, B; Edgren, M; Nilsson, S

    1999-01-01

    The purpose of this study was to evaluate the precision of a sensor, and to ascertain the maximum workable distance between the sensor and the magnet, in a magnetic positioning system for external beam radiotherapy using a trained artificial intelligence neural network for position determination. Magnetic positioning for radiotherapy, previously described by Lennernäs and Nilsson, is a functional technique, but it is time-consuming, the sensors are large, and the usable distance between the sensor and the magnetic implant is short. This paper presents a new technique for positioning, using an artificial intelligence neural network trained to position the magnetic implant with at least 0.5 mm resolution in the X and Y dimensions. The possibility of using the system for determination in the Z dimension, that is, the distance between the magnet and the sensor, was also investigated. After training, the system positioned the magnet with a mean error of at most 0.15 mm in all dimensions at distances of up to 13 mm from the sensor. Of 400 test positions, 8 determinations had an error larger than 0.5 mm, with a maximum of 0.55 mm. A position was determined in approximately 0.01 s.
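
    A modern analogue of the trained network described here, regressing implant coordinates from raw magnetic-sensor readings, can be sketched with scikit-learn. The sensor model and training data below are synthetic stand-ins, since the original sensor characteristics are not given in the abstract:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)

      # Synthetic training set: implant positions (x, y, z in mm) and a toy
      # 8-channel response that falls off with distance like a dipole field.
      positions = rng.uniform([-10.0, -10.0, 2.0], [10.0, 10.0, 13.0], size=(400, 3))
      sensor_xy = rng.uniform(-12.0, 12.0, size=(8, 2))   # hypothetical sensor layout

      def response(p):
          d = np.hypot(*(sensor_xy - p[:2]).T) + p[2]
          return 1.0 / d ** 3

      X = np.array([response(p) for p in positions])
      X /= X.std()                                        # simple feature scaling

      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
      net.fit(X, positions)
      err = np.abs(net.predict(X) - positions)
      print("mean absolute training error (mm):", err.mean())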

  8. Proceedings of the seminar on Leak-Before-Break: Progress in regulatory policies and supporting research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashima, K.; Wilkowski, G.M.

    1988-03-01

    The third in a series of international Leak-Before-Break (LBB) Seminars supported in part by the US Nuclear Regulatory Commission was held at TEPCO Hall in the Tokyo Electric Power Company's (TEPCO) Electric Power Museum on May 14 and 15, 1987. The seminar updated the international policies and supporting research on LBB. Attendees included representatives from regulatory agencies, electric utility representatives, fabricators of nuclear power plants, research organizations, and university professors. Regulatory policy was the subject of presentations by Mr. G. Arlotto (US NRC, USA), Dr. H. Schultz (GRS, W. Germany), Dr. P. Milella (ENEA-DISP, Italy), Dr. C. Faidy, P. Jamet, and S. Bhandari (EDF/Septen, CEA/CEN, and Framatome, France), and Mr. T. Fukuzawa (MITI, Japan). Dr. F. Nilsson presented revised nondestructive inspection requirements relative to LBB in Sweden. In addition, several papers on the supporting research programs discussed regulatory policy. Questions following the presentations of the papers focused on the impact of various LBB policies or the impact of research findings. Supporting research programs were reviewed on the first and second day by several participants from the US, Japan, Germany, Canada, Italy, Sweden, England, and France.

  9. A survey of wild marine fish identifies a potential origin of an outbreak of viral haemorrhagic septicaemia in wrasse, Labridae, used as cleaner fish on marine Atlantic salmon, Salmo salar L., farms.

    PubMed

    Wallace, I S; Donald, K; Munro, L A; Murray, W; Pert, C C; Stagg, H; Hall, M; Bain, N

    2015-06-01

    Viral haemorrhagic septicaemia virus (VHSV) was isolated from five species of wrasse (Labridae) used as biological controls for parasitic sea lice, predominantly Lepeophtheirus salmonis (Krøyer, 1837), on marine Atlantic salmon, Salmo salar L., farms in Shetland. As part of the epidemiological investigation, 1400 wild marine fish were caught and screened in pools of 10 for VHSV using virus isolation. Eleven pools (8%) were confirmed VHSV positive from: grey gurnard, Eutrigla gurnardus L.; Atlantic herring, Clupea harengus L.; Norway pout, Trisopterus esmarkii (Nilsson); plaice, Pleuronectes platessa L.; sprat, Sprattus sprattus L.; and whiting, Merlangius merlangus L. The isolation of VHSV from grey gurnard is the first documented report in this species. Nucleic acid sequencing of the partial nucleocapsid (N) and glycoprotein (G) genes was carried out for viral characterization. Sequence analysis confirmed that all wild isolates were genotype III, the same as the wrasse, and there was a close genetic similarity between the isolates from wild fish and wrasse on the farms. Infection from these local wild marine fish is the most likely source of VHSV isolated from wrasse on the fish farms. © 2014 Crown Copyright. Journal of Fish Diseases © 2014 John Wiley & Sons Ltd.

  10. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights, as well as different objects such as bags and rearward/forward facing child restraint systems.
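
    Of the three algorithms compared, the Viola-Jones cascade has the most widely available off-the-shelf implementation. A minimal OpenCV sketch of per-frame face detection for such an occupancy check (the input path is a placeholder):

      import cv2

      # Stock Haar cascade shipped with opencv-python.
      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      frame = cv2.imread("seat_camera_frame.png")   # placeholder frame
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      gray = cv2.equalizeHist(gray)                 # helps under non-uniform lighting

      faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                       minNeighbors=5, minSize=(40, 40))
      print(f"{len(faces)} face(s) detected")       # seat occupied if > 0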

  11. Single-particle and collective excitations in 62Ni

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albers, M.; Zhu, S.; Ayangeakaa, A. D.

    In this study, level sequences of rotational character have been observed in several nuclei in the A = 60 mass region. The importance of the deformation-driving πf7/2 and νg9/2 orbitals on the onset of nuclear deformation is stressed. A measurement was performed in order to identify collective rotational structures in the relatively neutron-rich 62Ni isotope. Here, the 26Mg(48Ca, 2α4nγ)62Ni complex reaction at beam energies between 275 and 320 MeV was utilized. Reaction products were identified in mass (A) and charge (Z) with the fragment mass analyzer (FMA) and γ rays were detected with the Gammasphere array. As a result, two collective bands, built upon states of single-particle character, were identified and sizable deformation was assigned to both sequences based on the measured transitional quadrupole moments, thereby quantifying the deformation at high spin. In conclusion, based on cranked Nilsson-Strutinsky calculations and comparisons with deformed bands in the A = 60 mass region, the two rotational bands are understood as being associated with configurations involving multiple f7/2 protons and g9/2 neutrons, driving the nucleus to sizable prolate deformation.

  12. Nuclear orientation of antimony and bromine isotopes

    NASA Astrophysics Data System (ADS)

    Barham, Christopher G.

    The technique of Low Temperature Nuclear Orientation has been used to study neutron-deficient antimony and bromine isotopes. The isotopes were produced at Daresbury Laboratory's Nuclear Structure Facility by 150 MeV 28Si-induced reactions on 93Nb and 54Fe targets, respectively. Further anisotropy measurements on 72,74m,75Br at lower temperature have been used to extend previous data. The magnetic moment of 72Br has been limited to be within the range 0.54 μN … Nilsson configurations. The signs of the magnetic moments of 74m,75Br have been determined to be positive. Several spin assignments have been made in 74Se, the daughter of 74mBr. The magnetic moment of 115Sb has been measured by the technique of NMR-ON to be 3.457(19) μN. Anisotropy measurements have been used to measure the moment of 114Sb as 1.74(12) μN. This value indicates a change from the πd5/2 ⊗ νs1/2 configuration of 116Sb to a πd5/2 ⊗ νg7/2 configuration. Anisotropy measurements have also been used to assign positive moments to 113,114,115Sb.

  13. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.

  14. Critical temperature for shape transition in hot nuclei within covariant density functional theory

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Niu, Y. F.

    2018-05-01

    Prompted by the simple proportional relation between the critical temperature for the pairing transition and the pairing gap at zero temperature, we investigate the relation between the critical temperature for shape transition and the ground-state deformation, taking the even-even 286-304Cm isotopes as examples. The finite-temperature axially deformed covariant density functional theory with BCS pairing correlation is used. Since the Cm isotopes are the newly proposed nuclei with octupole correlations, we studied in detail the free energy surface, the Nilsson single-particle (s.p.) levels, and the components of s.p. levels near the Fermi level in 292Cm. Through this study, the formation of the octupole equilibrium is understood via the contribution of the octupole-driving pairs of single-particle levels near the Fermi surfaces with quantum numbers Ω[N, nz, ml] and Ω[N+1, nz±3, ml], which provides a good manifestation of the octupole correlation. Furthermore, the systematics of deformations, pairing gaps, and the specific heat as functions of temperature for the even-even 286-304Cm isotopes are discussed. Similar to the relation between the critical pairing transition temperature and the pairing gap at zero temperature, Tc = 0.6 Δ(0), a proportional relation between the critical shape transition temperature and the deformation at zero temperature, Tc = 6.6 β(0), is found for both the octupole and the quadrupole shape transitions in the isotopes considered.
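
    The two proportionalities at the end are simple enough to apply directly. A toy evaluation (the zero-temperature gap and deformation inputs are placeholders, with the critical temperatures taken to come out in MeV):

      def tc_pairing(gap0):      # gap0: pairing gap at T = 0 (MeV)
          return 0.6 * gap0      # Tc = 0.6 * Delta(0)

      def tc_shape(beta0):       # beta0: dimensionless deformation at T = 0
          return 6.6 * beta0     # Tc = 6.6 * beta(0)

      print(tc_pairing(0.8), tc_shape(0.25))   # placeholder inputs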

  15. Models and role models.

    PubMed

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in present-day society triggered us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries. Products and novel agents that interfere with either of the processes could also be evaluated. With these separate models in place, a suggestion is made to design computer models to encompass the available information. Models, but also role models, are of the utmost importance in bringing forward and guiding research and researchers. 2015 S. Karger AG, Basel

  16. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.

  17. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  18. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  1. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
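
    The final forecasting step, integrating the physics-based model with an ensemble of parameter draws, can be sketched as follows. The Lorenz-96 system mirrors the test case named in the abstract, but the parameter density here is a stand-in (a Gaussian fit to a retrieved time series) rather than the paper's diffusion forecast:

      import numpy as np

      rng = np.random.default_rng(7)

      def lorenz96_step(x, forcing, dt=0.01):
          # Forward-Euler step of dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F.
          dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing
          return x + dt * dxdt

      # Stand-in for the parameter time series retrieved by Bayesian filtering.
      param_series = 8.0 + 0.5 * rng.standard_normal(500)

      n_ens = 50
      forcings = rng.normal(param_series.mean(), param_series.std(), n_ens)

      x0 = rng.standard_normal(40)             # initial state on 40 grid points
      ensemble = np.tile(x0, (n_ens, 1))
      for _ in range(200):                     # integrate each member forward
          ensemble = np.array([lorenz96_step(x, f)
                               for x, f in zip(ensemble, forcings)])

      print("ensemble-mean forecast of x[0]:", ensemble[:, 0].mean())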

  2. Global Carbon Cycle Modeling in GISS ModelE2 GCM

    NASA Astrophysics Data System (ADS)

    Aleinov, I. D.; Kiang, N. Y.; Romanou, A.; Romanski, J.

    2014-12-01

    Consistent and accurate modeling of the Global Carbon Cycle remains one of the main challenges for Earth System Models. The NASA Goddard Institute for Space Studies (GISS) ModelE2 General Circulation Model (GCM) was recently equipped with a complete Global Carbon Cycle algorithm, consisting of three integrated components: the Ent Terrestrial Biosphere Model (Ent TBM), an Ocean Biogeochemistry Module, and an atmospheric CO2 tracer. Ent TBM provides CO2 fluxes from the land surface to the atmosphere. Its biophysics utilizes the well-known photosynthesis functions of Farquhar, von Caemmerer, and Berry and of Farquhar and von Caemmerer, and the stomatal conductance of Ball and Berry. Its phenology is based on temperature, drought, and radiation fluxes, and growth is controlled via allocation of carbon from labile carbohydrate reserve storage to different plant components. Soil biogeochemistry is based on the Carnegie-Ames-Stanford (CASA) model of Potter et al. The ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM) computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. Atmospheric CO2 is advected with a quadratic upstream algorithm implemented in the atmospheric part of ModelE2. Here we present results for pre-industrial equilibrium and modern transient simulations and provide comparison to available observations. We also discuss the process of validation and tuning of particular algorithms used in the model.
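
    As one concrete ingredient, the Ball and Berry stomatal conductance used in the Ent TBM biophysics is a one-line relation between conductance, net assimilation, humidity, and CO2 at the leaf surface. A sketch with placeholder coefficients (the model's actual parameter values are not given in the abstract):

      def ball_berry(a_net, rh, cs, m=9.0, b=0.01):
          """Ball-Berry stomatal conductance gs = m * A * h_s / c_s + b.

          a_net : net CO2 assimilation (mol m^-2 s^-1)
          rh    : relative humidity at the leaf surface (0-1)
          cs    : CO2 mole fraction at the leaf surface (mol mol^-1)
          m, b  : empirical slope and intercept (placeholders)
          """
          return m * a_net * rh / cs + b

      # Example: 10 umol m^-2 s^-1 assimilation, 70% humidity, 400 ppm CO2.
      print(ball_berry(10e-6, 0.7, 400e-6), "mol m^-2 s^-1")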

  3. Target Scattering Metrics: Model-Model and Model-Data Comparisons

    DTIC Science & Technology

    2017-12-13

    …measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals… Candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons.

  4. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406

  5. [Bone remodeling and modeling/mini-modeling].

    PubMed

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone of the fetal and developmental stages, while bone remodeling (replacement of old bone by new bone) is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become the active form of osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporosis treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, the mature, active form of osteoblasts is localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts are formed over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, are introduced.

  6. Vector models and generalized SYK models

    DOE PAGES

    Peng, Cheng

    2017-05-23

    Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  7. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2014-09-08

    Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. Copyright © 2014 John Wiley & Sons, Inc.
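
    The workflow described in this unit is driven from a short Python script. A minimal sketch of the standard automodel run (file and code names are placeholders, and class spellings can differ between MODELLER releases):

      # Minimal MODELLER homology-modeling script (paths/names are placeholders).
      from modeller import environ
      from modeller.automodel import automodel

      env = environ()
      env.io.atom_files_directory = ['.']           # directory with template PDBs

      a = automodel(env,
                    alnfile='target_template.ali',  # PIR target-template alignment
                    knowns='template_pdb',          # template structure code
                    sequence='target_seq')          # target sequence code
      a.starting_model = 1
      a.ending_model = 5                            # build five candidate models
      a.make()                                      # writes target_seq.B*.pdb files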

  8. Geologic Framework Model Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models

  9. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  10. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
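
    The paper's estimator is specific to structural equation models, but the basic mechanics of frequentist model averaging, weighting every candidate's estimate instead of selecting one winner, can be illustrated generically with information-criterion weights (an illustration with made-up numbers, not the authors' method):

      import numpy as np

      # Candidate models: log-likelihoods, parameter counts, and estimates of a
      # common quantity of interest (all values invented for illustration).
      loglik = np.array([-120.3, -118.9, -118.5])
      n_params = np.array([3, 5, 8])
      estimates = np.array([0.42, 0.47, 0.49])

      aic = -2.0 * loglik + 2.0 * n_params
      w = np.exp(-0.5 * (aic - aic.min()))
      w /= w.sum()                          # Akaike weights over the candidate set

      print("weights:", np.round(w, 3))
      print("model-averaged estimate:", round(float(w @ estimates), 3))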

  11. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    PubMed

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  12. Translating building information modeling to building energy modeling using model view definition.

    PubMed

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  13. Models Archive and ModelWeb at NSSDC

    NASA Astrophysics Data System (ADS)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need in the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF), and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 accesses to the models archive.

  14. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  15. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    PubMed

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.

  16. Integrity modelling of tropospheric delay models

    NASA Astrophysics Data System (ADS)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

    The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst-case performance of these models at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data is present) and then taking the point where the cumulative distribution would reach an exceedance level of 10^-7. While the resulting standard deviation is much higher than the standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM Reanalysis numerical weather model data and the raytracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual
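
    The overbounding logic sketched above, extrapolating the empirical residual distribution to a rare exceedance level and converting the result into a conservative Gaussian sigma, can be illustrated with synthetic residuals (heavy-tailed on purpose; all numbers are stand-ins for the ERA-INTERIM/raytracing residuals):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)

      # Synthetic delay residuals (m) with heavier-than-Gaussian tails.
      residuals = 0.04 * rng.standard_t(5, size=2_000_000)
      abs_res = np.abs(residuals)

      # Fit an exponential tail beyond the 99.9th percentile and extrapolate
      # it down to the 1e-7 exceedance level.
      u = np.quantile(abs_res, 0.999)
      scale = (abs_res[abs_res > u] - u).mean()      # exponential scale (MLE)
      x_tail = u + scale * np.log(1e-3 / 1e-7)       # P(|X| > x_tail) ~ 1e-7

      # Gaussian sigma whose own 1e-7 tail just covers that point.
      sigma_ob = x_tail / norm.isf(1e-7)
      print(f"best-fit sigma = {residuals.std():.3f} m, "
            f"overbound sigma = {sigma_ob:.3f} m")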

  17. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  18. Modeling complexes of modeled proteins.

    PubMed

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
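
    The accuracy levels quoted here are Cα RMSD values after optimal superposition. A compact numpy implementation of that measure (Kabsch alignment followed by RMSD) for two equal-length coordinate arrays:

      import numpy as np

      def ca_rmsd(P, Q):
          """RMSD of two (N, 3) C-alpha arrays after optimal superposition."""
          P = P - P.mean(axis=0)                # center both structures
          Q = Q - Q.mean(axis=0)
          U, S, Vt = np.linalg.svd(P.T @ Q)     # Kabsch algorithm
          d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
          R = U @ np.diag([1.0, 1.0, d]) @ Vt
          return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))

      # Sanity check: a rotated copy of a structure gives RMSD ~ 0.
      rng = np.random.default_rng(0)
      P = rng.normal(size=(100, 3))
      c, s = np.cos(0.3), np.sin(0.3)
      Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      print(ca_rmsd(P, P @ Rz.T))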

  1. Controls on the Climates of Tidally Locked Terrestrial Planets

    NASA Astrophysics Data System (ADS)

    Yang, J.; Cowan, N. B.; Abbot, D. S.

    2013-12-01

    (OET) is included, although the compensation between AET and OET is incomplete. To summarize, we are able to construct a realistic low-order model for the climate of tidally locked terrestrial planets, including the cloud behavior, using only the two constraints. This bodes well for the interpretation of complex GCMs and future observations of such planets using, for example, the James Webb Space Telescope. Cited papers: [1]. Sobel, A. H., J. Nilsson and L. M. Polvani: The weak temperature gradient approximation and balanced tropical moisture waves, J. Atmos. Sci., 58, 3650-65, 2001. [2]. Hartmann, D. L. and K. Larson, An important constraint on tropical cloud-climate feedback, Geophys. Res. Lett., 29, 1951-54, 2002. [3]. Yang, J., N. B. Cowan and D. S. Abbot: Stabilizing cloud feedback dramatically expands the habitable zone of tidally locked planets, ApJ. Lett., 771, L45, 2013.

  2. Metabolic network modeling with model organisms.

    PubMed

    Yilmaz, L Safak; Walhout, Albertha Jm

    2017-02-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems level predictions of metabolism in a variety of organisms. Different types of predictions with different accuracy levels can be made depending on the applied experimental constraints ranging from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for basic understanding of metabolism, and in the case of animal models, for the study of metabolic human diseases. Here, we discuss GSMNMs of most highly used model organisms with the emphasis on recent reconstructions. Published by Elsevier Ltd.

  3. Metabolic network modeling with model organisms

    PubMed Central

    Yilmaz, L. Safak; Walhout, Albertha J.M.

    2017-01-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems-level predictions of metabolism in a variety of organisms. Different types of predictions with different accuracy levels can be made depending on the applied experimental constraints, ranging from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for a basic understanding of metabolism and, in the case of animal models, for the study of metabolic human diseases. Here, we discuss the GSMNMs of the most highly used model organisms with an emphasis on recent reconstructions. PMID:28088694

  4. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
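
    The unmarked package cited above is R software; as a language-neutral sketch of the underlying likelihood, the following Python function integrates a removal-sampling multinomial over a Poisson prior on abundance. The parameter values are purely illustrative.

    ```python
    # Integrated likelihood of a multinomial N-mixture model for one site under
    # a removal-sampling protocol: counts ~ Multinomial(N, pi), N ~ Poisson(lam).
    import numpy as np
    from scipy import stats

    def removal_loglik(counts, lam, p, n_max=500):
        counts = np.asarray(counts)
        J = len(counts)
        pi = p * (1 - p) ** np.arange(J)          # P(first capture on pass j)
        probs = np.append(pi, 1 - pi.sum())       # plus the "never caught" cell
        n_caught = counts.sum()
        total = 0.0
        for N in range(n_caught, n_max):          # marginalize abundance N
            x = np.append(counts, N - n_caught)
            total += stats.multinomial.pmf(x, n=N, p=probs) * stats.poisson.pmf(N, lam)
        return np.log(total)

    print(removal_loglik([25, 12, 7], lam=60.0, p=0.4))
    ```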

  5. EasyModeller: A graphical interface to MODELLER

    PubMed Central

    2010-01-01

    Background MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to get started with MODELLER, as it is command-line based and requires knowledge of basic Python scripting to use efficiently. Findings The study was designed with the aim of developing "EasyModeller" as a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system. PMID:20712861
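
    For readers unfamiliar with the command-line workflow that EasyModeller wraps, a typical minimal MODELLER script looks like the sketch below; the alignment file, template code, and sequence name are placeholders.

    ```python
    # Classic MODELLER homology-modeling script of the kind EasyModeller
    # generates for the user; all file and entry names are placeholders.
    from modeller import environ
    from modeller.automodel import automodel

    env = environ()
    a = automodel(env,
                  alnfile='target_template.ali',   # target-template alignment (PIR)
                  knowns='template_pdb',           # template structure code
                  sequence='target_seq')           # target sequence name
    a.starting_model = 1
    a.ending_model = 5                             # build five candidate models
    a.make()                                       # writes the model PDB files
    ```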

  6. Skyrme random-phase-approximation description of the lowest K^π = 2^+_γ states in axially deformed nuclei

    NASA Astrophysics Data System (ADS)

    Nesterenko, V. O.; Kartavenko, V. G.; Kleinig, W.; Kvasil, J.; Repko, A.; Jolos, R. V.; Reinhard, P.-G.

    2016-03-01

    The lowest quadrupole γ-vibrational K^π = 2^+ states in axially deformed rare-earth (Nd, Sm, Gd, Dy, Er, Yb, Hf, W) and actinide (U) nuclei are systematically investigated within the separable random-phase approximation (SRPA) based on the Skyrme functional. The energies E_γ and reduced transition probabilities B(E2) of 2^+_γ states are calculated with the Skyrme forces SV-bas and SkM*. The energies of two-quasiparticle configurations forming the SRPA basis are corrected by using the pairing blocking effect. This results in a systematic downshift of E_γ by 0.3-0.5 MeV and thus in a better agreement with the experiment, especially in the Sm, Gd, Dy, Hf, and W regions. For other isotopic chains, a noticeable overestimation of E_γ and too weak collectivity of 2^+_γ states still persist. It is shown that domains of nuclei with low and high 2^+_γ collectivity are related to the structure of the lowest two-quasiparticle states and conservation of the Nilsson selection rules. The description of 2^+_γ states with SV-bas and SkM* is similar in light rare-earth nuclei but deviates in heavier nuclei. However, SV-bas much better reproduces the quadrupole deformation and the energy of the isoscalar giant quadrupole resonance. The accuracy of SRPA is justified by comparison with exact RPA. The calculations suggest that a further development of the self-consistent calculation schemes is needed for a systematically satisfactory description of the 2^+_γ states.

  7. From cluster structures to nuclear molecules: The role of nodal structure of the single-particle wave functions

    NASA Astrophysics Data System (ADS)

    Afanasjev, A. V.; Abusara, H.

    2018-02-01

    The nodal structure of the density distributions of the single-particle states occupied in rod-shaped, hyper- and megadeformed structures of nonrotating and rotating N ≈ Z nuclei has been investigated in detail. The single-particle states with the Nilsson quantum numbers of the [NN0]1/2 (with N from 0 to 5) and [N, N-1, 1]Ω (with N from 1 to 3 and Ω = 1/2, 3/2) types are considered. These states are building blocks of extremely deformed shapes in nuclei with mass numbers A ≤ 50. Because of the (near) axial symmetry and large elongation of such structures, the wave functions of the occupied single-particle states are dominated by a single basis state in the cylindrical basis. This basis state defines the nodal structure of the single-particle density distribution. The nodal structure of the single-particle density distributions allows us to understand in a relatively simple way the necessary conditions for α clusterization and the suppression of α clusterization with increasing mass number. It also explains in a natural way the coexistence of ellipsoidal mean-field-type structures and nuclear molecules at similar excitation energies, and the features of particle-hole excitations connecting these two types of structures. Our analysis of the nodal structure of the single-particle density distributions does not support the existence of a quantum liquid phase for the deformations and nuclei under study.

  8. Determination of Soil Moisture Content using Laboratory Experimental and Field Electrical Resistivity Values

    NASA Astrophysics Data System (ADS)

    Hazreek, Z. A. M.; Rosli, S.; Fauziah, A.; Wijeyesekera, D. C.; Ashraf, M. I. M.; Faizal, T. B. M.; Kamarudin, A. F.; Rais, Y.; Dan, M. F. Md; Azhar, A. T. S.; Hafiz, Z. M.

    2018-04-01

    The efficiency of civil engineering structures requires comprehensive geotechnical data obtained from site investigation. In the past, conventional site investigation relied heavily on drilling techniques and thus suffered from several limitations: it is time-consuming, expensive, and yields limited data coverage. Consequently, this study presents the determination of soil moisture content using laboratory experimental and field electrical resistivity values (ERV). Field and laboratory electrical resistivity (ER) tests were performed using an ABEM SAS4000 system and a Nilsson400 soil resistance meter. The soil samples used for the resistivity tests were subjected to characterization tests, specifically particle size distribution and moisture content tests according to BS1377 (1990). Field ER data were processed using the RES2DINV software, while laboratory ER data were analyzed using SPSS and Excel. The correlation of ERV with moisture content shows a medium relationship (r = 0.506). Moreover, the coefficient of determination demonstrates that the statistical correlation obtained was very good (R2 = 0.9382). In order to determine soil moisture content from the statistical correlation (w = 110.68ρ^-0.347), a correction factor, C, of 19.27 was established through the laboratory and field ERV. Finally, this study has shown that a basic geotechnical soil property, namely water content, can be determined using an integration of laboratory and field ERV data analysis, which can thus complement the conventional approach thanks to its economy, speed, and wider data coverage.
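
    A small sketch of applying the paper's correlation is given below; note that the exact way the correction factor C enters is our assumption, since the abstract does not spell it out.

    ```python
    # Moisture content from field resistivity via the lab-derived fit
    # w = 110.68 * rho**-0.347 (w in %, rho in ohm.m). Applying C = 19.27 to
    # scale field resistivity to its laboratory equivalent is an assumption.
    def moisture_content(rho_field, C=19.27):
        rho_lab_equiv = rho_field / C
        return 110.68 * rho_lab_equiv ** -0.347

    print(moisture_content(500.0))   # moisture content (%) for 500 ohm.m field ERV
    ```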

  9. The Onsala Twin Telescope Project

    NASA Astrophysics Data System (ADS)

    Haas, R.

    2013-08-01

    This paper described the Onsala Twin Telescope project. The project aims at the construction of two new radio telescopes at the Onsala Space Observatory, following the VLBI2010 concept. The project starts in 2013 and is expected to be finalized within 4 years. Z% O. Rydbeck. Chalmers Tekniska Högskola, Göteborg, ISBN 91-7032-621-5, 407-823, 1991. B. Petrachenko, A. Niell, D. Behrend, B. Corey, J. Böhm, P. Charlot, A. Collioud, J. Gipson, R. Haas, Th. Hobiger, Y. Koyama, D. MacMillan, Z. Malkin, T. Nilsson, A. Pany, G. Tuccari, A. Whitney, and J. Wresnik. Design Aspects of the VLBI2010 System. NASA/TM-2009-214180, 58 pp., 2009. R. Haas, G. Elgered, J. Löfgren, T. Ning, and H.-G. Scherneck. Onsala Space Observatory - IVS Network Station. In K. D. Baver and D. Behrend, editors, International VLBI Service for Geodesy and Astrometry 2011 Annual Report, NASA/TP-2012-217505, 88-91, 2012. H.-G. Scherneck, G. Elgered, J. M. Johansson, and B. O. Rönnäng. Phys. Chem. Earth, Vol. 23, No. 7-8, 811-823, 1998. A. R. Whitney. Ph.D. thesis, Dept. of Electrical engineering, MIT Cambridge, MA., 1974. B. A. Harper, J. D. Kepert, and J. D. Ginger. Guidelines for converting between various wind averaging periods in tropical cyclone conditions. WMO/TD-No. 1555, 64 pp., 2010 (available at \\url{http://www.wmo.int/pages/prog/www/tcp/documents/WMO_TD_1555_en.pdf})

  10. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose, falling into two broad categories: Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
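
    As a concrete example of the criterion-based side of these techniques, Akaike weights can be computed directly from the candidate models' AIC values; the numbers below are illustrative.

    ```python
    # Akaike weights: models with lower AIC receive exponentially larger
    # weights; these serve as the model-averaging weights in AICMA.
    import numpy as np

    def akaike_weights(aic):
        aic = np.asarray(aic, dtype=float)
        delta = aic - aic.min()          # AIC differences from the best model
        w = np.exp(-0.5 * delta)
        return w / w.sum()               # normalized weights summing to 1

    print(akaike_weights([1204.3, 1206.1, 1210.8, 1219.0]))
    ```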

  11. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  12. Radiation Environment Modeling for Spacecraft Design: New Model Developments

    NASA Technical Reports Server (NTRS)

    Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

    2006-01-01

    A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

  13. Leadership Models.

    ERIC Educational Resources Information Center

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  14. Building mental models by dissecting physical models.

    PubMed

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model dissection" as an alternative to "model building," whereby instructors could make efficient use of supervisory resources while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a significantly greater improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of model dissection, suggest that it could be a valuable pedagogical tool. © 2015 The International Union of Biochemistry and Molecular Biology.

  15. Better models are more effectively connected models

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity

  16. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
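
    The variance split that the HBMA tree generalizes can be sketched in a few lines; this is the generic BMA decomposition, not the authors' HBMA code, and the inputs are illustrative.

    ```python
    # Law-of-total-variance decomposition under model weights: total variance
    # = weighted within-model variance + between-model variance.
    import numpy as np

    def bma_decompose(means, variances, weights):
        m, v, w = (np.asarray(a, dtype=float) for a in (means, variances, weights))
        w = w / w.sum()
        bma_mean = np.sum(w * m)                    # model-averaged prediction
        within = np.sum(w * v)                      # within-model variance
        between = np.sum(w * (m - bma_mean) ** 2)   # between-model variance
        return bma_mean, within, between, within + between

    print(bma_decompose([2.1, 2.6, 1.8], [0.20, 0.15, 0.30], [0.5, 0.3, 0.2]))
    ```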

  17. SUMMA and Model Mimicry: Understanding Differences Among Land Models

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.

    2016-12-01

    Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in

  18. Modeller's attitude in catchment modelling: a comparative study

    NASA Astrophysics Data System (ADS)

    Battista Chirico, Giovanni

    2010-05-01

    Ten modellers were invited to predict, independently of each other, the discharge of the artificial Chicken Creek catchment in north-east Germany over a simulation period of three years, given only soil texture, terrain and meteorological data. No discharge data or other observations of state variables and fluxes within the catchment were provided. Modellers did, however, have the opportunity to visit the experimental catchment and inspect aerial photos of the catchment from its initial development stage onwards. This has been a unique comparative study focussing on how different modellers deal with the key issues in predicting discharge in ungauged catchments: 1) choice of the model structure; 2) identification of model parameters; 3) identification of model initial and boundary conditions. The first general lesson learned during this study was that the modeller is part of the entire modelling process and has a major bearing on the model results, particularly in ungauged catchments where there are more degrees of freedom in making modelling decisions. Modellers' attitudes during the stages of model implementation and parameterisation were deeply influenced by their experience from previous modelling studies. A common outcome was that modellers were mainly oriented towards applying process-based models able to exploit the available data on the physical properties of the catchment, which could therefore be more suitable to cope with the lack of data on state variables or fluxes. The second general lesson learned during this study concerned the role of dominant processes. We believed that the modelling task would be much easier in an artificial catchment, where heterogeneity was expected to be negligible and processes simpler, than in catchments that have evolved over a longer time period. The results of the models were expected to converge, and this would have been a good starting point to proceed for a model

  19. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as Bayesian methods. The application shows some results of recent research projects in medicine and business administration.
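
    A minimal sketch of the classical side of this workflow, on synthetic data: candidate regressor sets for a logistic regression are compared by AIC using statsmodels.

    ```python
    # Compare nested logistic regression models by AIC; the data are synthetic
    # and only the first two regressors truly affect the response.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

    for cols in ([0], [0, 1], [0, 1, 2]):
        Xc = sm.add_constant(X[:, cols])
        fit = sm.Logit(y, Xc).fit(disp=0)
        print(cols, round(fit.aic, 1))   # choose the regressor set with lowest AIC
    ```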

  20. Building Mental Models by Dissecting Physical Models

    ERIC Educational Resources Information Center

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions require greater supervision to…

  1. Multiscale musculoskeletal modelling, data–model fusion and electromyography-informed modelling

    PubMed Central

    Zhang, J.; Heidlauf, T.; Sartori, M.; Besier, T.; Röhrle, O.; Lloyd, D.

    2016-01-01

    This paper proposes methods and technologies that advance the state of the art for modelling the musculoskeletal system across the spatial and temporal scales; and storing these using efficient ontologies and tools. We present population-based modelling as an efficient method to rapidly generate individual morphology from only a few measurements and to learn from the ever-increasing supply of imaging data available. We present multiscale methods for continuum muscle and bone models; and efficient mechanostatistical methods, both continuum and particle-based, to bridge the scales. Finally, we examine both the importance that muscles play in bone remodelling stimuli and the latest muscle force prediction methods that use electromyography-assisted modelling techniques to compute musculoskeletal forces that best reflect the underlying neuromuscular activity. Our proposal is that, in order to have a clinically relevant virtual physiological human, (i) bone and muscle mechanics must be considered together; (ii) models should be trained on population data to permit rapid generation and use underlying principal modes that describe both muscle patterns and morphology; and (iii) these tools need to be available in an open-source repository so that the scientific community may use, personalize and contribute to the database of models. PMID:27051510

  2. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
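
    A schematic sketch of the general idea (not the PEST implementation): the cheap proxy fills the Jacobian by finite differences, while the expensive original model runs only to compute residuals and test each upgrade.

    ```python
    # Proxy-assisted Gauss-Newton step: Jacobian from run_proxy (cheap),
    # residuals and upgrade testing from run_model (expensive).
    import numpy as np

    def proxy_assisted_step(theta, obs, run_model, run_proxy, h=1e-4):
        r = obs - run_model(theta)                    # expensive run: residuals
        J = np.empty((len(obs), len(theta)))
        base = run_proxy(theta)
        for i in range(len(theta)):                   # cheap finite differences
            dt = np.zeros(len(theta)); dt[i] = h
            J[:, i] = (run_proxy(theta + dt) - base) / h
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # Gauss-Newton upgrade
        theta_new = theta + step
        accept = np.sum((obs - run_model(theta_new)) ** 2) < np.sum(r ** 2)
        return theta_new if accept else theta

    # Toy demo where model and proxy coincide (one scalar parameter).
    f = lambda th: th[0] * np.array([1.0, 2.0, 3.0])
    print(proxy_assisted_step(np.array([0.5]), np.array([1.0, 2.0, 3.0]), f, f))
    ```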

  3. Modelling, teachers' views on the nature of modelling, and implications for the education of modellers

    NASA Astrophysics Data System (ADS)

    Justi, Rosária S.; Gilbert, John K.

    2002-04-01

    In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.

  4. Neuropeptides and nitric oxide synthase in the gill and the air-breathing organs of fishes.

    PubMed

    Zaccone, Giacomo; Mauceri, Angela; Fasulo, Salvatore

    2006-05-01

    Anatomical and histochemical studies have demonstrated that the bulk of autonomic neurotransmission in the fish gill is attributed to cholinergic and adrenergic mechanisms (Nilsson. 1984. In: Hoar WS, Randall DJ, editors. Fish physiology, Vol. XA. Orlando: Academic Press. p 185-227; Donald. 1998. In: Evans DH, editor. The physiology of fishes, 2nd edition. Boca Raton: CRC Press. p 407-439). In many tissues, blockade of adrenergic and cholinergic transmission results in residual responses to nerve stimulation, which are termed NonAdrenergic, NonCholinergic (NANC). The discovery of nitric oxide (NO) has provided a basis for explaining many examples of NANC transmission, with accumulated physiological and pharmacological data indicating its function as a primary NANC transmitter. Little is known about NANC neurotransmission, and studies on neuropeptides and NOS (Nitric Oxide Synthase) are very fragmentary in the gill and the air-breathing organs of fishes. Knowledge of the distribution of nerves and the effects of perfusing agonists may help to understand the mechanisms of perfusion regulation in the gill (Olson. 2002. J Exp Zool 293:214-231). Air breathing as a mechanism for acquiring oxygen has evolved independently in several groups of fishes, necessitating modifications of the organs responsible for the exchange of gases. Aquatic hypoxia in fresh waters has probably been the most important selective force in the evolution of air breathing in vertebrates. Fishes respire with gills that are complex structures with many different effectors and potential control systems. Autonomic innervation of the gill has received considerable attention. An excellent review of branchial innervation is Sundin and Nilsson's (2002. J Exp Zool 293:232-248), with an emphasis on the anatomy and basic functioning of afferent and efferent fibers of the branchial nerves. The chapters by Evans (2002. J Exp Zool 293:336-347) and Olson (2002) provide new challenges about a variety of

  5. NARSTO critical review of photochemical models and modeling

    NASA Astrophysics Data System (ADS)

    Russell, Armistead; Dennis, Robin

    Photochemical air quality models play a central role both in scientific investigation of how pollutants evolve in the atmosphere and in the development of policies to manage air quality. In the past 30 years, these models have evolved from rather crude representations of the physics and chemistry affecting trace species to their current state: comprehensive, but not complete. The evolution has included advancements not only in the level of process descriptions, but also in the computational implementation, including numerical methods. As part of the NARSTO Critical Reviews, this article discusses the current strengths and weaknesses of air quality models and the modeling process. Current Eulerian models are found to represent well the primary processes impacting the evolution of trace species in most cases, though some exceptions may exist. For example, sub-grid-scale processes, such as concentrated power plant plumes, are treated only approximately. It is not apparent how much such approximations affect their results and the policies based upon those results. A significant weakness has been in how investigators have addressed, and communicated, such uncertainties. Studies find that major uncertainties are due to model inputs, e.g., emissions and meteorology, more so than the model itself. One of the primary weaknesses identified is in the modeling process, not the models. Evaluation has been limited, largely due to data constraints. Seldom is there ample observational data to conduct a detailed model intercomparison using consistent data (e.g., the same emissions and meteorology). Further model advancement, and development of greater confidence in the use of models, is hampered by the lack of thorough evaluation and intercomparisons. Model advances are seen in the use of new tools for extending the interpretation of model results, e.g., process and sensitivity analysis, modeling systems to facilitate their use, and extension of model capabilities, e.g., aerosol dynamics

  6. Coupling Climate Models and Forward-Looking Economic Models

    NASA Astrophysics Data System (ADS)

    Judd, K.; Brock, W. A.

    2010-12-01

    Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin. Current climate models range from General Circulation Models (GCMs) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCMs) help us understand the dynamics of GCMs. The same is true in economics with Computable General Equilibrium Models (CGEs), where some models are infinite-dimensional multidimensional differential equations but some are simple models. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCMs do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North et al. (1981), Wu and North (2007)). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005), but in only a "one-dimensional" EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two-dimensional EBCM model on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into the intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models. However, this will be more complex than existing models with forward

  7. EIA model documentation: Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-30

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models (Public Law 94-385, section 57.b.2). The PMM models petroleum refining activities, the marketing of products, and the production of natural gas liquids and domestic methanol, and projects petroleum prices and sources of supply for meeting demand. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption.

  8. Advances in Geoscience Modeling: Smart Modeling Frameworks, Self-Describing Models and the Role of Standardized Metadata

    NASA Astrophysics Data System (ADS)

    Peckham, Scott

    2016-04-01

    Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders
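
    A minimal sketch of such a standardized, self-describing interface, loosely following the naming conventions of the CSDMS Basic Model Interface; the variable name and placeholder physics are assumptions for illustration.

    ```python
    # A toy process model exposing BMI-style control and description functions,
    # so a framework can drive it without knowing its internals.
    class ToyRunoffModel:
        def initialize(self, config_file):        # control: set up model state
            self.time, self.dt = 0.0, 3600.0      # seconds
            self.runoff = 1.0

        def update(self):                         # control: advance one time step
            self.runoff *= 0.8                    # placeholder recession physics
            self.time += self.dt

        def finalize(self):                       # control: clean up
            pass

        def get_output_var_names(self):           # self-description metadata
            return ("land_surface_water__runoff_volume_flux",)

        def get_value(self, name):                # standardized value access
            return self.runoff
    ```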

  9. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
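
    As a sketch of the simplest water-budget form mentioned above, recharge can be estimated as the residual of the other budget terms; units and values below are illustrative.

    ```python
    # Water-budget residual estimate of recharge: R = P - ET - RO - dS
    # (all terms in mm over the same period).
    def recharge_residual(precip, et, runoff, storage_change):
        return precip - et - runoff - storage_change

    print(recharge_residual(precip=900.0, et=600.0, runoff=150.0, storage_change=20.0))
    ```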

  10. Models for Models: An Introduction to Polymer Models Employing Simple Analogies

    NASA Astrophysics Data System (ADS)

    Tarazona, M. Pilar; Saiz, Enrique

    1998-11-01

    An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
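
    In the same spirit of starting from the simplest model, the freely jointed chain can be sampled by Monte Carlo in a few lines; the mean squared end-to-end distance should approach N b^2 for N bonds of length b.

    ```python
    # Monte Carlo freely jointed chain: bonds are independent random unit
    # vectors, so <R^2> approaches N * b**2 over many chains.
    import numpy as np

    rng = np.random.default_rng(1)

    def mean_sq_end_to_end(n_bonds, b=1.0, n_chains=2000):
        v = rng.normal(size=(n_chains, n_bonds, 3))        # isotropic directions
        v *= b / np.linalg.norm(v, axis=2, keepdims=True)  # fixed bond length b
        R = v.sum(axis=1)                                  # end-to-end vectors
        return (R ** 2).sum(axis=1).mean()

    print(mean_sq_end_to_end(100))   # close to 100 * b**2
    ```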

  11. Hybrid Model of IRT and Latent Class Models.

    ERIC Educational Resources Information Center

    Yamamoto, Kentaro

    This study developed a hybrid of item response theory (IRT) models and latent class models, which combined the strengths of each type of model. The primary motivation for developing the new model is to describe characteristics of examinees' knowledge at the time of the examination. Hence, the application of the model lies mainly in so-called…

  12. New 3D model for dynamics modeling

    NASA Astrophysics Data System (ADS)

    Perez, Alain

    1994-05-01

    The wrist articulation represents one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along their surfaces and along the faces of the five metacarpals of the hand and the two bones of the arm. The dynamics of the wrist are fundamental for hand movement, but the system is so complex that it still remains incompletely explored. This work is part of a new concept of computer-assisted surgery, which consists of developing computer models to improve surgical procedures by predicting their consequences. The modeling of the wrist dynamics is based first on a static 3D model of its bones. This 3D model must optimise the collision detection procedure, which is the necessary step for estimating the physical contact constraints. As many other available computer vision models do not fit this problem with enough precision, a new 3D model has been developed based on the median axis of the digital distance map of the bones' reconstructed volume. The collision detection procedure is then simplified, since contacts are detected between spheres. Experiments with this original 3D dynamic model produce realistic computer animation images of solids in contact. It is now necessary to detect ligaments on digital medical images and to model them in order to complete the wrist model.
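
    A sketch of why a sphere-based representation simplifies collision detection: contact between two bodies reduces to pairwise sphere overlap tests; the geometry below is illustrative.

    ```python
    # Collision test between two bodies, each represented by spheres placed
    # along its median axis (centers: (n,3) arrays, radii: (n,) arrays).
    import numpy as np

    def spheres_collide(centers_a, radii_a, centers_b, radii_b):
        d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=2)
        return bool(np.any(d < radii_a[:, None] + radii_b[None, :]))

    a_c = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]); a_r = np.array([1.0, 0.5])
    b_c = np.array([[1.2, 0.0, 0.0]]);                  b_r = np.array([0.3])
    print(spheres_collide(a_c, a_r, b_c, b_r))          # True: first pair overlaps
    ```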

  13. Bayesian model evidence as a model evaluation metric

    NASA Astrophysics Data System (ADS)

    Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang

    2017-04-01

    When building environmental systems models, we are typically confronted with the questions of how to choose an appropriate model (i.e., which processes to include or neglect) and how to measure its quality. Various metrics have been proposed that should guide the modeller towards the most robust and realistic representation of the system under study. Criteria for evaluation often address aspects of accuracy (absence of bias) or of precision (absence of unnecessary variance) and need to be combined in a meaningful way in order to address the inherent bias-variance dilemma. We suggest using Bayesian model evidence (BME) as a model evaluation metric that implicitly performs a tradeoff between bias and variance. BME is typically associated with model weights in the context of Bayesian model averaging (BMA). However, it can also be seen as a model evaluation metric in a single-model context or in model comparison. It combines a measure for goodness of fit with a penalty for unjustifiable complexity. Unjustifiable refers to the fact that the appropriate level of model complexity is limited by the amount of information available for calibration. Derived in a Bayesian context, BME naturally accounts for measurement errors in the calibration data as well as for input and parameter uncertainty. BME is therefore perfectly suitable to assess model quality under uncertainty. We will explain in detail and with schematic illustrations what BME measures, i.e. how complexity is defined in the Bayesian setting and how this complexity is balanced with goodness of fit. We will further discuss how BME compares to other model evaluation metrics that address accuracy and precision, such as the predictive logscore, or other model selection criteria such as the AIC, BIC or KIC. Although computationally more expensive than other metrics or criteria, BME represents an appealing alternative because it provides a global measure of model quality. Even if not applicable to each and every case, we aim
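
    A brute-force sketch of what BME computes, assuming a Gaussian error model: the likelihood is averaged over parameter samples drawn from the prior, which is exactly where the implicit complexity penalty comes from (a diffuse prior spreads probability over poor fits).

    ```python
    # Monte Carlo estimate of log Bayesian model evidence, assuming independent
    # Gaussian measurement errors with standard deviation sigma.
    import numpy as np

    def log_bme(obs, simulate, prior_sampler, sigma, n=10000):
        thetas = prior_sampler(n)                          # draws from the prior
        sims = np.array([simulate(t) for t in thetas])
        ll = (-0.5 * np.sum((sims - obs) ** 2, axis=1) / sigma**2
              - 0.5 * len(obs) * np.log(2 * np.pi * sigma**2))
        m = ll.max()
        return m + np.log(np.mean(np.exp(ll - m)))         # stable log-mean-exp

    rng = np.random.default_rng(3)
    obs = np.array([1.0, 1.2, 0.9])                        # toy one-parameter model
    print(log_bme(obs, lambda t: np.full(3, t), lambda n: rng.normal(1.0, 0.5, n), 0.3))
    ```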

  14. Integrative structure modeling with the Integrative Modeling Platform.

    PubMed

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  15. Modeling volatility using state space models.

    PubMed

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
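
    A minimal sketch of the model class described, assuming log-volatility follows a hidden AR(1) state observed with additional measurement noise; the Kalman recursions below separate the two noise sources, and the relaxation time of shocks is roughly -1/ln(phi) time steps.

    ```python
    # Kalman filter for y_t = x_t + obs. noise (var r), x_t = phi*x_{t-1} +
    # dyn. noise (var q): a linear state space model for log-volatility.
    import numpy as np

    def kalman_ar1(y, phi, q, r):
        x, P = 0.0, q / (1 - phi ** 2)              # stationary initialization
        xs = []
        for yt in y:
            x, P = phi * x, phi ** 2 * P + q        # predict
            K = P / (P + r)                         # Kalman gain
            x, P = x + K * (yt - x), (1 - K) * P    # update with observation
            xs.append(x)
        return np.array(xs)

    rng = np.random.default_rng(2)
    T, phi, q, r = 500, 0.98, 0.02, 0.5             # relaxation ~ -1/ln(phi) ~ 50 steps
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y = x + rng.normal(scale=np.sqrt(r), size=T)    # noisy "empirical volatility"
    print(np.corrcoef(x, kalman_ar1(y, phi, q, r))[0, 1])   # filtered state tracks truth
    ```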

  16. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  17. Downscaling GISS ModelE Boreal Summer Climate over Africa

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June- September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, it eliminates the ModelE double ITCZ over the Atlantic and it produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  18. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models for composition, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The Model Composition Tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. The SBML Test Suite models will be a nice starting point for beginners, and, for more advanced purposes, users will also be able to access and employ models from the BioModels Database. PMID:24006914
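
    As a sketch of the first step any such tool performs, an SBML file can be parsed and inspected with libsbml; the file name is a placeholder.

    ```python
    # Read an SBML model and list its species ids with libsbml.
    import libsbml

    doc = libsbml.readSBML("model.xml")              # placeholder file name
    model = doc.getModel()
    print(model.getNumSpecies(), "species;", model.getNumReactions(), "reactions")
    for i in range(model.getNumSpecies()):
        print(" ", model.getSpecies(i).getId())
    ```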

  19. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  20. Mineralogic Model (MM3.0) Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Lum

    2002-02-12

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components

  1. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zones as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model-compartments is large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated
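
    The update step named above is the ensemble Kalman filter analysis. A minimal sketch of the stochastic EnKF analysis step, with synthetic numbers and a simple observation operator that picks off a few state elements (e.g., groundwater heads at observation wells):

      # Minimal sketch of a stochastic Ensemble Kalman Filter analysis step,
      # as used to correct simplified subsurface models with groundwater-level
      # observations. All numbers here are synthetic.
      import numpy as np

      rng = np.random.default_rng(0)
      n_state, n_ens, n_obs = 50, 40, 5

      # forecast ensemble of states (e.g., heads), one column per member
      X = rng.normal(10.0, 1.0, (n_state, n_ens))

      obs_idx = np.arange(0, n_state, n_state // n_obs)  # every 10th cell
      H = np.zeros((n_obs, n_state))
      H[np.arange(n_obs), obs_idx] = 1.0                 # observation operator
      R = 0.05 * np.eye(n_obs)                           # obs error covariance
      y = rng.normal(10.0, 1.0, n_obs)                   # observed levels

      A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
      PHt = A @ (H @ A).T / (n_ens - 1)                  # P H^T from ensemble
      K = PHt @ np.linalg.inv(H @ PHt + R)               # Kalman gain

      # stochastic EnKF: perturb observations per member, then update
      Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
      Xa = X + K @ (Y - H @ X)                           # analysis ensemble
      print("forecast spread:", X.std(), "analysis spread:", Xa.std())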

  2. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. Basically, in the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to control the existing errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study focuses on the ability of rainfall-runoff model parameters to cope with three common types of error in hydrological modelling: timing, shape and volume. The new lumped ERM model has been selected for this study to evaluate whether its parameters can be updated to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  3. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, e.g. in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
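
    A toy illustration of the central idea, embedding the error term in a parameter rather than adding it to the output so that structural constraints survive, might look as follows; this is schematic only, not the UQ Toolkit implementation:

      # Schematic contrast between (a) correcting model *outputs* with an
      # additive error term and (b) *embedding* the error term in a model
      # parameter, which keeps predictions consistent with the model's
      # structural constraints. Illustration only.
      import numpy as np

      rng = np.random.default_rng(1)

      def model(x, lam):
          # Toy "physics": output is guaranteed non-negative for lam >= 0
          return lam * np.exp(-x)

      x = np.linspace(0.0, 5.0, 50)
      lam_nominal = 2.0

      # (a) additive output correction can violate non-negativity
      eps = rng.normal(0.0, 0.5, size=(1000, x.size))
      y_out = model(x, lam_nominal) + eps
      print("negative predictions (output-corrected):", (y_out < 0).mean())

      # (b) embedded error: perturb the parameter, push it through the model
      lam_samples = np.abs(rng.normal(lam_nominal, 0.3, size=1000))
      y_emb = np.array([model(x, lam) for lam in lam_samples])
      print("negative predictions (embedded):", (y_emb < 0).mean())  # 0.0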

  4. TUNS/TCIS information model/process model

    NASA Technical Reports Server (NTRS)

    Wilson, James

    1992-01-01

    An Information Model comprises graphical and textual notation suitable for describing and defining the problem domain - in our case, TUNS or TCIS. The model focuses on the real world under study. It identifies what is in the problem and organizes the data into a formal structure for documentation and communication purposes. The Information Model is composed of an Entity Relationship Diagram (ERD) and a Data Dictionary component. The combination of these components provides an easy to understand methodology for expressing the entities in the problem space, the relationships between entities and the characteristics (attributes) of the entities. This approach is the first step in information system development. The Information Model identifies the complete set of data elements processed by TUNS. This representation provides a conceptual view of TUNS from the perspective of entities, data, and relationships. The Information Model reflects the business practices and real-world entities that users must deal with.

  5. Modelling human skull growth: a validated computational model

    PubMed Central

    Marghoub, Arsalan; Johnson, David; Khonsari, Roman H.; Fagan, Michael J.; Moazen, Mehran

    2017-01-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions (n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. PMID:28566514

  6. Modelling human skull growth: a validated computational model.

    PubMed

    Libby, Joseph; Marghoub, Arsalan; Johnson, David; Khonsari, Roman H; Fagan, Michael J; Moazen, Mehran

    2017-05-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions ( n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. © 2017 The Author(s).

  7. A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle

    NASA Technical Reports Server (NTRS)

    Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine

    2016-01-01

    A range of proxy observations have recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P - E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to the preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a

  8. A Distributed Snow Evolution Modeling System (SnowModel)

    NASA Astrophysics Data System (ADS)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  9. Implementing multiresolution models and families of models: from entity-level simulation to desktop stochastic models and "repro" models

    NASA Astrophysics Data System (ADS)

    McEver, Jimmie; Davis, Paul K.; Bigelow, James H.

    2000-06-01

    We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools--especially for use with personal computers. We conclude with some lessons learned from our experience.

  10. The Relationships Between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    NASA Astrophysics Data System (ADS)

    Cardoso Mendonça, Paula Cristina; Justi, Rosária

    2013-09-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities were planned from the transposition of the main modelling stages that constitute the 'Model of Modelling Diagram' so that students could experience each of such stages. All the lessons were video recorded and their transcriptions supported the elaboration of case studies for each group of students. From the analysis of the case studies, we identified argumentative situations when students performed all of the modelling stages. Our data show that the argumentative situations were related to sense making, articulating and persuasion purposes, and were closely related to the generation of explanations in the modelling processes. They also show that representations are important resources for argumentation. Our results are consistent with some of those already reported in the literature regarding the relationship between modelling and argumentation, but are also divergent when they show that argumentation is not only related to the model evaluation phase.

  11. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically, because it is not simple to interpret exactly why a model is producing the results it does and to identify which model assumptions are key, as such models combine models of many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework—the multi-assumption architecture and testbed (MAAT)—that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser; PecAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration to enhance our predictive understanding of biological systems.
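
    A minimal sketch of the combinatorial ensemble generation that a framework like MAAT automates might look as follows; the process representations and their toy equations below are hypothetical placeholders:

      # Minimal sketch of multi-assumption ensemble generation: enumerate
      # every combination of alternative process representations and run the
      # resulting model variants. Process names/equations are hypothetical.
      from itertools import product

      def photosynthesis_farquhar(env): return 0.9 * env["par"]
      def photosynthesis_collatz(env):  return 0.8 * env["par"]
      def stomata_ball_berry(env):      return 0.10 * env["rh"]
      def stomata_medlyn(env):          return 0.12 * env["rh"]

      alternatives = {
          "photosynthesis": [photosynthesis_farquhar, photosynthesis_collatz],
          "stomata":        [stomata_ball_berry, stomata_medlyn],
      }

      env = {"par": 1500.0, "rh": 0.6}
      for combo in product(*alternatives.values()):
          variant = dict(zip(alternatives.keys(), combo))
          # toy coupling of the two processes into one model output
          output = variant["photosynthesis"](env) * variant["stomata"](env)
          print([f.__name__ for f in combo], "->", round(output, 2))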

  12. `Models of' versus `Models for'. Toward an Agent-Based Conception of Modeling in the Science Classroom

    NASA Astrophysics Data System (ADS)

    Gouvea, Julia; Passmore, Cynthia

    2017-03-01

    The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic—models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.

  13. A Model for Math Modeling

    ERIC Educational Resources Information Center

    Lin, Tony; Erfan, Sasan

    2016-01-01

    Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…

  14. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model for which the effective sample size is much smaller. Also when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  15. Using the Model Coupling Toolkit to couple earth system models

    USGS Publications Warehouse

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
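
    The coupling pattern described here, two models stepping independently and exchanging fields only at predetermined synchronization intervals, can be sketched in serial form as below; the real MCT additionally handles the distributed-memory transfer and regridding between the M- and N-processor decompositions, which this toy omits:

      # Serial sketch of the coupling pattern: each model sees a stale copy
      # of the other's field, refreshed only at synchronization points.
      # The step functions and numbers are illustrative placeholders.
      import numpy as np

      def step_ocean(sst, wind, dt):
          return sst + dt * (0.01 * wind - 0.001 * sst)

      def step_atmosphere(wind, sst, dt):
          return wind + dt * (0.02 * (sst - 15.0) - 0.005 * wind)

      sst, wind = np.full(10, 15.0), np.full(10, 5.0)
      sst_seen_by_atm, wind_seen_by_ocean = sst.copy(), wind.copy()
      dt, sync_interval, n_steps = 60.0, 10, 100

      for k in range(n_steps):
          sst = step_ocean(sst, wind_seen_by_ocean, dt)       # model "1"
          wind = step_atmosphere(wind, sst_seen_by_atm, dt)   # model "2"
          if (k + 1) % sync_interval == 0:
              # synchronization point: exchange fields (MCT's job in the
              # distributed case, including interpolation between grids)
              sst_seen_by_atm = sst.copy()
              wind_seen_by_ocean = wind.copy()

      print("final mean SST:", sst.mean())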

  16. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    PubMed

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
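
    A minimal sketch of the cdf-inverse-cdf construction, with an empirical cdf standing in for the paper's nonparametric Bayesian marginal and an AR(1) standing in for the normal-theory internal dynamics:

      # Gaussian copula transformed AR(1): map a non-Gaussian series to
      # normal scores, fit AR dynamics there, then map simulations back
      # through the marginal. Toy data built by the same construction.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n, phi_true = 2000, 0.7

      # toy data: gamma-marginal series with Gaussian-copula AR(1) dynamics
      z0 = np.zeros(n)
      e = rng.normal(0.0, np.sqrt(1 - phi_true**2), n)
      for t in range(1, n):
          z0[t] = phi_true * z0[t - 1] + e[t]
      y = stats.gamma.ppf(stats.norm.cdf(z0), a=2.0)

      # 1) cdf transform to uniforms, then inverse normal cdf (normal scores)
      u = stats.rankdata(y) / (n + 1)
      z = stats.norm.ppf(u)

      # 2) normal-theory dynamics: AR(1) coefficient from lag-1 autocorr
      phi = np.corrcoef(z[:-1], z[1:])[0, 1]
      print("recovered AR(1) coefficient:", round(phi, 3))  # ~0.7

      # 3) simulate in Gaussian space, back-transform via marginal quantiles
      z_sim = np.zeros(n)
      e2 = rng.normal(0.0, np.sqrt(1 - phi**2), n)
      for t in range(1, n):
          z_sim[t] = phi * z_sim[t - 1] + e2[t]
      y_sim = np.quantile(y, stats.norm.cdf(z_sim))  # inverse empirical cdf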

  17. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2017-06-01

    The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The

  18. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that the contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
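
    A minimal sketch of information-criterion-based model averaging of the kind discussed, with synthetic numbers throughout: weights are derived from per-model IC values, and the averaged predictive variance splits into a within-model (parametric) and a between-model (conceptual) term:

      # Information-criterion model averaging over alternative conceptual
      # models: IC values -> model weights -> combined predictive mean and
      # variance. All numbers are synthetic.
      import numpy as np

      ic = np.array([212.3, 214.1, 219.8, 225.0])   # e.g., AIC/BIC/KIC values
      delta = ic - ic.min()
      w = np.exp(-0.5 * delta)
      w /= w.sum()                                  # model weights

      mu = np.array([101.2, 100.7, 103.5, 99.8])    # per-model head (m)
      var = np.array([0.8, 1.1, 0.9, 1.3])          # per-model parametric var

      mean_avg = w @ mu
      # total variance = within-model (parametric) + between-model (conceptual)
      var_avg = w @ var + w @ (mu - mean_avg) ** 2

      print("weights:", np.round(w, 3))
      print(f"averaged prediction: {mean_avg:.2f} m, total var: {var_avg:.2f}")
      # The between-model term is often the larger contributor, matching the
      # study's finding that conceptual-model uncertainty dominates.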

  19. Climate Models

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.

    2012-01-01

    Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.

  20. Predicting category intuitiveness with the rational model, the simplicity model, and the generalized context model.

    PubMed

    Pothos, Emmanuel M; Bailey, Todd M

    2009-07-01

    Naïve observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported models of supervised categorization, the generalized context model (GCM). Considering different category assignments for a set of instances, the authors asked how well the GCM can predict the classification of each instance on the basis of all the other instances. The category assignment that results in the smallest prediction error is interpreted as the most intuitive for the GCM; the authors refer to this way of applying the GCM as "unsupervised GCM." The authors systematically compared predictions of category intuitiveness from the unsupervised GCM and two models of unsupervised categorization: the simplicity model and the rational model. The unsupervised GCM compared favorably with the simplicity model and the rational model. This success of the unsupervised GCM illustrates that the distinction between supervised and unsupervised categorization may need to be reconsidered. However, no model emerged as clearly superior, indicating that there is more work to be done in understanding and modeling category intuitiveness.
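
    The leave-one-out scoring idea can be sketched with a simplified exemplar model (exponential similarity only, without the GCM's attention weights or response scaling); grouping labels and data below are synthetic:

      # "Unsupervised GCM" sketch: score each candidate category assignment
      # by how poorly a GCM-like exemplar model predicts each item from all
      # the others (leave-one-out); lowest error = most intuitive grouping.
      import numpy as np

      def loo_error(X, labels, c=1.0):
          err = 0.0
          for i in range(len(X)):
              d = np.linalg.norm(X - X[i], axis=1)   # distances to all items
              s = np.exp(-c * d)                     # exponential similarity
              s[i] = 0.0                             # leave item i out
              same = s[labels == labels[i]].sum()
              err += 1.0 - same / s.sum()            # 1 - P(correct category)
          return err / len(X)

      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(3, 0.5, (10, 2))])
      intuitive = np.repeat([0, 1], 10)              # matches the two clusters
      arbitrary = np.tile([0, 1], 10)                # ignores the clusters

      print("intuitive grouping error:", round(loo_error(X, intuitive), 3))
      print("arbitrary grouping error:", round(loo_error(X, arbitrary), 3))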

  1. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model called the decision-path model that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Tree (CART), a population decision tree, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach for predictive modeling that can perform better than a population approach. PMID:26098570

  2. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
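
    For reference, the IEM model relaxes each particle's scalar toward the local mean at a rate set by the mixing frequency. A minimal particle-level sketch, with a fixed mixing frequency standing in for the paper's Lagrangian dissipation-based estimate:

      # IEM (interaction-by-exchange-with-the-mean) mixing for notional
      # particles: dphi/dt = -(c_phi/2) * omega * (phi - <phi>).
      # A constant omega stands in for the dissipation-based frequency.
      import numpy as np

      rng = np.random.default_rng(4)
      phi = rng.choice([0.0, 1.0], size=1000)  # bimodal mixture fraction
      c_phi, omega, dt = 2.0, 50.0, 1e-4       # constant, mixing freq. [1/s]

      for _ in range(500):
          mean = phi.mean()
          phi += -0.5 * c_phi * omega * (phi - mean) * dt

      # The mean is conserved; the variance decays as exp(-c_phi * omega * t).
      print("mean:", round(phi.mean(), 3), "variance:", round(phi.var(), 4))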

  3. Modelling MIZ dynamics in a global model

    NASA Astrophysics Data System (ADS)

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with greater fidelity. Current models are not fully fit for the purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin-scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  4. Variable selection and model choice in geoadditive regression models.

    PubMed

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
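
    The boosting algorithm's variable-selection mechanism can be sketched with componentwise base learners; the simple univariate linear learners below stand in for the penalized splines and tensor products of the geoadditive model:

      # Componentwise boosting sketch: at each iteration, fit every candidate
      # component to the current residuals, update only the best one, and let
      # unselected covariates drop out of the final model.
      import numpy as np

      rng = np.random.default_rng(5)
      n, p = 200, 6
      X = rng.normal(size=(n, p))
      y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.5, n)  # 2 relevant

      coef = np.zeros(p)
      nu = 0.1                                  # step length (shrinkage)
      for _ in range(300):
          resid = y - X @ coef
          # univariate least-squares fit of each component to the residuals
          fits = (X * resid[:, None]).sum(0) / (X ** 2).sum(0)
          sse = ((resid[:, None] - X * fits) ** 2).sum(0)
          j = sse.argmin()                      # best-fitting component
          coef[j] += nu * fits[j]               # update only that component

      # near-zero coefficients correspond to effectively deselected covariates
      print("selected coefficients:", np.round(coef, 2))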

  5. Modeling arson - An exercise in qualitative model building

    NASA Technical Reports Server (NTRS)

    Heineke, J. M.

    1975-01-01

    A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis. The fewer the restrictions, the wider is the class of agents to which the model is applicable, and accordingly more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.

  6. Air Quality Dispersion Modeling - Alternative Models

    EPA Pesticide Factsheets

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  7. Model Organisms and Traditional Chinese Medicine Syndrome Models

    PubMed Central

    Xu, Jin-Wen

    2013-01-01

    Traditional Chinese medicine (TCM) is an ancient medical system with a unique cultural background. Nowadays, more and more Western countries are accepting it due to its therapeutic efficacy. However, the safety and the precise pharmacological mechanisms of TCM are still uncertain. Due to the potential application of TCM in healthcare, it is necessary to construct a scientific evaluation system with TCM characteristics and benchmark the difference from the standard of Western medicine. Model organisms have played an important role in the understanding of basic biological processes. They are easier to study in certain research respects and can provide information relevant to other species. Despite the controversy over suitable syndrome animal models under TCM theoretical guidance, it is unquestionable that many model organisms should be used in studies of TCM modernization, which will bring modern scientific standards into mysterious ancient Chinese medicine. In this review, we aim to summarize the utilization of model organisms in the construction of TCM syndrome models and highlight the relevance of modern medicine to TCM syndrome animal models. This will serve as the foundation for further research on model organisms and for their application in TCM syndrome models. PMID:24381636

  8. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  9. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

    A simplified travel demand model, the Internal Volume Forecasting (IVF) model proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.

  10. Development of Ensemble Model Based Water Demand Forecasting Model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    In recent years, the Smart Water Grid (SWG) concept has emerged globally and gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, and this has led to various studies regarding energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups in view of modeling and predicting their behavior in time series. One considers embedded patterns such as seasonality, periodicity and trends, and the other is an autoregressive model that uses short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of the abovementioned models is that predictability of water demand at about sub-daily scales is limited because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the Bagging approach introduced by Breiman (1996), used to derive weighting factors corresponding to the individual models. The individual forecasting models used in this study are a linear regression model, polynomial regression, multivariate adaptive regression splines (MARS), and a support vector machine (SVM). The concepts are demonstrated through application to data observed at water plants at several locations in South Korea. Keywords: water demand, non-linear model, ensemble forecasting model, uncertainty. Acknowledgements This subject is supported by Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)"
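
    A minimal sketch of the weighting scheme, with generic scikit-learn regressors standing in for the four model classes named above, and a single validation split standing in for the bootstrap (Bagging) resampling:

      # Weighted multi-model ensemble: derive weights for individual
      # forecasting models from held-out errors and combine the forecasts.
      # Data are synthetic hourly demand with a daily cycle.
      import numpy as np
      from sklearn.linear_model import LinearRegression, Ridge
      from sklearn.svm import SVR

      rng = np.random.default_rng(6)
      t = np.arange(720, dtype=float)               # 30 days, hourly
      demand = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)
      X = np.column_stack([np.sin(2 * np.pi * t / 24),
                           np.cos(2 * np.pi * t / 24)])

      X_tr, X_val = X[:600], X[600:]
      y_tr, y_val = demand[:600], demand[600:]

      models = [LinearRegression(), Ridge(alpha=1.0), SVR(kernel="rbf", C=100.0)]
      preds, mse = [], []
      for m in models:
          m.fit(X_tr, y_tr)
          p = m.predict(X_val)
          preds.append(p)
          mse.append(np.mean((p - y_val) ** 2))

      w = 1.0 / np.array(mse)                       # inverse-error weights
      w /= w.sum()
      ensemble = w @ np.array(preds)
      print("weights:", np.round(w, 3),
            "| ensemble MSE:", round(float(np.mean((ensemble - y_val) ** 2)), 2))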

  11. OVERTURNING THE CASE FOR GRAVITATIONAL POWERING IN THE PROTOTYPICAL COOLING LYα NEBULA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Moire K. M.; Fynbo, Johan P. U.; Momcheva, Ivelina

    The Nilsson et al. Lyα nebula has often been cited as the most plausible example of an Lyα nebula powered by gravitational cooling. In this paper, we bring together new data from the Hubble Space Telescope and the Herschel Space Observatory as well as comparisons to recent theoretical simulations in order to revisit the questions of the local environment and most likely power source for the Lyα nebula. In contrast to previous results, we find that this Lyα nebula is associated with six nearby galaxies and an obscured AGN that is offset by ∼4″ ≈ 30 kpc from the Lyα peak. The local region is overdense relative to the field, by a factor of ∼10, and at low surface brightness levels the Lyα emission appears to encircle the position of the obscured AGN, highly suggestive of a physical association. At the same time, we confirm that there is no compact continuum source located within ∼2–3″ ≈ 15–23 kpc of the Lyα peak. Since the latest cold accretion simulations predict that the brightest Lyα emission will be coincident with a central growing galaxy, we conclude that this is actually a strong argument against, rather than for, the idea that the nebula is gravitationally powered. While we may be seeing gas within cosmic filaments, this gas is primarily being lit up, not by gravitational energy, but due to illumination from a nearby buried AGN.

  12. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response, including rotor blade dynamics and dynamic inflow, which was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  13. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical

  14. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    DTIC Science & Technology

    2015-11-04

    Coastal Inlets Research Program Particle Tracking Model (PTM) with Coastal Modeling System (CMS). The Particle Tracking Model (PTM) is a Lagrangian...currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave...and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development

  15. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect the hydrological conditions and will consequently yield misleading results if not taken into account properly. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature and precipitation, in the Monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature, and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature, and gamma distributions for precipitation. A comparison was made between C-Vine, D-Vine and Student t-copula by means of the observed and simulated spatial dependence structure to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performances. The copula models captured the patterns of spatial dependence structure between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. Important statistics of the observed data were closely approximated, except for maximum values of temperature and minimum values of minimum temperature. Probability density functions of simulated data closely follow those of the observed data for all variables. C- and D-Vines are better tools when it comes to modelling the dependence between variables; however, Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan

  16. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian Process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to the ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from ESM analysis, constrained model parameters controlling demographic
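
    A minimal sketch of the emulator approach, with a one-parameter toy model standing in for SIPNET/ED2: fit a Gaussian Process to a handful of expensive runs, then run Metropolis MCMC against the cheap emulator instead of the model itself:

      # Bayesian calibration via GP emulation: a few "expensive" model runs
      # train the emulator; MCMC then queries only the GP. Toy model and
      # numbers are illustrative, not PEcAn code.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(7)

      def expensive_model(theta):          # stand-in for an ecosystem model
          return np.sin(3 * theta) + theta

      theta_true, sigma = 0.6, 0.1
      y_obs = expensive_model(theta_true) + rng.normal(0, sigma)

      # design: a dozen expensive runs spanning the prior range [0, 1]
      design = np.linspace(0, 1, 12)[:, None]
      runs = expensive_model(design.ravel())
      gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-6)).fit(design, runs)

      def log_post(th):                    # flat prior on [0, 1]
          if not 0.0 <= th <= 1.0:
              return -np.inf
          mu = gp.predict(np.array([[th]]))[0]
          return -0.5 * ((y_obs - mu) / sigma) ** 2

      chain, th = [], 0.5                  # random-walk Metropolis on the GP
      lp = log_post(th)
      for _ in range(5000):
          prop = th + rng.normal(0, 0.05)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              th, lp = prop, lp_prop
          chain.append(th)

      print("posterior mean theta:", round(float(np.mean(chain[1000:])), 3))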

  17. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model when fitting densely sampled BRDF measurements, a refined statistical BRDF model suitable for multiple classes of space-target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with existing empirical models, it contains six simple parameters that can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets; the fitting errors of all materials were below 6%, much lower than those of the five-parameter model. The refined model was further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples over the upper hemisphere were given, clearly showing the strength of the optical scattering of different materials and demonstrating the refined model's ability to characterize materials.
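    The inversion step can be sketched as follows (hedged: the two-parameter lobe below is a generic stand-in, not the paper's six-parameter refined model, and scipy's differential evolution substitutes for the genetic algorithm):

      # Evolutionary inversion of a toy BRDF lobe against noisy "measurements".
      import numpy as np
      from scipy.optimize import differential_evolution

      def brdf_lobe(theta_r, k_s, m):      # lobe around the specular direction
          return k_s * np.exp(-(theta_r / m) ** 2)

      theta = np.linspace(-0.8, 0.8, 80)   # reflection angles (rad)
      rng = np.random.default_rng(1)
      measured = brdf_lobe(theta, 0.9, 0.25) + rng.normal(0, 0.01, theta.size)

      def fit_error(p):                    # relative RMS fitting error
          return np.sqrt(np.mean((brdf_lobe(theta, *p) - measured) ** 2)) / measured.max()

      res = differential_evolution(fit_error, bounds=[(0.0, 2.0), (0.05, 1.0)], seed=1)
      print(res.x, f"fitting error {100 * res.fun:.2f}%")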

  18. EIA model documentation: Petroleum market model of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-28

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level.

  19. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  20. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such waves to be resolved explicitly. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grids increasing from fewer than 2,000 to more than 2,500,000 grid points with 500 to 1,000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information about the sub-grid, cloud-scale processes that are poorly parameterized in climate models and numerical prediction models.

  1. AIR QUALITY MODELING OF AMMONIA: A REGIONAL MODELING PERSPECTIVE

    EPA Science Inventory

    The talk will address the status of modeling of ammonia from a regional modeling perspective, yet the observations and comments should have general applicability. The air quality modeling system components that are central to modeling ammonia will be noted and a perspective on ...

  2. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    NASA Astrophysics Data System (ADS)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting the reference evapotranspiration (ET0) process is highly essential for several agricultural and hydrological applications, especially for rural water resource systems, water use allocations, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small set of climatic inputs derived from the minimum and maximum air temperatures and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), integrated adaptive neuro-fuzzy inference systems with grid partitioning and subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The implemented monthly time scale data set was collected at the Antalya and Isparta stations, which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. The validation accuracy was assessed with multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R2), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the conducted models were highly practical and reliable for the investigated case studies. At the Antalya station, the performance of the GEP and GRNN models was better than that of the other investigated models, while the RBNN and ANFIS-SC models performed best at the Isparta station. Except for the MLP model, all the investigated models presented better performance accuracy than the HS and CHS empirical models when applied in a cross-station scenario. A cross-station scenario examination implies the
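    For reference, the Hargreaves-Samani benchmark used above needs only the temperature extremes and extraterrestrial radiation; a minimal implementation (with Ra supplied in equivalent mm/day, and illustrative inputs rather than station data) is:

      # Hargreaves-Samani reference evapotranspiration (mm/day); ra must be
      # given as extraterrestrial radiation in equivalent mm/day. The constant
      # 0.0023 is the coefficient that gets recalibrated in the CHS variant.
      def hargreaves_samani(t_max, t_min, ra, c=0.0023):
          t_mean = (t_max + t_min) / 2.0
          return c * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

      # Illustrative warm-month values, not data from the study:
      print(hargreaves_samani(t_max=34.0, t_min=21.0, ra=16.5))  # about 6.2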

  3. The medical model versus the just deserts model.

    PubMed

    Wolfgang, M E

    1988-01-01

    This paper traces the history of two models that have been influential in shaping modern views toward criminals. One of these two--the medical model--is based on the concept of rehabilitation, that is, treatment predicated on the attributes of the offender. The second of these two--the just deserts model--centers on retribution, that is, punishment deserved for the seriousness of the crime. Each model has been dominant in various periods of history.

  4. DRI Model of the U.S. Economy -- Model Documentation

    EIA Publications

    1993-01-01

    Provides documentation on the Data Resources, Inc. (DRI) Model of the U.S. Economy and the DRI Personal Computer Input/Output Model. It describes the theoretical basis, structure, and functions of both DRI models and contains brief descriptions of the models and their equations.

  5. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural networks (ANN) and neuro-fuzzy (NF) methods to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
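    The BIC-weighting step can be sketched generically (a hedged illustration with invented numbers, not the study's data): weights follow exp(-ΔBIC/2), and the total variance combines within- and between-model terms:

      # BIC-weighted model averaging with within- and between-model variance.
      # The BIC values, point estimates, and variances are invented numbers.
      import numpy as np

      bic = np.array([102.3, 104.1, 110.7])      # e.g. TS-FL, ANN, NF
      pred = np.array([3.2, 4.1, 3.6])           # each model's K estimate
      var_within = np.array([0.30, 0.25, 0.40])  # each model's own variance

      w = np.exp(-0.5 * (bic - bic.min()))
      w /= w.sum()                               # posterior model weights

      k_avg = np.sum(w * pred)                   # averaged estimate
      var_total = np.sum(w * var_within) + np.sum(w * (pred - k_avg) ** 2)
      print(w, k_avg, var_total)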

  6. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but users must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Users can also add "Tags" to their threads to facilitate later searches. The knowledge base comprises documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  7. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as on the need to define model applicability domains in chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.
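    One common way to define an applicability domain (a hedged sketch of the general idea, not necessarily the authors' exact procedure) is to flag query compounds whose descriptor-space distance to the k nearest training compounds exceeds a cutoff derived from the training set itself:

      # k-nearest-neighbor applicability domain: accept a query compound only
      # if its mean distance to the k nearest training compounds stays below
      # d_mean + z * d_std of the training set's own neighbor distances.
      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(7)
      train = rng.normal(size=(200, 10))   # descriptor matrix (illustrative)
      query = rng.normal(size=(5, 10))

      k, z = 3, 0.5
      nn = NearestNeighbors(n_neighbors=k + 1).fit(train)
      d_train = nn.kneighbors(train)[0][:, 1:].mean(axis=1)  # exclude self
      cutoff = d_train.mean() + z * d_train.std()

      d_query = nn.kneighbors(query, n_neighbors=k)[0].mean(axis=1)
      print(d_query <= cutoff)             # True -> prediction deemed reliable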

  8. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-12-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm/s and 8 mm/s, respectively, for the modeled velocity.

  9. Modelling Students' Construction of Energy Models in Physics.

    ERIC Educational Resources Information Center

    Devi, Roshni; And Others

    1996-01-01

    Examines students' construction of experimentation models for physics theories in energy storage, transformation, and transfer involving electricity and mechanics. Student problem-solving dialogs and artificial intelligence modeling of these processes are analyzed. Construction of models established relations between elements with linear causal…

  10. An empirical model to forecast solar wind velocity through statistical modeling

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Ridley, A. J.

    2013-12-01

    The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either historical observations, e.g. the persistence model, or instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of four models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. By comparing its performance against the four aforementioned models, it is found that the general persistence model outperforms the other four models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
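    The 'general persistence' construction can be sketched on synthetic data (a minimal illustration, not the study's WIND processing; the series, lead time, and rotation period are assumptions):

      # Regress future speed on the series mean (null), the current value
      # (persistence), and the value one solar rotation (~27 days) earlier.
      import numpy as np

      rng = np.random.default_rng(3)
      t_all = np.arange(5000)
      v = 400 + 80 * np.sin(2 * np.pi * t_all / 27) + rng.normal(0, 30, 5000)

      lead, rot = 3, 27                    # forecast lead, rotation (days)
      t = np.arange(rot, v.size - lead)
      X = np.column_stack([np.full(t.size, v.mean()),  # null model
                           v[t],                       # persistence
                           v[t - rot]])                # one rotation ago
      coef = np.linalg.lstsq(X, v[t + lead], rcond=None)[0]
      rmse = np.sqrt(np.mean((X @ coef - v[t + lead]) ** 2))
      print(coef, rmse)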

  11. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  12. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
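    The model-free side reduces to a simple temporal-difference update, in which the reward prediction error is exactly the expectancy violation described above (a minimal sketch; a model-based learner would add a learned transition model on top of this):

      # Model-free temporal-difference update: the reward prediction error is
      # the expectancy violation that updates the stored value.
      def td_update(q, reward, alpha=0.1):
          rpe = reward - q                 # reward prediction error (delta)
          return q + alpha * rpe, rpe

      q = 0.0
      for r in [1, 1, 0, 1]:               # illustrative reward sequence
          q, rpe = td_update(q, r)
          print(f"value={q:.3f}  prediction error={rpe:+.3f}")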

  13. Spin-Up and Tuning of the Global Carbon Cycle Model Inside the GISS ModelE2 GCM

    NASA Technical Reports Server (NTRS)

    Aleinov, Igor; Kiang, Nancy Y.; Romanou, Anastasia

    2015-01-01

    The planetary carbon cycle involves multiple phenomena acting at a variety of temporal and spatial scales. The typical times range from minutes for leaf stomata physiology to centuries for passive soil carbon pools and deep ocean layers. Finding a satisfactory equilibrium state therefore becomes a challenging and computationally expensive task. Here we present the spin-up processes for different configurations of the GISS Carbon Cycle model, from a model forced with MODIS-observed Leaf Area Index (LAI) and a prescribed ocean, to prognostic LAI, to a model fully coupled to the dynamic ocean and ocean biology. We investigate the time it takes the model to reach equilibrium and discuss ways to speed up this process. The NASA Goddard Institute for Space Studies General Circulation Model (GISS ModelE2) is currently equipped with all major algorithms necessary for the simulation of the global carbon cycle. The terrestrial part is represented by the Ent Terrestrial Biosphere Model (Ent TBM), which includes leaf biophysics, prognostic phenology, and a soil biogeochemistry module (based on the Carnegie-Ames-Stanford model). The ocean part is based on the NASA Ocean Biogeochemistry Model (NOBM). The transport of atmospheric CO2 is performed by the atmospheric part of ModelE2, which employs a quadratic upstream algorithm for this purpose.

  14. Culturicon model: A new model for cultural-based emoticon

    NASA Astrophysics Data System (ADS)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention, and improve communication among speakers of different native languages. However, despite these benefits, the study of emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model, and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  15. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against the other models, along with the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required for the evaluation of each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.

  16. On temporal stochastic modeling of precipitation, nesting models across scales

    NASA Astrophysics Data System (ADS)

    Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2014-01-01

    We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited range of temporal scales, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical multiplicative random cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that, compared to single models, model combinations improve the performance in key statistics such as probability distributions of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, while remaining reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight on the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
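    The disaggregation step can be illustrated with a minimal microcanonical cascade (a hedged sketch; the beta-distributed splitting weight is an assumed choice): each branching splits an interval's depth exactly into two parts, so mass is conserved at every level:

      # Microcanonical multiplicative random cascade: split each interval's
      # rain depth into w*d and (1-w)*d, halving the time step per level.
      import numpy as np

      def cascade(depths, levels, rng, a=2.0):
          for _ in range(levels):
              w = rng.beta(a, a, size=depths.size)
              depths = np.column_stack([w * depths, (1 - w) * depths]).ravel()
          return depths

      rng = np.random.default_rng(11)
      daily = np.array([12.0, 0.0, 5.5])            # mm per day
      fine = cascade(daily, levels=5, rng=rng)      # 32 sub-intervals per day
      print(fine.sum(), daily.sum())                # totals match exactly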

  17. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information coefficient (AIC), Bayesian information coefficient (BIC), deviance information coefficient (DIC), pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…

  18. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    PubMed

    Gomes, Anna; van der Wijk, Lars; Proost, Johannes H; Sinha, Bhanu; Touw, Daan J

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise using the existing intensive-care model in clinical practice to avoid

  19. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    PubMed Central

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise using the existing intensive-care model in clinical practice to avoid

  20. Traffic-Related Air Pollution and Dementia Incidence in Northern Sweden: A Longitudinal Study

    PubMed Central

    Oudin, Anna; Forsberg, Bertil; Adolfsson, Annelie Nordin; Lind, Nina; Modig, Lars; Nordin, Maria; Nordin, Steven; Adolfsson, Rolf; Nilsson, Lars-Göran

    2015-01-01

    Background Exposure to ambient air pollution is suspected to cause cognitive effects, but a prospective cohort is needed to study exposure to air pollution at the home address and the incidence of dementia. Objectives We aimed to assess the association between long-term exposure to traffic-related air pollution and dementia incidence in a major city in northern Sweden. Methods Data on dementia incidence over a 15-year period were obtained from the longitudinal Betula study. Traffic air pollution exposure was assessed using a land-use regression model with a spatial resolution of 50 m × 50 m. Annual mean nitrogen oxide levels at the residential address of the participants at baseline (the start of follow-up) were used as markers for long-term exposure to air pollution. Results Out of 1,806 participants at baseline, 191 were diagnosed with Alzheimer’s disease during follow-up, and 111 were diagnosed with vascular dementia. Participants in the group with the highest exposure were more likely than those in the group with the lowest exposure to be diagnosed with dementia (Alzheimer’s disease or vascular dementia), with a hazard ratio (HR) of 1.43 (95% CI: 0.998, 2.05 for the highest vs. the lowest quartile). The estimates were similar for Alzheimer’s disease (HR 1.38) and vascular dementia (HR 1.47). The HR for dementia associated with the third quartile versus the lowest quartile was 1.48 (95% CI: 1.03, 2.11). A subanalysis that excluded a younger sample that had been retested after only 5 years of follow-up suggested stronger associations with exposure than were present in the full cohort (HR = 1.71; 95% CI: 1.08, 2.73 for the highest vs. the lowest quartile). Conclusions If the associations we observed are causal, then air pollution from traffic might be an important risk factor for vascular dementia and Alzheimer’s disease. Citation Oudin A, Forsberg B, Nordin Adolfsson A, Lind N, Modig L, Nordin M, Nordin S, Adolfsson R, Nilsson LG. 2016. Traffic

  1. In-Beam Studies of High-Spin States in Mercury -183 and MERCURY-181

    NASA Astrophysics Data System (ADS)

    Shi, Detang

    The high-spin states of ^{183}Hg were studied using the reaction ^{155}Gd(^{32}S, 4n)^{183}Hg at a beam energy of 160 MeV with the tandem-linac accelerator system and the multi-element gamma-ray detection array at Florida State University. Two new bands, consisting of stretched E2 transitions and connected by M1 inter-band transitions, were identified in ^{183}Hg. Several new levels were added to the previously known bands at higher spin. The spins and parities of the levels in ^{183}Hg were determined from the analysis of their DCO ratios and B(M1)/B(E2) ratios. While the two pairs of previously known bands in ^{183}Hg had been assigned the 7/2^-[514] and 9/2^+[624] configurations, the two new bands are assigned the 1/2^-[521] ground-state configuration based upon the systematics of Nilsson orbitals in this mass region. The 354-keV transition was previously considered to be an E2 transition and assigned as the only transition from a band built on an oblate deformed i_{13/2} isomeric state. However, our DCO ratio analysis indicates that the 354-keV gamma ray is an M1 transition. This changes the decay pattern of the 9/2^+[624] prolate structure in ^{183}Hg, which is now seen to feed only into the i_{13/2} isomer band head. Our knowledge of the mercury nuclei far from stability was then extended through an in-beam study of the reaction ^{144}Sm(^{40}Ar, 3n)^{181}Hg using the Fragment Mass Analyzer (FMA) and the ten Compton-suppressed germanium-detector system at Argonne National Laboratory. Band structures to high-spin states are established for the first time in ^{181}Hg in the present experiment. The observed level structure of ^{181}Hg is midway between those of ^{185}Hg and ^{183}Hg. The experimental results are analyzed in the framework of the cranking shell model (CSM). Alternative theoretical explanations are also presented and discussed. Systematics of neighboring mercury isotopes and N = 103 isotones are analyzed.

  2. Agent-based modeling: case study in cleavage furrow models

    PubMed Central

    Mogilner, Alex; Manhart, Angelika

    2016-01-01

    The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328

  3. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
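    The contrast between the two approximation types can be reproduced in a few lines (a hedged sketch: a thin-plate RBF interpolant stands in for kriging, and the test function is invented):

      # Quadratic least-squares fit vs. an interpolating RBF surrogate on a
      # function with multiple local extrema.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      f = lambda x: np.sin(3 * x) + 0.3 * x ** 2
      x = np.linspace(-2, 2, 15)
      y = f(x)

      quad = np.polynomial.Polynomial.fit(x, y, deg=2)   # least-squares poly
      rbf = RBFInterpolator(x[:, None], y)               # interpolates the data

      xt = np.linspace(-2, 2, 200)
      print("quadratic max error:", np.max(np.abs(quad(xt) - f(xt))))
      print("RBF max error:      ", np.max(np.abs(rbf(xt[:, None]) - f(xt))))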

  4. Groundwater modelling in conceptual hydrological models - introducing space

    NASA Astrophysics Data System (ADS)

    Boje, Søren; Skaugen, Thomas; Møen, Knut; Myrabø, Steinar

    2017-04-01

    The tiny Sæternbekken Minifelt (Muren) catchment (7500 m2) in Bærumsmarka, Norway, was, during the 1990s, densely instrumented with more than 100 observation points for measuring groundwater levels. The aim was to investigate the link between shallow groundwater dynamics and runoff. The DDD (Distance Distribution Dynamics) model is a newly developed rainfall-runoff model used operationally by the Norwegian flood-forecasting service at NVE. The model estimates the capacity of the subsurface reservoir at different levels of saturation and predicts overland flow. The subsurface in the DDD model has a 2-D representation that calculates the saturated and unsaturated soil moisture along a hillslope representing the entire catchment in question. The groundwater observations from more than two decades ago are used to verify the assumptions about the subsurface reservoir in the DDD model and to validate its spatial representation of that reservoir. The Muren catchment will, during 2017, be re-instrumented in order to continue the work of bridging the gap between conceptual hydrological models, with their typically lumped (zero-dimensional) representation of the subsurface, and models with more realistic two- or three-dimensional representations of the subsurface.

  5. Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum

    2011-01-01

    Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…

  6. BioModels: expanding horizons to include more modelling approaches and formats

    PubMed Central

    Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi

    2018-01-01

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614

  7. Is the Voter Model a Model for Voters?

    NASA Astrophysics Data System (ADS)

    Fernández-Gracia, Juan; Suchecki, Krzysztof; Ramasco, José J.; San Miguel, Maxi; Eguíluz, Víctor M.

    2014-04-01

    The voter model has been studied extensively as a paradigmatic opinion dynamics model. However, its ability to model real opinion dynamics has not been addressed. We introduce a noisy voter model (accounting for social influence) with recurrent mobility of agents (as a proxy for social context), where the spatial and population diversity are taken as inputs to the model. We show that the dynamics can be described as a noisy diffusive process that contains the proper anisotropic coupling topology given by population and mobility heterogeneity. The model captures statistical features of U.S. presidential elections, such as the stationary vote-share fluctuations across counties and the long-range spatial correlations that decay logarithmically with distance. Furthermore, it recovers the behavior of these properties when the geographical space is coarse-grained at different scales—from the county level through congressional districts, and up to states. Finally, we analyze the role of the mobility range and the randomness in decision making, which are consistent with the empirical observations.
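    The core update rule is simple enough to sketch (a minimal, fully mixed illustration without the paper's mobility and spatial structure; the noise rate is an assumption):

      # Noisy voter dynamics: copy a random agent's opinion, or with
      # probability eps pick an opinion at random (idiosyncratic choice).
      import numpy as np

      rng = np.random.default_rng(5)
      n, eps = 1000, 0.02
      state = rng.integers(0, 2, n)

      for _ in range(200000):
          i = rng.integers(n)
          if rng.random() < eps:
              state[i] = rng.integers(0, 2)      # noise
          else:
              state[i] = state[rng.integers(n)]  # imitation: social influence
      print("vote share:", state.mean())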

  8. Sequence modelling and an extensible data model for genomic database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Peter Wei-Der

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  9. Sequence modelling and an extensible data model for genomic database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Peter Wei-Der

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  10. Log-Multiplicative Association Models as Item Response Models

    ERIC Educational Resources Information Center

    Anderson, Carolyn J.; Yu, Hsiu-Ting

    2007-01-01

    Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…

  11. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  12. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996 (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  13. Dynamic Emulation Modelling (DEMo) of large physically-based environmental models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2012-12-01

    In environmental modelling, large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be satisfactorily solved. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between inputs and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships, but preserves the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the

  14. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

  15. Models of ovarian cancer metastasis: Murine models

    PubMed Central

    Šale, Sanja; Orsulic, Sandra

    2008-01-01

    Mice have mainly been used in ovarian cancer research as immunodeficient hosts for cell lines derived from the primary tumors and ascites of ovarian cancer patients. These xenograft models have provided a valuable system for pre-clinical trials, however, the genetic complexity of human tumors has precluded the understanding of key events that drive metastatic dissemination. Recently developed immunocompetent, genetically defined mouse models of epithelial ovarian cancer represent significant improvements in the modeling of metastatic disease. PMID:19337569

  16. Addressing Hydro-economic Modeling Limitations - A Limited Foresight Sacramento Valley Model and an Open-source Modeling Platform

    NASA Astrophysics Data System (ADS)

    Harou, J. J.; Hansen, K. M.

    2008-12-01

    Increased scarcity of world water resources is inevitable given the limited supply and increased human pressures. The idea that "some scarcity is optimal" must be accepted for rational resource use and infrastructure management decisions to be made. Hydro-economic systems models are unique in representing the overlap of economic drivers, socio-political forces and distributed water resource systems. They demonstrate the tangible benefits of cooperation and integrated flexible system management. Further improvement of models, quality control practices and software will be needed for these academic policy tools to become accepted into mainstream water resource practice. Promising features include: calibration methods, limited foresight optimization formulations, linked simulation-optimization approaches (e.g. embedding pre-existing calibrated simulation models), spatial groundwater models, stream-aquifer interactions and stream routing, etc. Conventional user-friendly decision support systems helped spread simulation models on a massive scale. Hydro-economic models must also find a means to facilitate construction, distribution and use. Some of these issues and model features are illustrated with a hydro-economic optimization model of the Sacramento Valley. Carry-over storage value functions are used to limit the hydrologic foresight of the multi-period optimization model. Pumping costs are included in the formulation by tracking the regional piezometric head of groundwater sub-basins. To help build and maintain this type of network model, an open-source water management modeling software platform is described and initial project work is discussed. The objective is to generically facilitate the connection of models, such as those developed in a modeling environment (GAMS, MatLab, Octave, ...), to a geographic user interface (drag-and-drop node-link network) and a database (topology, parameters and time series). These features aim to incrementally move hydro-economic models

  17. JEDI International Model | Jobs and Economic Development Impact Models |

    Science.gov Websites

    The Jobs and Economic Development Impacts (JEDI) International Model allows users to estimate economic development impacts from international

  18. Modeling of the Global Water Cycle - Analytical Models

    Treesearch

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of the coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions that are often difficult to interpret, analytical...

  19. Seven challenges for metapopulation models of epidemics, including households models.

    PubMed

    Ball, Frank; Britton, Tom; House, Thomas; Isham, Valerie; Mollison, Denis; Pellis, Lorenzo; Scalia Tomba, Gianpaolo

    2015-03-01

    This paper considers metapopulation models in the general sense, i.e. where the population is partitioned into sub-populations (groups, patches,...), irrespective of the biological interpretation they have, e.g. spatially segregated large sub-populations, small households or hosts themselves modelled as populations of pathogens. This framework has traditionally provided an attractive approach to incorporating more realistic contact structure into epidemic models, since it often preserves analytic tractability (in stochastic as well as deterministic models) but also captures the most salient structural inhomogeneity in contact patterns in many applied contexts. Despite the progress that has been made in both the theory and application of such metapopulation models, we present here several major challenges that remain for future work, focusing on models that, in contrast to agent-based ones, are amenable to mathematical analysis. The challenges range from clarifying the usefulness of systems of weakly-coupled large sub-populations in modelling the spread of specific diseases to developing a theory for endemic models with household structure. They include also developing inferential methods for data on the emerging phase of epidemics, extending metapopulation models to more complex forms of human social structure, developing metapopulation models to reflect spatial population structure, developing computationally efficient methods for calculating key epidemiological model quantities, and integrating within- and between-host dynamics in models.
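
    The weakly-coupled sub-population setting can be made concrete with a small sketch. The Python fragment below integrates a hypothetical two-patch SIR metapopulation in which a coupling parameter eps lets a small fraction of each patch's force of infection come from the other patch; all parameter values are illustrative, not taken from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, gamma, eps = 0.4, 0.2, 0.01   # transmission, recovery, coupling

      def deriv(t, y):
          S1, I1, R1, S2, I2, R2 = y
          f1 = beta * S1 * (I1 + eps * I2)   # force of infection, patch 1
          f2 = beta * S2 * (I2 + eps * I1)   # force of infection, patch 2
          return [-f1, f1 - gamma * I1, gamma * I1,
                  -f2, f2 - gamma * I2, gamma * I2]

      y0 = [0.99, 0.01, 0.0, 1.0, 0.0, 0.0]  # epidemic seeded in patch 1 only
      sol = solve_ivp(deriv, (0, 400), y0, dense_output=True)
      print(sol.y[4].max())  # peak prevalence reached in the second patch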

  20. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
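
    The GM(1,1) component of such a hybrid is compact enough to sketch. The Python fragment below (a non-authoritative illustration with invented data) fits the grey differential equation x0(k) + a*z1(k) = b on the accumulated series and forecasts; in the hybrid ARGM(1,1) idea, the GM(1,1) residuals would then be modeled with an ARIMA process and added back.

      import numpy as np

      def gm11_forecast(x0, horizon=3):
          # Classical GM(1,1): accumulate the series (AGO), fit a and b by
          # least squares on the background values, then forecast and
          # difference back to the original scale.
          x1 = np.cumsum(x0)
          z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          return np.diff(x1_hat, prepend=0.0)

      x0 = np.array([10.0, 12.1, 14.6, 17.5, 21.2])   # invented series
      print(gm11_forecast(x0))                        # fitted values + forecast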

  1. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
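
    One crude way to picture the Talagrand ratio: if the initial-condition contribution to forecast-error variance is assumed unchanged between the two lead times, the growth of total variance can be attributed to model error. The Python sketch below encodes only that simplified reading, not the paper's two formal definitions.

      # Illustrative only: attribute the growth of forecast-error variance
      # between two lead times to model error, assuming the initial-condition
      # contribution stays fixed (a strong simplifying assumption).
      def talagrand_ratio(var_lead1, var_lead2):
          var_model = var_lead2 - var_lead1   # assumed model-error share
          return var_model / var_lead2        # fraction due to model error

      print(talagrand_ratio(1.8, 2.5))        # e.g. ~0.28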

  2. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification

    PubMed Central

    Sager, Jennifer E.; Yu, Jingjing; Ragueneau-Majlessi, Isabelle

    2015-01-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling-related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, have not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms “PBPK” and “physiologically based pharmacokinetic model” to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines. PMID:26296709

  3. Bayesian Modeling of Exposure and Airflow Using Two-Zone Models

    PubMed Central

    Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy

    2009-01-01

    Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, but the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model along with the observed data. The framework is applied to simulated data as well as data obtained from the experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show close concordance with the true values, indicating that the two-zone model assumptions agree with reality to a large extent and the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. The approach illustrates how exposure models and information on model parameters together with the knowledge of
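
    For orientation, the equilibrium solutions of the standard two-zone model referred to above have a simple closed form: the far field sees the generation rate diluted by ventilation, and the near field adds an interzonal term. A minimal Python sketch with illustrative values (G in mg/min, Q and beta in m^3/min):

      # Steady-state two-zone (near-/far-field) model: G is the contaminant
      # generation rate, Q the ventilation rate, beta the interzonal airflow.
      def two_zone_equilibrium(G, Q, beta):
          c_far = G / Q               # far-field concentration (mg/m^3)
          c_near = c_far + G / beta   # near field adds the interzonal term
          return c_near, c_far

      print(two_zone_equilibrium(G=10.0, Q=20.0, beta=5.0))  # (2.5, 0.5)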

  4. Comparison of chiller models for use in model-based fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya; Haves, Philip

    Selecting the model is an essential step in model-based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools{trademark}, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.

  5. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative type of hybrid vector autoregressive model. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  6. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
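
    A minimal sketch of the LN cascade itself, with an arbitrary temporal filter and a threshold-linear static stage rather than the parameter-free forms derived in the paper:

      import numpy as np

      dt = 0.001                                   # time step (s)
      t = np.arange(0.0, 0.1, dt)
      D = (t / 0.01) * np.exp(-t / 0.01)           # example linear filter
      D /= D.sum() * dt                            # normalize to unit area

      rng = np.random.default_rng(0)
      stim = rng.standard_normal(5000)             # input current (a.u.)
      drive = np.convolve(stim, D)[:len(stim)] * dt
      rate = 40.0 * np.maximum(drive, 0.0)         # static nonlinearity (Hz)
      print(rate.mean())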

  7. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.

  8. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…
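
    The relative fit indices mentioned above are simple functions of the maximized log-likelihood, the number of free parameters k, and the sample size n; a short Python sketch with illustrative numbers (standard formulas, smaller is better):

      import numpy as np

      def aic(loglik, k):
          return -2.0 * loglik + 2.0 * k

      def bic(loglik, k, n):
          return -2.0 * loglik + k * np.log(n)

      def caic(loglik, k, n):            # "consistent" AIC
          return -2.0 * loglik + k * (np.log(n) + 1.0)

      ll, k, n = -1234.5, 20, 500        # illustrative values
      print(aic(ll, k), bic(ll, k, n), caic(ll, k, n))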

  9. IHY Modeling Support at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Chulaki, A.; Hesse, Michael; Kuznetsova, Masha; MacNeice, P.; Rastaetter, L.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities during the International Heliospheric Year. In order to tailor CCMC activities to IHY needs, we will also invite community input into our IHY planning activities.

  10. Energy modeling. Volume 2: Inventory and details of state energy models

    NASA Astrophysics Data System (ADS)

    Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.

    1981-05-01

    An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes such as: supply or demand of energy or of certain types of energy; emergency management of energy; and energy economics. Ten models are described. The purpose, use, and history of the model is discussed, and information is given on the outputs, inputs, and mathematical structure of the model. The models include five models dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.

  11. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.

  12. Modeling of batch sorber system: kinetic, mechanistic, and thermodynamic modeling

    NASA Astrophysics Data System (ADS)

    Mishra, Vishal

    2017-10-01

    The present investigation has dealt with the biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase. Various rate models were evaluated to elucidate the kinetics of copper and zinc biosorption, and the results indicated that the pseudo-second-order model was more appropriate than the pseudo-first-order model. The curve of the initial sorption rate versus the initial concentration of copper and zinc ions also complemented the results of the pseudo-second-order model. Models used for the mechanistic modeling were the intra-particle model of pore diffusion and Bangham's model of film diffusion. The results of the mechanistic modeling, together with the values of pore and film diffusivities, indicated that the preferential mode of the biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase was film diffusion. The results of the intra-particle model showed that the biosorption of the copper and zinc ions was not dominated by pore diffusion, which was due to macro-pores with open-void spaces present on the surface of egg-shell particles. The thermodynamic modeling confirmed that the sorption of copper and zinc was spontaneous and exothermic, with increased randomness at the solid-liquid interface.
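
    For readers unfamiliar with these rate models, the pseudo-second-order model is usually fit through its linearized form t/q(t) = 1/(k*qe^2) + t/qe, so a straight-line fit of t/q against t recovers the equilibrium uptake qe and rate constant k; a Python sketch with invented uptake data:

      import numpy as np

      t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0])  # contact time (min)
      q = np.array([1.8, 2.9, 4.1, 5.0, 5.3, 5.5])       # uptake (mg/g), invented

      slope, intercept = np.polyfit(t, t / q, 1)         # linearized PSO fit
      qe = 1.0 / slope                                   # equilibrium uptake
      k = 1.0 / (intercept * qe ** 2)                    # PSO rate constant
      h = k * qe ** 2                                    # initial sorption rate
      print(qe, k, h)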

  13. SWIFT MODELLER: a Java based GUI for molecular modeling.

    PubMed

    Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S

    2011-10-01

    MODELLER is command-line-based software that requires tedious formatting of inputs and the writing of Python scripts, which many users are not comfortable with. Also, the visualization of output becomes cumbersome due to verbose files. This makes the whole software protocol very complex and requires extensive study of MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting and data extraction processes and presents them in an interactive way, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed keeping homology modeling in mind, and their flow is a depiction of its steps. It eliminates the formatting of inputs, scripting processes and analysis of verbose output files through automation and makes pasting of the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI, which opens and displays the protein data bank files created by the MODELLER software. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required for the software through automation of many of the steps in the original software protocol, thus saving an enormous amount of time per instance and making MODELLER very easy to work with.

  14. Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation Methodologies; Appendix G, Matrix Generator Documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  15. Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread

    PubMed Central

    Miller, Joel C.; Volz, Erik M.

    2012-01-01

    We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242

  16. The Relationships between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    ERIC Educational Resources Information Center

    Mendonça, Paula Cristina Cardoso; Justi, Rosária

    2013-01-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities…

  17. Multiscale Modeling of Structurally-Graded Materials Using Discrete Dislocation Plasticity Models and Continuum Crystal Plasticity Models

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.

    2012-01-01

    A multiscale modeling methodology that combines the predictive capability of discrete dislocation plasticity and the computational efficiency of continuum crystal plasticity is developed. Single crystal configurations of different grain sizes modeled with periodic boundary conditions are analyzed using discrete dislocation plasticity (DD) to obtain grain size-dependent stress-strain predictions. These relationships are mapped into crystal plasticity parameters to develop a multiscale DD/CP model for continuum level simulations. A polycrystal model of a structurally-graded microstructure is developed, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and the DD predictions. The multiscale DD/CP model follows the DD predictions closely up to an initial peak stress and then follows a strain hardening path that is parallel but somewhat offset from the DD predictions. The difference is believed to be from a combination of the strain rate in the DD simulation and the inability of the DD/CP model to represent non-monotonic material response.

  18. Modeling influenza-like illnesses through composite compartmental models

    NASA Astrophysics Data System (ADS)

    Levy, Nir; Michael, Iv; Yom-Tov, Elad

    2018-03-01

    Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the populations, and we show that these match independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
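
    The overcomplete-dictionary step can be pictured with a small sketch: generate candidate SIR incidence curves over a grid of transmission rates, recovery rates, and outbreak start times, then recover non-negative mixture weights by non-negative least squares. This is a plausible stand-in for the paper's procedure, with invented parameters throughout.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import nnls

      T = 150  # length of the season (days)

      def sir_incidence(beta, gamma, start):
          def f(t, y):
              S, I = y
              return [-beta * S * I, beta * S * I - gamma * I]
          sol = solve_ivp(f, (0, T), [0.999, 0.001], t_eval=np.arange(T))
          inc = beta * sol.y[0] * sol.y[1]      # daily incidence curve
          out = np.zeros(T)
          out[start:] = inc[:T - start]         # delay the outbreak start
          return out

      atoms = np.column_stack([sir_incidence(b, g, s)
                               for b in (0.3, 0.4, 0.5)
                               for g in (0.15, 0.2)
                               for s in (0, 30, 60)])
      true_w = np.zeros(atoms.shape[1])
      true_w[[1, 10]] = [1.0, 0.6]              # two concurrent pathogens
      observed = atoms @ true_w
      weights, resid = nnls(atoms, observed)    # blind decomposition
      print(np.flatnonzero(weights > 0.05), resid)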

  19. Continuous system modeling

    NASA Technical Reports Server (NTRS)

    Cellier, Francois E.

    1991-01-01

    A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.

  20. Agent-based modeling and systems dynamics model reproduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Macal, C. M.

    2009-01-01

    Reproducibility is a pillar of the scientific endeavour. We view computer simulations as laboratories for electronic experimentation and therefore as tools for science. Recent studies have addressed model reproduction and found it to be surprisingly difficult to replicate published findings. There have been enough failed simulation replications to raise the question, 'can computer models be fully replicated?' This paper answers in the affirmative by reporting on a successful reproduction study using Mathematica, Repast and Swarm for the Beer Game supply chain model. The reproduction process was valuable because it demonstrated the original result's robustness across modelling methodologies and implementation environments.

  1. National Transonic Facility model and model support vibration problems

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Popernack, Thomas G., Jr.; Gloss, Blair B.

    1990-01-01

    Vibrations of models and model support system were encountered during testing in the National Transonic Facility. Model support system yaw plane vibrations have resulted in model strain gage balance design load limits being reached. These high levels of vibrations resulted in limited aerodynamic testing for several wind tunnel models. The yaw vibration problem was the subject of an intensive experimental and analytical investigation which identified the primary source of the yaw excitation and resulted in attenuation of the yaw oscillations to acceptable levels. This paper presents the principal results of analyses and experimental investigation of the yaw plane vibration problems. Also, an overview of plans for development and installation of a permanent model system dynamic and aeroelastic response measurement and monitoring system for the National Transonic Facility is presented.

  2. An improved interfacial bonding model for material interface modeling

    PubMed Central

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. A study of the path dependence of the work of separation indicates that the transformation of separation work is smooth in the normal and tangential directions and that the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Ultimately, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343
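
    As a hedged illustration of a potential-derived interfacial law (an exponential, Xu-Needleman-type form, not the authors' exact potential), the traction below is attractive in separation (d > 0) and increasingly repulsive in compression (d < 0):

      import numpy as np

      phi_n, dn = 1.0, 0.05   # work of separation and characteristic length

      def normal_traction(d):
          # T(d) = dphi/dd for phi(d) = phi_n * (1 - (1 + d/dn) * exp(-d/dn)):
          # positive (attractive) for d > 0, negative (repulsive) for d < 0.
          return (phi_n / dn) * (d / dn) * np.exp(-d / dn)

      d = np.linspace(-0.02, 0.3, 7)   # separations across the interface
      print(np.round(normal_traction(d), 3))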

  3. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial

  4. Coupled atmosphere-biophysics-hydrology models for environmental modeling

    USGS Publications Warehouse

    Walko, R.L.; Band, L.E.; Baron, Jill S.; Kittel, T.G.F.; Lammers, R.; Lee, T.J.; Ojima, D.; Pielke, R.A.; Taylor, C.; Tague, C.; Tremback, C.J.; Vidale, P.L.

    2000-01-01

    The formulation and implementation of LEAF-2, the Land Ecosystem–Atmosphere Feedback model, which comprises the representation of land–surface processes in the Regional Atmospheric Modeling System (RAMS), is described. LEAF-2 is a prognostic model for the temperature and water content of soil, snow cover, vegetation, and canopy air, and includes turbulent and radiative exchanges between these components and with the atmosphere. Subdivision of a RAMS surface grid cell into multiple areas of distinct land-use types is allowed, with each subgrid area, or patch, containing its own LEAF-2 model, and each patch interacts with the overlying atmospheric column with a weight proportional to its fractional area in the grid cell. A description is also given of TOPMODEL, a land hydrology model that represents surface and subsurface downslope lateral transport of groundwater. Details of the incorporation of a modified form of TOPMODEL into LEAF-2 are presented. Sensitivity tests of the coupled system are presented that demonstrate the potential importance of the patch representation and of lateral water transport in idealized model simulations. Independent studies that have applied LEAF-2 and verified its performance against observational data are cited. Linkage of RAMS and TOPMODEL through LEAF-2 creates a modeling system that can be used to explore the coupled atmosphere–biophysical–hydrologic response to altered climate forcing at local watershed and regional basin scales.

  5. Modeling fractal cities using the correlated percolation model.

    NASA Astrophysics Data System (ADS)

    Makse, Hernán A.; Havlin, Shlomo; Stanley, H. Eugene

    1996-03-01

    Cities grow in a way that might be expected to resemble the growth of two-dimensional aggregates of particles, and this has led to recent attempts to model urban growth using ideas from the statistical physics of clusters. In particular, the model of diffusion limited aggregation (DLA) has been invoked to rationalize the apparently fractal nature of urban morphologies (M. Batty and P. Longley, Fractal Cities, Academic, San Diego, 1994). The DLA model predicts that there should exist only one large fractal cluster, which is almost perfectly screened from incoming 'development units' (representing, for example, people, capital or resources), so that almost all of the cluster growth takes place at the tips of the cluster's branches. We show that an alternative model (H. A. Makse, S. Havlin, H. E. Stanley, Nature 377, 608 (1995)), in which development units are correlated rather than being added to the cluster at random, is better able to reproduce the observed morphology of cities and the area distribution of sub-clusters ('towns') in an urban system, and can also describe urban growth dynamics. Our physical model, which corresponds to the correlated percolation model in the presence of a density gradient, is motivated by the fact that in urban areas development attracts further development. The model offers the possibility of predicting the global properties (such as scaling behavior) of urban morphologies.

  6. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    ERIC Educational Resources Information Center

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  7. Phoenix model

    EPA Science Inventory

    Phoenix (formerly referred to as the Second Generation Model or SGM) is a global general equilibrium model designed to analyze energy-economy-climate related questions and policy implications in the medium- to long-term. This model disaggregates the global economy into 26 industr...

  8. BioModels Database: a repository of mathematical models of biological processes.

    PubMed

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML, and available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.

  9. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) the model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing by uncovering models' metadata through BMI functions. After a BMI-enabled model is exposed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interfaces using BMI as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. By using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014
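
    A minimal sketch of what a BMI-enabled component looks like from the framework's side; the real CSDMS BMI specifies many more functions (grids, units, time metadata), and the linear-reservoir model here is invented for illustration:

      class LinearReservoirBMI:
          """Toy component exposing a few BMI-style functions."""

          def initialize(self, config_file=None):
              self.storage, self.k, self.t, self.dt = 1.0, 0.1, 0.0, 1.0

          def update(self):
              self.storage -= self.k * self.storage * self.dt  # linear outflow
              self.t += self.dt

          def finalize(self):
              pass

          def get_component_name(self):
              return "LinearReservoir"

          def get_output_var_names(self):
              return ("reservoir_storage",)

          def get_value(self, name):
              assert name == "reservoir_storage"
              return self.storage

      model = LinearReservoirBMI()
      model.initialize()
      for _ in range(10):
          model.update()
      print(model.get_component_name(), model.get_value("reservoir_storage"))
      model.finalize()

    Exposing such a component as a web service then amounts to wrapping these calls behind HTTP endpoints, which is essentially what the serviced models described above do.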

  10. Orbital Debris Modeling

    NASA Technical Reports Server (NTRS)

    Liou, J. C.

    2012-01-01

    Presentation outline: (1) The NASA Orbital Debris (OD) Engineering Model -- A mathematical model capable of predicting OD impact risks for the ISS and other critical space assets (2) The NASA OD Evolutionary Model -- A physical model capable of predicting future debris environment based on user-specified scenarios (3) The NASA Standard Satellite Breakup Model -- A model describing the outcome of a satellite breakup (explosion or collision)

  11. Combining abiotic and biotic models - Hydraulical modeling to fill the gap between catchment and hydro-dynamic models

    NASA Astrophysics Data System (ADS)

    Guse, B.; Sulc, D.; Schmalz, B.; Fohrer, N.

    2012-04-01

    The European Water Framework Directive (WFD) requires a catchment-based approach, which is assessed in the IMPACT project by combining abiotic and biotic models. The core point of IMPACT is a model chain (catchment model -> 1-D-hydraulic model -> 3-D-hydro-morphodynamic model -> biotic habitat model) with the aim of estimating the occurrence of the target species of the WFD. Firstly, the model chain is developed for the current land use and climate conditions. Secondly, land use and climate change scenarios are developed at the catchment scale. The outputs of the catchment model for the scenarios are used as input for the next models within the model chain to estimate the effect of these changes on the target species. The eco-hydrological catchment model SWAT is applied for the Treene catchment in Northern Germany and delivers discharge and water quality parameters as a spatially explicit output for each subbasin. No water level information is given by SWAT. However, water level values are needed as the lower boundary condition for the hydro-dynamic and habitat models which are applied for the 300 m candidate reference reach. In order to fill the gap between the catchment and the hydro-morphodynamic model, the 1-D hydraulic model HEC-RAS is applied for a 3 km long reach transect from the next upstream hydrological station to the upper bound of the candidate study reach. The channel geometry for HEC-RAS was estimated based on 96 cross-sections which were measured in the IMPACT project. By using available discharge and water level measurements from the hydrological station and our own flow velocity measurements, the channel resistance was estimated. HEC-RAS was run with different statistical indices (mean annual drought, mean discharge, …) for steady flow conditions. The rating curve was then constructed for the target cross-section, i.e. the lower bound of the candidate study reach, to enable the coupling with the hydro- and morphodynamic models. These statistical
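
    To illustrate what the constructed rating curve encodes (HEC-RAS solves the full steady-flow profile; the sketch below uses only Manning's equation for an assumed rectangular channel, with invented roughness, slope, and width):

      import numpy as np

      n, S, width = 0.035, 0.0005, 12.0   # roughness, slope, width (assumed)

      def manning_discharge(depth):
          area = width * depth
          radius = area / (width + 2.0 * depth)   # hydraulic radius
          return area * radius ** (2.0 / 3.0) * np.sqrt(S) / n

      for h in (0.25, 0.5, 1.0, 1.5):             # stages (m)
          print(f"stage {h:4.2f} m -> Q = {manning_discharge(h):6.2f} m^3/s")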

  12. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  13. Model verification of large structural systems. [space shuttle model response

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1978-01-01

    A computer program for the application of parameter identification on the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.

  14. Modeling Operations Other Than War: Non-Combatants in Combat Modeling

    DTIC Science & Technology

    1994-09-01

    supposition that non-combatants are an essential feature in OOTW. The model proposal includes a methodology for civilian unit decision making. The model also includes...A numerical example demonstrated that the model appeared to perform in an acceptable manner, in that it produced output within a reasonable range. During the

  15. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, James C.

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  16. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
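
    A minimal sketch of the regression approach, using the Thiem steady-state drawdown solution with invented observations; the Jacobian at the optimum also yields the approximate confidence limits listed among the benefits above.

      import numpy as np
      from scipy.optimize import least_squares

      Q = 0.01                                    # pumping rate (m^3/s), invented
      r_obs = np.array([5.0, 10.0, 20.0, 50.0, 100.0])      # radii (m)
      s_obs = np.array([1.95, 1.52, 1.07, 0.49, 0.05])      # drawdowns (m)

      def residuals(p):
          T, R = p   # transmissivity and radius of influence
          return Q / (2.0 * np.pi * T) * np.log(R / r_obs) - s_obs

      fit = least_squares(residuals, x0=[1e-3, 150.0])
      T_hat, R_hat = fit.x
      sigma2 = (fit.fun @ fit.fun) / (len(s_obs) - 2)       # residual variance
      cov = np.linalg.inv(fit.jac.T @ fit.jac) * sigma2     # approx. covariance
      print(T_hat, R_hat, np.sqrt(np.diag(cov)))            # estimates, std errors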

  17. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    NASA Astrophysics Data System (ADS)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally

  18. Pharmacokinetic modeling in aquatic animals. 1. Models and concepts

    USGS Publications Warehouse

    Barron, M.G.; Stehly, Guy R.; Hayton, W.L.

    1990-01-01

    While clinical and toxicological applications of pharmacokinetics have continued to evolve both conceptually and experimentally, pharmacokinetic modeling in aquatic animals has not progressed accordingly. In this paper we present methods and concepts of pharmacokinetic modeling in aquatic animals using multicompartmental, clearance-based, non-compartmental and physiologically-based pharmacokinetic models. These models should be considered as alternatives to traditional approaches, which assume that the animal acts as a single homogeneous compartment based on apparent monoexponential elimination.

  19. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
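
    The basic BOOT idea (without the double-bootstrap extension) can be sketched in a few lines: on each bootstrap resample, select the best-AIC candidate model and record its PM coefficient, then average across resamples; everything below is synthetic and schematic.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 365
      pm = rng.gamma(4.0, 5.0, n)                   # synthetic PM series
      temp = rng.normal(20.0, 6.0, n)               # synthetic confounder
      y = 50.0 + 0.02 * pm + 0.1 * temp + rng.normal(0.0, 3.0, n)

      designs = [np.column_stack([np.ones(n), pm]),            # model 1
                 np.column_stack([np.ones(n), pm, temp])]      # model 2

      def fit_ols(X, y):
          beta, rss = np.linalg.lstsq(X, y, rcond=None)[:2]
          aic = n * np.log(rss[0] / n) + 2 * X.shape[1]
          return beta[1], aic       # PM coefficient and model AIC

      estimates = []
      for _ in range(200):
          idx = rng.integers(0, n, n)                          # resample rows
          fits = [fit_ols(X[idx], y[idx]) for X in designs]
          estimates.append(min(fits, key=lambda f: f[1])[0])   # best-AIC model
      print(np.mean(estimates))     # bootstrap model-averaged PM effect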

  20. A Generic Modeling Process to Support Functional Fault Model Development

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  1. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    NASA Technical Reports Server (NTRS)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  2. REGIONAL PARTICULATE MODEL - 1. MODEL DESCRIPTION AND PRELIMINARY RESULTS

    EPA Science Inventory

    The gas-phase chemistry and transport mechanisms of the Regional Acid Deposition Model have been modified to create the Regional Particulate Model, a three-dimensional Eulerian model that simulates the chemistry, transport, and dynamics of sulfuric acid aerosol resulting from pri...

  3. Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis

    DTIC Science & Technology

    2017-02-01

    Working Paper: Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis. Paul K. Davis, RAND National Security Research...paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might be better called model-exercise-model in some cases) for...to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning

  4. WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING

    EPA Science Inventory

    A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  5. Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)

    NASA Astrophysics Data System (ADS)

    Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David

    2018-01-01

    Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life. The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet, whether from emission, scattering, or in transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation. The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or “checking in” new model simulations that are in accordance with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of

  6. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    NASA Technical Reports Server (NTRS)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.

  7. Modelling Complex Fenestration Systems using physical and virtual models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanachareonkit, Anothai; Scartezzini, Jean-Louis

    2010-04-15

    Physical or virtual models are commonly used to visualize the conceptual ideas of architects, lighting designers and researchers; they are also employed to assess the daylighting performance of buildings, particularly in cases where Complex Fenestration Systems (CFS) are considered. Recent studies have however revealed a general tendency of physical models to over-estimate this performance, compared to that of real buildings; these discrepancies can be attributed to several reasons. In order to identify the main error sources, a series of comparisons between a real building (a single office room within a test module) and the corresponding physical and virtual models was undertaken. The physical model was placed in outdoor conditions, which were strictly identical to those of the real building, as well as underneath a scanning sky simulator. The virtual model simulations were carried out by way of the Radiance program using the GenSky function; an alternative evaluation method, named the Partial Daylight Factor method (PDF method), was also employed with the physical model together with sky luminance distributions acquired by a digital sky scanner during the monitoring of the real building. The overall daylighting performance of the physical and virtual models was assessed and compared. The causes of discrepancies between the daylighting performance of the real building and the models were analysed. The main identified sources of errors are the reproduction of building details, the CFS modelling and the mocking-up of the geometrical and photometrical properties. To study the impact of these errors on daylighting performance assessment, computer simulation models created using the Radiance program were also used to carry out a sensitivity analysis of modelling errors. The study of the models showed that large discrepancies can occur in daylighting performance assessment. In case of improper mocking-up of the glazing for instance, relative divergences of 25

  8. The Trimeric Model: A New Model of Periodontal Treatment Planning

    PubMed Central

    Tarakji, Bassel

    2014-01-01

    Treatment of periodontal disease is a complex and multidisciplinary procedure, requiring periodontal, surgical, restorative, and orthodontic treatment modalities. Several authors have attempted to formulate models for periodontal treatment that order the treatment steps in a logical and easy-to-remember manner. In this article, we discuss two models of periodontal treatment planning from two of the most well-known textbooks in the specialty of periodontics internationally, and then modify them to arrive at a new model of periodontal treatment planning, the Trimeric Model. Adding restorative and orthodontic interrelationships with periodontal treatment allows us to expand this model into the Extended Trimeric Model of periodontal treatment planning. These models will provide a logical framework and a clear order for the treatment of periodontal disease for general practitioners and periodontists alike. PMID:25177662

  9. Modelling total solar irradiance using a flux transport model

    NASA Astrophysics Data System (ADS)

    Dasi Espuig, Maria; Jiang, Jie; Krivova, Natalie; Solanki, Sami

    2014-05-01

    Reconstructions of solar irradiance into the past are of considerable interest for studies of solar influence on climate. Models based on the assumption that irradiance changes are caused by the evolution of the photospheric magnetic field have been the most successful in reproducing the measured irradiance variations. Our SATIRE-S model is one of these. It uses solar full-disc magnetograms as an input, and these are available for less than four decades. Thus, to reconstruct the irradiance back to times when no observed magnetograms are available, we combine the SATIRE-S model with synthetic magnetograms produced using a surface flux transport model. The model is fed with daily records of sunspot positions, areas, and tilt angles, either observed or statistically modelled. To describe the secular change in the irradiance, we used the concept of overlapping ephemeral region cycles. With this technique, TSI can be reconstructed back to 1700.

  10. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time each TEAMS modeler spends manually preparing reports for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and to generate an input/output report covering all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  11. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  12. Evaluating Conceptual Site Models with Multicomponent Reactive Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Heffner, D.; Price, V.; Temples, T. J.; Nicholson, T. J.

    2005-05-01

    Modeling ground-water flow and multicomponent reactive chemical transport is a useful approach for testing conceptual site models and assessing the design of monitoring networks. A graded approach with three conceptual site models is presented here with a field case of tetrachloroethene (PCE) transport and biodegradation near Charleston, SC. The first model assumed a one-layer homogeneous aquifer structure with semi-infinite boundary conditions, in which an analytical solution of the reactive solute transport can be obtained with BIOCHLOR (Aziz et al., 1999). Due to the over-simplification of the aquifer structure, this simulation cannot reproduce the monitoring data. In the second approach we used GMS to develop the conceptual site model, a layer-cake multi-aquifer system, and applied a numerical module (MODFLOW and RT3D within GMS) to solve the flow and reactive transport problem. The results were better than those of the first approach but still did not fit the plume well because the geological structures were still inadequately defined. In the third approach we developed a complex conceptual site model by interpreting log and seismic survey data with Petra and PetraSeis. We detected a major channel and a younger channel through the PCE source area. These channels control the local ground-water flow direction and provide a preferential chemical transport pathway. Results using the third conceptual site model agree well with the monitoring concentration data. This study confirms that the bias and uncertainty from inadequate conceptual models are much larger than those introduced from an inadequate choice of model parameter values (Neuman and Wierenga, 2003; Meyer et al., 2004). Numerical modeling in this case provides key insight into the hydrogeology and geochemistry of the field site for predicting contaminant transport in the future. Finally, critical monitoring points and performance indicator parameters are selected for future monitoring to confirm system

  13. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  14. Selected aspects of modelling monetary transmission mechanism by BVAR model

    NASA Astrophysics Data System (ADS)

    Vaněk, Tomáš; Dobešová, Anna; Hampel, David

    2013-10-01

    In this paper we use the BVAR model with a specifically defined prior to evaluate data including high-lag dependencies. The results are compared to both a restricted and a common VAR model. The data describe the monetary transmission mechanism in the Czech Republic and Slovakia from January 2002 to February 2013. The results point to the inadequacy of the common VAR model. The restricted VAR model and the BVAR model appear to be similar in the sense of impulse responses.
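
    The BVAR prior used in the paper is not reproduced here, but the common (unrestricted) VAR baseline it is compared against can be sketched with statsmodels; the three series below are simulated stand-ins for monetary-transmission variables, and the impulse responses are what one would inspect in such a study.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(1)

        # Simulated stand-ins for monthly series (e.g., policy rate, inflation, output gap)
        n, k = 160, 3
        data = np.zeros((n, k))
        eps = rng.normal(scale=0.5, size=(n, k))
        for t in range(1, n):
            data[t] = 0.5 * data[t - 1] + eps[t]   # mild persistence
        df = pd.DataFrame(data, columns=["rate", "inflation", "output_gap"])

        model = VAR(df)
        results = model.fit(2)         # or model.fit(maxlags=12, ic="aic")
        irf = results.irf(24)          # impulse responses over 24 periods
        print(irf.irfs.shape)          # (25, 3, 3): horizon x response x impulse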

  15. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    PubMed Central

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232

  16. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    PubMed

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.
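
    As a small illustration of the empirical starting point mentioned above, the sketch below implements the classic non-rectangular hyperbola light-response curve for net leaf photosynthesis; the parameter values are hypothetical, and the more mechanistic biochemical models the review points toward would replace this empirical form.

        import numpy as np

        def nrh_photosynthesis(I, alpha=0.05, Amax=25.0, theta=0.7, Rd=1.0):
            """Non-rectangular hyperbola light response of net photosynthesis.

            I: irradiance; alpha: quantum yield; Amax: light-saturated rate;
            theta: curvature; Rd: dark respiration (all values hypothetical).
            Solves theta*A**2 - (alpha*I + Amax)*A + alpha*I*Amax = 0 (lower root).
            """
            b = alpha * I + Amax
            A_gross = (b - np.sqrt(b**2 - 4.0 * theta * alpha * I * Amax)) / (2.0 * theta)
            return A_gross - Rd

        I = np.linspace(0.0, 2000.0, 9)   # umol photons m-2 s-1
        print(nrh_photosynthesis(I).round(2))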

  17. EPA EXPOSURE MODELS LIBRARY AND INTEGRATED MODEL EVALUATION SYSTEM

    EPA Science Inventory

    The third edition of the U.S. Environmental Protection Agency's (EPA) EML/IMES (Exposure Models Library and Integrated Model Evaluation System) on CD-ROM is now available. The purpose of the disc is to provide a compact and efficient means to distribute exposure models, documentat...

  18. Modelling the Shuttle Remote Manipulator System: Another flexible model

    NASA Technical Reports Server (NTRS)

    Barhorst, Alan A.

    1993-01-01

    High fidelity elastic system modeling algorithms are discussed. The particular system studied is the Space Shuttle Remote Manipulator System (RMS) undergoing full articulated motion. The model incorporates flexibility via a methodology the author has been developing. The technique is based in variational principles, so rigorous boundary condition generation and weak formulations for the associated partial differential equations are realized, yet the analyst need not integrate by parts. The methodology is formulated using vector-dyad notation with minimal use of tensor notation; therefore, the technique is believed to be accessible to practicing engineers. The objectives of this work are as follows: (1) determine the efficacy of the modeling method; and (2) determine if the method affords an analyst advantages in the overall modeling and simulation task. Generated out of necessity were Mathematica algorithms that quasi-automate the modeling procedure and simulation development. The project was divided into sections as follows: (1) model development of a simplified manipulator; (2) model development of the full-freedom RMS including a flexible movable base on a six degree of freedom orbiter (a rigid body is attached to the manipulator end-effector); (3) simulation development for item 2; and (4) comparison to the currently used model of the flexible RMS in the Structures and Mechanics Division of NASA JSC. At the time of the writing of this report, items 3 and 4 above were not complete.

  19. Two-Stage Bayesian Model Averaging in Endogenous Variable Models*

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  20. Generic magnetohydrodynamic model at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Honkonen, I. J.; Rastaetter, L.; Glocer, A.

    2016-12-01

    The Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center is a multi-agency partnership to enable, support and perform research and development for next-generation space science and space weather models. CCMC currently hosts nearly 100 numerical models, and a cornerstone of this activity is the Runs on Request (RoR) system, which allows anyone to request a model run and analyse/visualize the results via a web browser. CCMC is also active in the education community by organizing student research contests, heliophysics summer schools, and space weather forecaster training for students, government and industry representatives. Recently a generic magnetohydrodynamic (MHD) model was added to the CCMC RoR system, which allows the study of a variety of fluid and plasma phenomena in one, two and three dimensions using a dynamic point-and-click web interface. For example, students can experiment with the physics of fundamental wave modes of hydrodynamic and MHD theory, and with the behavior of discontinuities and shocks as well as instabilities such as Kelvin-Helmholtz. Students can also use the model to experiment with the numerical effects of models, i.e., how the process of discretizing a system of equations and solving them on a computer changes the solution. This can provide valuable background understanding, e.g., for space weather forecasters, of the effects of model resolution, numerical resistivity, etc., on the prediction.

  1. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  2. Constraints Modeling in FRBR Data Model Using OCL

    NASA Astrophysics Data System (ADS)

    Rudić, Gordana

    2011-09-01

    The transformation of the conceptual FRBR data model to a class diagram in UML 2.0 notation is given. The class diagram is formed using the MagicDraw CASE tool. The paper presents a class diagram for the first group of FRBR entities, i.e., classes (the products of intellectual or artistic endeavour). It is demonstrated how to model constraints on relationships between classes in the FRBR object data model using OCL 2.0.

  3. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.

  4. The CAFE model: A net production model for global ocean phytoplankton

    NASA Astrophysics Data System (ADS)

    Silsbe, Greg M.; Behrenfeld, Michael J.; Halsey, Kimberly H.; Milligan, Allen J.; Westberry, Toby K.

    2016-12-01

    The Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) net primary production model is an adaptable framework for advancing global ocean productivity assessments by exploiting state-of-the-art satellite ocean color analyses and addressing key physiological and ecological attributes of phytoplankton. Here we present the first implementation of the CAFE model that incorporates inherent optical properties derived from ocean color measurements into a mechanistic and accurate model of phytoplankton growth rates (μ) and net phytoplankton production (NPP). The CAFE model calculates NPP as the product of energy absorption (QPAR) and the efficiency (ϕμ) with which absorbed energy is converted into carbon biomass (CPhyto), while μ is calculated as NPP normalized to CPhyto. The CAFE model performance is evaluated alongside 21 other NPP models against a spatially robust and globally representative set of direct NPP measurements. This analysis demonstrates that the CAFE model explains the greatest amount of variance and has the lowest model bias relative to other NPP models analyzed with this data set. Global oceanic NPP from the CAFE model (52 Pg C yr-1) and mean division rates (0.34 day-1) are derived from climatological satellite data (2002-2014). This manuscript discusses and validates individual CAFE model parameters (e.g., QPAR and ϕμ), provides detailed sensitivity analyses, and compares the CAFE model results and parameterization to other widely cited models.
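
    The abstract's two defining relations are simple to state in code: NPP is the product of absorbed energy and a conversion efficiency, and the division rate is NPP normalized to phytoplankton carbon. The numbers below are hypothetical per-pixel values with schematic units; the actual CAFE model derives QPAR and ϕμ from satellite inherent optical properties, which is not attempted here.

        # Hypothetical per-pixel values; schematic units only.
        Q_par = 1.5e3     # energy absorbed by phytoplankton
        phi_mu = 0.02     # efficiency of converting absorbed energy to carbon
        C_phyto = 30.0    # phytoplankton carbon biomass (mg C m-3)

        NPP = Q_par * phi_mu      # net primary production (schematically, mg C m-3 d-1)
        mu = NPP / C_phyto        # implied growth (division) rate (1/d)
        print(NPP, mu)            # 30.0, 1.0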

  5. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application to groundwater modeling of these methods, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
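
    Of the three categories, the data-driven approach is the simplest to sketch: run the expensive model at a modest number of sampled parameter sets, then fit a fast emulator of the input-output map. The sketch below uses a radial basis function interpolant from SciPy and a cheap stand-in for the "expensive" groundwater model; it is illustrative only.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(2)

        def expensive_model(x):
            """Stand-in for a slow model: maps two parameters to one output."""
            return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 1]

        # Design of experiments: one full-model run per sampled parameter set
        X_train = rng.uniform(0.0, 1.0, size=(60, 2))
        y_train = expensive_model(X_train)

        # Data-driven surrogate: an RBF emulator of the input-output mapping
        surrogate = RBFInterpolator(X_train, y_train)

        X_test = rng.uniform(0.0, 1.0, size=(5, 2))
        print(surrogate(X_test).round(3))          # fast approximate predictions
        print(expensive_model(X_test).round(3))    # reference full-model values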

  6. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    PubMed

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from ordering a book online until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers, and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  7. Modeling Ni-Cd performance. Planned alterations to the Goddard battery model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1986-01-01

    The Goddard Space Flight Center (GSFC) currently has a preliminary computer model to simulate Nickel Cadmium (Ni-Cd) battery performance. The basic methodology of the model was described in the paper entitled Fundamental Algorithms of the Goddard Battery Model. At present, the model is undergoing alterations to increase its efficiency, accuracy, and generality. A review of the present battery model is given, and the planned changes to the model are described.

  8. A Test of Maxwell's Z Model Using Inverse Modeling

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Schultz, P. H.; Heineck, T.

    2003-01-01

    In modeling impact craters a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerical impact craters with vertical incidence angles, leading to two main inferences. First, the flowfield center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center via inverse modeling.

  9. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
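
    At the screening-tool end of this spectrum, closed-form solutions exist for simple transport geometries. The sketch below evaluates the steady-state solution of one-dimensional advection-dispersion with first-order biodegradation, a building block of BIOCHLOR-style screening calculations (not their exact implementation); the parameter values and units are hypothetical.

        import numpy as np

        def steady_plume(x, C0=10.0, v=0.1, D=0.05, k=0.01):
            """Steady-state 1D advection-dispersion with first-order decay.

            Solves v*dC/dx = D*d2C/dx2 - k*C with C(0) = C0 and C -> 0
            far downgradient, giving
                C(x) = C0 * exp(x * (v - sqrt(v**2 + 4*k*D)) / (2*D)).
            Units are schematic (e.g., m, m/d, m2/d, 1/d); values hypothetical.
            """
            return C0 * np.exp(x * (v - np.sqrt(v**2 + 4.0 * k * D)) / (2.0 * D))

        x = np.linspace(0.0, 200.0, 5)
        print(steady_plume(x).round(4))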

  10. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
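
    The contrast at the heart of this approach can be shown in a few lines: an aggregate stock with a proportional outflow versus a population of agents who each leave with some per-step probability. The sketch below is not the authors' STEM workforce model; the rates are hypothetical, and per-agent heterogeneity is where the agent-based version starts to diverge from the aggregate one.

        import numpy as np

        rng = np.random.default_rng(3)
        steps, rate = 50, 0.05          # per-step exit rate from the workforce

        # System dynamics view: one aggregate stock with a proportional outflow
        stock = 1000.0
        for _ in range(steps):
            stock -= rate * stock

        # Agent-based view: independent exits; heterogeneity is easy to add
        agents = np.ones(1000, dtype=bool)       # True = still in the workforce
        exit_prob = np.full(1000, rate)          # could vary per agent
        for _ in range(steps):
            agents &= ~(rng.random(1000) < exit_prob)

        print(stock, agents.sum())               # similar aggregates, richer detail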

  11. Forest-fire models

    Treesearch

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  12. SPITFIRE within the MPI Earth system model: Model development and evaluation

    NASA Astrophysics Data System (ADS)

    Lasslop, Gitta; Thonicke, Kirsten; Kloster, Silvia

    2014-09-01

    Quantification of the role of fire within the Earth system requires an adequate representation of fire as a climate-controlled process within an Earth system model. To be able to address questions on the interaction between fire and the Earth system, we implemented the mechanistic fire model SPITFIRE in JSBACH, the land surface model of the MPI Earth system model. Here, we document the model implementation as well as model modifications. We evaluate our model results by comparing the simulation to the GFED version 3 satellite-based data set. In addition, we assess the sensitivity of the model to the meteorological forcing and to the spatial variability of a number of fire relevant model parameters. A first comparison of model results with burned area observations showed a strong correlation of the residuals with wind speed. Further analysis revealed that the response of the fire spread to wind speed was too strong for the application on global scale. Therefore, we developed an improved parametrization to account for this effect. The evaluation of the improved model shows that the model is able to capture the global gradients and the seasonality of burned area. Some areas of model-data mismatch can be explained by differences in vegetation cover compared to observations. We achieve benchmarking scores comparable to other state-of-the-art fire models. The global total burned area is sensitive to the meteorological forcing. Adjustment of parameters leads to similar model results for both forcing data sets with respect to spatial and seasonal patterns.

  13. Modeling Heterogeneous Variance-Covariance Components in Two-Level Models

    ERIC Educational Resources Information Center

    Leckie, George; French, Robert; Charlton, Chris; Browne, William

    2014-01-01

    Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…

  14. Measurement Model Specification Error in LISREL Structural Equation Models.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  15. Radiation Models

    ERIC Educational Resources Information Center

    James, W. G. G.

    1970-01-01

    Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)

  16. Agent-based modeling: case study in cleavage furrow models.

    PubMed

    Mogilner, Alex; Manhart, Angelika

    2016-11-07

    The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as "differential equation based" (DE) or "agent based" (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem-positioning of the cleavage furrow in dividing cells-to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. © 2016 Mogilner and Manhart. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  17. Communication system modeling

    NASA Technical Reports Server (NTRS)

    Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.

    1971-01-01

    This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.

  18. Modeling and control design of a wind tunnel model support

    NASA Technical Reports Server (NTRS)

    Howe, David A.

    1990-01-01

    The 12-Foot Pressure Wind Tunnel at Ames Research Center is being restored. A major part of the restoration is the complete redesign of the aircraft model supports and their associated control systems. An accurate trajectory control servo system capable of positioning a model (with no measurable overshoot) is needed. Extremely small errors in scaled-model pitch angle can increase airline fuel costs for the final aircraft configuration by millions of dollars. In order to make a mechanism sufficiently accurate in pitch, a detailed structural and control-system model must be created and then simulated on a digital computer. The model must contain linear representations of the mechanical system, including masses, springs, and damping in order to determine system modes. Electrical components, both analog and digital, linear and nonlinear must also be simulated. The model of the entire closed-loop system must then be tuned to control the modes of the flexible model-support structure. The development of a system model, the control modal analysis, and the control-system design are discussed.

  19. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    The Weibull regression model is one of the most popular parametric regression models in that it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature as compared to the semi-parametric proportional hazard model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge on the Weibull regression model and then illustrates how to fit the model with R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variables. The eha package provides an alternative way to fit the Weibull regression model. The check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualizing the Weibull regression model after model development provides another way to report the findings. PMID:28149846
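
    For readers working in Python rather than R, the same model can be fit by direct maximum likelihood; the sketch below simulates right-censored Weibull data in the AFT parameterization and maximizes the censored log-likelihood with SciPy. The data, parameter values, and optimizer settings are all illustrative, not a substitute for the R packages named above.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)

        # Simulated survival data: one binary covariate, Weibull AFT, right censoring
        n = 400
        x = rng.integers(0, 2, n)
        shape_true, b0_true, b1_true = 1.5, 3.0, -0.5
        T = np.exp(b0_true + b1_true * x) * rng.weibull(shape_true, n)
        C = rng.uniform(0.0, 60.0, n)               # censoring times
        time = np.minimum(T, C)
        event = (T <= C).astype(float)              # 1 = observed, 0 = censored

        def neg_loglik(params):
            log_k, b0, b1 = params
            k = np.exp(log_k)                       # shape > 0 via log transform
            lam = np.exp(b0 + b1 * x)               # per-subject AFT scale
            z = (time / lam) ** k
            log_f = np.log(k) - k * np.log(lam) + (k - 1.0) * np.log(time) - z
            return -np.sum(event * log_f + (1.0 - event) * (-z))  # log S = -z

        fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
        k_hat, b0_hat, b1_hat = np.exp(fit.x[0]), fit.x[1], fit.x[2]
        print(k_hat, b0_hat, b1_hat)
        print(np.exp(b1_hat))   # event time ratio (ETR) for x = 1 vs x = 0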

  20. Foraminifera Models to Interrogate Ostensible Proxy-Model Discrepancies During Late Pliocene

    NASA Astrophysics Data System (ADS)

    Jacobs, P.; Dowsett, H. J.; de Mutsert, K.

    2017-12-01

    Planktic foraminifera faunal assemblages have been used in the reconstruction of past oceanic states (e.g., the Last Glacial Maximum, the mid-Piacenzian Warm Period). However, these reconstruction efforts have typically relied on inverse modeling using transfer functions or the modern analog technique, which by design seek to translate foraminifera into one or two target oceanic variables, primarily sea surface temperature (SST). These reconstructed SST data have then been used to test the performance of climate models, and discrepancies have been attributed to shortcomings in climate model processes and/or boundary conditions. More recently, forward proxy models or proxy system models have been used to leverage the multivariate nature of proxy relationships to their environment, and to "bring models into proxy space". Here we construct ecological models of key planktic foraminifera taxa, calibrated and validated with World Ocean Atlas (WOA13) oceanographic data. Multiple modeling methods (e.g., multilayer perceptron neural networks, Mahalanobis distance, logistic regression, and maximum entropy) are investigated to ensure robust results. The resulting models are then driven by a Late Pliocene climate model simulation with biogeochemical as well as temperature variables. Similarities and differences with previous model-proxy comparisons (e.g., PlioMIP) are discussed.
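
    Of the methods listed, logistic regression is the most compact to sketch: calibrate a presence/absence model for a taxon on modern environmental predictors, then drive it with fields from a climate simulation. Everything below is simulated stand-in data, not WOA13 or the Pliocene run used in the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)

        # Simulated stand-ins for modern calibration data at sample sites
        n = 600
        sst = rng.uniform(-2.0, 30.0, n)         # sea surface temperature (deg C)
        chl = rng.lognormal(-1.0, 0.8, n)        # chlorophyll (mg m-3)

        # Hypothetical warm-water taxon: presence probability rises with SST
        logit = -6.0 + 0.35 * sst - 0.5 * chl
        present = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression().fit(np.column_stack([sst, chl]), present)

        # Drive the calibrated model with "paleo" predictor fields
        paleo = np.column_stack([rng.uniform(10.0, 32.0, 4),
                                 rng.lognormal(-1.0, 0.5, 4)])
        print(model.predict_proba(paleo)[:, 1].round(3))  # occurrence probabilities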

  1. Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective

    USGS Publications Warehouse

    Barker, Richard J.; Link, William A.

    2015-01-01

    Statistical inference begins with viewing data as realizations of stochastic processes. Mathematical models provide partial descriptions of these processes; inference is the process of using the data to obtain a more complete description of the stochastic processes. Wildlife and ecological scientists have become increasingly concerned with the conditional nature of model-based inference: what if the model is wrong? Over the last 2 decades, Akaike's Information Criterion (AIC) has been widely and increasingly used in wildlife statistics for 2 related purposes, first for model choice and second to quantify model uncertainty. We argue that for the second of these purposes, the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and provides the most easily communicated basis for model weighting. Moreover, Bayesian arguments provide the sole justification for interpreting model weights (including AIC weights) as coherent (mathematically self consistent) model probabilities. This interpretation requires treating the model as an exact description of the data-generating mechanism. We discuss the implications of this assumption, and conclude that more emphasis is needed on model checking to provide confidence in the quality of inference.
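
    The AIC weights discussed here follow a standard computation: rescale each model's AIC relative to the best model and normalize the resulting likelihoods. The AIC values below are hypothetical; the paper's point is about when such weights may, and may not, be interpreted as model probabilities.

        import numpy as np

        # Hypothetical AIC values for a candidate model set
        aic = np.array([1002.3, 1004.1, 1010.8, 1003.0])

        delta = aic - aic.min()          # AIC differences from the best model
        w = np.exp(-0.5 * delta)
        w /= w.sum()                     # Akaike weights (sum to 1)
        print(w.round(3))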

  2. Pseudo-Boltzmann model for modeling the junctionless transistors

    NASA Astrophysics Data System (ADS)

    Avila-Herrera, F.; Cerdeira, A.; Roldan, J. B.; Sánchez-Moreno, P.; Tienda-Luna, I. M.; Iñiguez, B.

    2014-05-01

    Calculation of the carrier concentrations in semiconductors using the Fermi-Dirac integral requires complex numerical calculations; in this context, practically all analytical device models are based on Boltzmann statistics, even though it is known that this leads to an over-estimation of carrier densities for high doping concentrations. In this paper, a new approximation to the Fermi-Dirac integral, called the Pseudo-Boltzmann model, is presented for modeling junctionless transistors with high doping concentrations.
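
    The over-estimation referred to above is easy to demonstrate numerically: the sketch below evaluates the order-1/2 Fermi-Dirac integral by quadrature (in the unnormalized convention, an assumption here) and compares it with its Boltzmann limit, which grows far too large once the reduced Fermi level becomes positive. The paper's Pseudo-Boltzmann closed form is not reproduced.

        import numpy as np
        from scipy.integrate import quad

        def fermi_dirac_half(eta):
            """F_{1/2}(eta) = int_0^inf sqrt(x) / (1 + exp(x - eta)) dx
            (unnormalized convention; the tail beyond x = 100 is negligible here)."""
            val, _ = quad(lambda x: np.sqrt(x) / (1.0 + np.exp(x - eta)), 0.0, 100.0)
            return val

        for eta in [-4.0, 0.0, 4.0, 8.0]:                # eta = (E_F - E_c) / kT
            fd = fermi_dirac_half(eta)
            boltz = 0.5 * np.sqrt(np.pi) * np.exp(eta)   # Boltzmann limit
            print(eta, round(fd, 4), round(boltz, 4), round(boltz / fd, 2))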

  3. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling approach preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
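
    A scalar toy version of the assembly makes the idea concrete: local first-order Taylor models are built at a few sampling states and blended with normalized Gaussian radial basis function weights. The function, sampling states, and kernel width below are hypothetical; the paper's aerodynamic implementation is far richer.

        import numpy as np

        def f(x):
            return np.sin(x) + 0.1 * x**2      # stand-in nonlinear full-order model

        def df(x):
            return np.cos(x) + 0.2 * x         # its derivative at sampling states

        # Local linearizations (first-order Taylor models) about sampling states
        x_s = np.linspace(-3.0, 3.0, 7)
        f0, g0 = f(x_s), df(x_s)

        def assembled_model(x, width=1.0):
            """Blend local linear models with normalized Gaussian RBF weights."""
            x = np.atleast_1d(x)[:, None]                 # shape (n, 1)
            w = np.exp(-((x - x_s) / width) ** 2)         # RBF weights, shape (n, m)
            w /= w.sum(axis=1, keepdims=True)             # partition of unity
            local = f0 + g0 * (x - x_s)                   # each local prediction
            return (w * local).sum(axis=1)

        x_test = np.linspace(-3.0, 3.0, 13)
        print(np.max(np.abs(assembled_model(x_test) - f(x_test))))  # blending error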

  4. Model selection for multi-component frailty models.

    PubMed

    Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert

    2007-11-20

    Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.

  5. Aerosol Modeling for the Global Model Initiative

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.

    2001-01-01

    The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.

  6. Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.

    PubMed

    Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer

    2017-06-01

    Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. CMIP5 Historical Simulations (1850-2012) with GISS ModelE2

    NASA Technical Reports Server (NTRS)

    Miller, Ronald Lindsay; Schmidt, Gavin A.; Nazarenko, Larissa S.; Tausnev, Nick; Bauer, Susanne E.; DelGenio, Anthony D.; Kelley, Max; Lo, Ken K.; Ruedy, Reto; Shindell, Drew T.; et al.

    2014-01-01

    Observations of climate change during the CMIP5 extended historical period (1850-2012) are compared to trends simulated by six versions of the NASA Goddard Institute for Space Studies ModelE2 Earth System Model. The six models are constructed from three versions of the ModelE2 atmospheric general circulation model, distinguished by their treatment of atmospheric composition and the aerosol indirect effect, combined with two ocean general circulation models, HYCOM and Russell. Forcings that perturb the model climate during the historical period are described. Five-member ensemble averages from each of the six versions of ModelE2 simulate trends of surface air temperature, atmospheric temperature, sea ice and ocean heat content that are in general agreement with observed trends, although simulated warming is slightly excessive within the past decade. Only simulations that include increasing concentrations of long-lived greenhouse gases match the warming observed during the twentieth century. Differences in twentieth-century warming among the six model versions can be attributed to differences in climate sensitivity, aerosol and ozone forcing, and heat uptake by the deep ocean. Coupled models with HYCOM export less heat to the deep ocean, associated with reduced surface warming in regions of deepwater formation, but greater warming elsewhere at high latitudes along with reduced sea ice. All ensembles show twentieth-century annular trends toward reduced surface pressure at southern high latitudes and a poleward shift of the midlatitude westerlies, consistent with observations.

  8. Building generic anatomical models using virtual model cutting and iterative registration.

    PubMed

    Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W

    2010-02-08

    Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting a sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is very flexible and easy to use, such that anyone can use image stacks to create models and retrieve a sub-region from them with ease. Java
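    The averaging-after-registration idea at the heart of steps (v) and (vi) can be sketched in a few lines. The following is an illustrative translation-only alignment, not the authors' pipeline; the array layout, the FFT cross-correlation shift estimate, and the simple intensity mean are all assumptions of the sketch.

```python
# Sketch: align 2D slices of several image stacks to a reference stack by
# integer translation, then average intensities to form a "generic" stack.
import numpy as np

def align_translation(ref, img):
    """Estimate an integer (dy, dx) shift of img relative to ref via FFT
    cross-correlation (translation only; a strong simplifying assumption)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy  # signed wrap
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

def average_stacks(stacks):
    """Register each stack to the first, slice by slice, then average."""
    ref_stack = stacks[0]
    aligned = [ref_stack.astype(float)]
    for stack in stacks[1:]:
        shifted = np.empty_like(stack, dtype=float)
        for k in range(stack.shape[0]):
            dy, dx = align_translation(ref_stack[k], stack[k])
            shifted[k] = np.roll(stack[k], (dy, dx), axis=(0, 1))
        aligned.append(shifted)
    return np.mean(aligned, axis=0)  # averaged stack with sharper boundaries
```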

  9. Human Exposure Modeling - Databases to Support Exposure Modeling

    EPA Pesticide Factsheets

    Human exposure modeling relates pollutant concentrations in the larger environmental media to pollutant concentrations in the immediate exposure media. The models described here are available on other EPA websites.

  10. A prototype computer-aided modelling tool for life-support system models

    NASA Technical Reports Server (NTRS)

    Preisig, H. A.; Lee, Tae-Yeong; Little, Frank

    1990-01-01

    Based on the canonical decomposition of physical-chemical-biological systems, a prototype kernel has been developed to efficiently model alternative life-support systems. It supports (1) the work of an interdisciplinary group through an easy-to-use, mostly graphical interface, (2) modularized object-oriented model representation, (3) reuse of models, (4) inheritance of structures from model object to model object, and (5) a model database. The kernel is implemented in Modula-2 and presently operates on an IBM PC.

  11. Coupling population dynamics with earth system models: the POPEM model.

    PubMed

    Navarro, Andrés; Moreno, Raúl; Jiménez-Alcázar, Alfonso; Tapiador, Francisco J

    2017-09-16

    Precise modeling of CO2 emissions is important for environmental research. This paper presents a new model of human population dynamics that can be embedded into ESMs (Earth System Models) to improve climate modeling. Through a system dynamics approach, we develop a cohort-component model that successfully simulates historical population dynamics with fine spatial resolution (about 1°×1°). The population projections are used to improve the estimates of CO2 emissions, thus transcending the bulk approach of existing models and allowing more realistic non-linear effects to feature in the simulations. The module, dubbed POPEM (from Population Parameterization for Earth Models), is compared with current emission inventories and validated against UN aggregated data. Finally, it is shown that the module can be used to advance toward fully coupling the social and natural components of the Earth system, an emerging research path for environmental science and pollution research.
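    A cohort-component model of the kind POPEM builds on advances a population vector one year at a time using survival and fertility schedules. The sketch below is a generic single-region, single-sex step, not POPEM itself; the array layout and the open-ended top age group are assumptions of the illustration.

```python
# Minimal cohort-component projection step: survivors age by one year,
# the top age group is open-ended, and births fill the age-0 cohort.
import numpy as np

def project_one_year(pop, survival, fertility):
    """pop[a]: persons aged a; survival[a]: P(survive a -> a+1);
    fertility[a]: births per person aged a during the year."""
    births = float(np.sum(fertility * pop))
    new_pop = np.zeros_like(pop)
    new_pop[1:] = pop[:-1] * survival[:-1]  # survivors move up one age
    new_pop[-1] += pop[-1] * survival[-1]   # open-ended top age group
    new_pop[0] = births                     # newborn cohort
    return new_pop
```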

  12. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    PubMed

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
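    The essence of the proposal can be illustrated with a single updating step: apply a forgetting factor, re-weight each model by its predictive likelihood for the new observation, and drop models that fall outside the window. This is a schematic sketch under assumed notation (forgetting factor alpha, window fraction c), not the authors' exact algorithm.

```python
# One step of dynamic model averaging with a dynamic Occam's window.
import numpy as np

def dma_step(weights, predictive_likelihoods, alpha=0.99, c=0.001):
    """weights: current model probabilities; predictive_likelihoods:
    p(y_t | model) for each candidate model; alpha: forgetting factor."""
    w = weights ** alpha
    w /= w.sum()                     # forgetting step flattens the weights
    w = w * predictive_likelihoods   # Bayesian update with the new datum
    w /= w.sum()
    keep = w >= c * w.max()          # dynamic Occam's window: prune the rest
    w = np.where(keep, w, 0.0)
    return w / w.sum()
```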

  13. Models for nearly every occasion: Part I - One box models.

    PubMed

    Hewett, Paul; Ganser, Gary H

    2017-01-01

    The standard "well mixed room," "one box" model cannot be used to predict occupational exposures whenever the scenario involves the use of local controls. New "constant emission" one box models are proposed that permit either local exhaust or local exhaust with filtered return, coupled with general room ventilation or the recirculation of a portion of the general room exhaust. New "two box" models are presented in Part II of this series. Both steady state and transient models were developed. The steady state equation for each model, including the standard one box steady state model, is augmented with an additional factor reflecting the fraction of time the substance was generated during each task. This addition allows the easy calculation of the average exposure for cyclic and irregular emission patterns, provided the starting and ending concentrations are zero or near zero, or the cumulative time across all tasks is long (e.g., several tasks to a full shift). The new models introduce additional variables, such as the efficiency of the local exhaust to immediately capture freshly generated contaminant and the filtration efficiency whenever filtered exhaust is returned to the workspace. Many of the model variables are knowable (e.g., room volume and ventilation rate). A structured procedure for calibrating a model to a work scenario is introduced that can be applied to both continuous and cyclic processes. The "calibration" procedure generates estimates of the generation rate and all of the remaining unknown model variables.
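    For orientation, a minimal steady-state version of such a model can be written down directly from a mass balance. The form below, with a local-exhaust capture efficiency and a task-time fraction, is my own illustrative simplification, not the published equations.

```python
# Sketch of a one-box steady-state exposure estimate with local exhaust.
def one_box_average(G, Q, eps_L=0.0, f=1.0):
    """G: generation rate (mg/min); Q: general ventilation (m3/min);
    eps_L: local-exhaust capture efficiency (0-1); f: fraction of the
    shift during which the substance is generated."""
    # Steady state of V*dC/dt = G*(1 - eps_L) - Q*C, i.e. emission that
    # escapes the hood balanced against general ventilation removal.
    C_ss = G * (1.0 - eps_L) / Q
    return f * C_ss  # time-weighted average, per the task-fraction factor
```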

  14. The reservoir model: a differential equation model of psychological regulation.

    PubMed

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
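    The reservoir idea reduces to a one-dimensional differential equation: the level rises with inputs and drains in proportion to the current level. Below is a minimal Euler-integration sketch, assuming the form dL/dt = input(t) - k*L (an illustration of the concept, not the authors' latent differential equation estimator).

```python
# Discrete simulation of a "reservoir" construct: inputs add up,
# dissipation drains the level at rate k.
def simulate_reservoir(inputs, k, dt=1.0, level0=0.0):
    """inputs: iterable of per-step input magnitudes (e.g., daily
    stressors); k: dissipation (coping) rate; returns the level path."""
    levels, level = [], level0
    for u in inputs:
        level += dt * (u - k * level)  # Euler step of dL/dt = u - k*L
        levels.append(level)
    return levels
```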

  15. The Reservoir Model: A Differential Equation Model of Psychological Regulation

    PubMed Central

    Deboeck, Pascal R.; Bergeman, C. S.

    2017-01-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might “add up” over time (e.g., life stressors, inputs), but individuals simultaneously take action to “blow off steam” (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the “height” (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. PMID:23527605

  16. Contam airflow models of three large buildings: Model descriptions and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Douglas R.; Price, Phillip N.

    2009-09-30

    Airflow and pollutant transport models are useful for several reasons, including protection from or response to biological terrorism. In recent years they have been used for deciding how many biological agent samplers are needed in a given building to detect the release of an agent; to figure out where those samplers should be located; to predict the number of people at risk in the event of a release of a given size and location; to devise response strategies in the event of a release; to determine optimal trade-offs between sampler characteristics (such as detection limit and response time); and so on. For some of these purposes it is necessary to model a specific building of interest: if you are trying to determine optimal sampling locations, you must have a model of your building and not some different building. But for many purposes generic or 'prototypical' building models would suffice. For example, for determining trade-offs between sampler characteristics, results from one building will carry over to other, similar buildings. Prototypical building models are also useful for comparing or testing different algorithms or computational approaches: different researchers can use the same models, thus allowing direct comparison of results in a way that is not otherwise possible. This document discusses prototypical building models developed by the Airflow and Pollutant Transport Group at Lawrence Berkeley National Laboratory. The models are implemented in the Contam v2.4c modeling program, available from the National Institute of Standards and Technology. We present Contam airflow models of three virtual buildings: a convention center, an airport terminal, and a multi-story office building. All of the models are based to some extent on specific real buildings. Our goal is to produce models that are realistic, in terms of approximate magnitudes, directions, and speeds of airflow and pollutant transport. The three models vary substantially in detail. The

  17. Differential Topic Models.

    PubMed

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we thus propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.
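    The motivation for the Pitman-Yor process over the Dirichlet process is its power-law behavior, visible already in the two-parameter Chinese-restaurant predictive rule. The toy sampler below (discount d, concentration theta) illustrates that rule only; it is not the paper's transformed Pitman-Yor process or its data-augmentation sampler.

```python
# Chinese-restaurant view of the Pitman-Yor process: for d > 0 the
# table-size distribution is heavy-tailed (power law), unlike d = 0
# (the Dirichlet process).
import numpy as np

def pitman_yor_sample(n, d=0.5, theta=1.0, rng=None):
    rng = rng or np.random.default_rng(1)
    counts = []                               # customers per table
    for i in range(n):
        probs = [(c - d) / (i + theta) for c in counts]
        probs.append((theta + d * len(counts)) / (i + theta))  # new table
        k = rng.choice(len(probs), p=np.array(probs))
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
    return counts
```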

  18. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (e.g., soil porosity) while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  19. Modeling Ability Differentiation in the Second-Order Factor Model

    ERIC Educational Resources Information Center

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  20. Data-Model and Inter-Model Comparisons of the GEM Outflow Events Using the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Eccles, J. V.; Barakat, A. R.; Kistler, L. M.; Haaland, S.; Schunk, R. W.; Chappell, C. R.

    2015-12-01

    Two storm periods were selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of their high magnetospheric activity and extensive data coverage: the September 27 - October 4, 2002 corotating interaction region event and the October 22 - 29 coronal mass ejection event. During both events, the FAST, Polar, Cluster, and other missions made key observations, creating prime periods for data-model comparison. The GEM community has come together to simulate this period using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of these important periods compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Emphasis will be given to the second event. Density and velocity of oxygen and hydrogen throughout the lobes, plasma sheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. Inter-model comparisons will illustrate how the different outflow specifications affect the magnetosphere. Specifically, it is found that the GPW provides increased heavy ion outflow over a broader spatial range than the alternative

  1. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer, and a transonic bump flow. In addition, a hypersonic shock wave turbulent boundary layer interaction (SWTBLI) with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
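    The "lag" idea can be stated in one line: a modeled turbulence quantity relaxes toward its equilibrium value at a finite rate rather than instantaneously. The update below is an illustrative relaxation form only, not the paper's actual lagRST transport equations.

```python
# Illustrative finite-rate lag: the modeled quantity R chases its
# equilibrium value R_eq at rate a instead of jumping to it.
def lag_update(R, R_eq, a, dt):
    """One explicit step of dR/dt = a * (R_eq - R)."""
    return R + dt * a * (R_eq - R)
```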

  2. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    NASA Astrophysics Data System (ADS)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect of controlling and managing software. Through such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which bring together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also provides a detailed comparison of the proposed model with existing usability models.

  3. Hierarchical spatial capture-recapture models: Modeling population density from stratified populations

    USGS Publications Warehouse

    Royle, J. Andrew; Converse, Sarah J.

    2014-01-01

    Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.

  4. Validation of community models: 3. Tracing field lines in heliospheric models

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter; Elliott, Brian; Acebal, Ariel

    2011-10-01

    Forecasting hazardous gradual solar energetic particle (SEP) bursts at Earth requires accurately modeling field line connections between Earth and the locations of coronal or interplanetary shocks that accelerate the particles. We test the accuracy of field lines reconstructed using four different models of the ambient coronal and inner heliospheric magnetic field, through which these shocks must propagate, including the coupled Wang-Sheeley-Arge (WSA)/ENLIL model. Evaluating the WSA/ENLIL model performance is important since it is the most sophisticated model currently available to space weather forecasters which can model interplanetary coronal mass ejections and, when coupled with particle acceleration and transport models, will provide a complete model for gradual SEP bursts. Previous studies using a simpler Archimedean spiral approach above 2.5 solar radii have reported poor performance. We test the accuracy of the model field lines connecting Earth to the Sun at the onset times of 15 impulsive SEP bursts, comparing the foot points of these field lines with the locations of surface events believed to be responsible for the SEP bursts. We find the WSA/ENLIL model performance is no better than the simplest spiral model, and the principal source of error is the model's inability to reproduce sufficient low-latitude open flux. This may be due to the model's use of static synoptic magnetograms, which fail to account for transient activity in the low corona, during which reconnection events believed to initiate the SEP acceleration may contribute short-lived open flux at low latitudes. Time-dependent coronal models incorporating these transient events may be needed to significantly improve Earth/Sun field line forecasting.
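    The simple spiral model used as the performance baseline maps Earth's longitude back to a solar footpoint by the winding of an ideal Archimedean (Parker) spiral in a constant solar wind. The sketch below is that textbook calculation; the constant wind speed and the sign convention for the longitude shift are assumptions that depend on the chosen coordinate system.

```python
# Back-mapping along an ideal Parker spiral: the field line winds by
# Delta_phi = Omega * r / v_sw between the Sun and heliocentric distance r.
import math

def spiral_footpoint_longitude(phi_obs_deg, r_au=1.0, v_sw_kms=400.0):
    """Solar footpoint longitude (deg) for an observer at longitude
    phi_obs_deg and distance r_au, assuming constant wind speed v_sw."""
    OMEGA = 2.0 * math.pi / (25.38 * 86400.0)  # sidereal rotation, rad/s
    AU = 1.496e11                              # meters
    delta_phi = OMEGA * (r_au * AU) / (v_sw_kms * 1e3)  # ~1 rad at 1 AU
    return (phi_obs_deg + math.degrees(delta_phi)) % 360.0
```

For a 400 km/s wind this shift is roughly 57 degrees at 1 AU, which is why footpoints estimated this way sit well to the west of the sub-Earth point.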

  5. Bio-Inspired Neural Model for Learning Dynamic Models

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  6. Formalized landscape models for surveying and modelling tasks

    NASA Astrophysics Data System (ADS)

    Löwner, Marc-Oliver

    2010-05-01

    We present a formalization of main geomorphic landscape models, mainly the concept of slopes, to clarify the needs and potentials of surveying technologies and modelling approaches. Using the Unified Modelling Language (UML) it is implemented as a exchangeable Geography Markup Language (GML3) -based application schema and therefore supports shared measurement campaigns. Today, knowledge in Geomorphology is given synoptically in textbooks in a more or less lyrical way. This knowledge is hard to implement for the use of modelling algorithms or data storage and sharing questions. On the other hand physical based numerical modelling and high resolution surveying technologies enable us to investigate case scenarios within small scales. Bringing together such approaches and organizing our data in an appropriate way will need the formalization of the concepts and knowledge that is archived in the science of geomorphology. The main problem of comparing research results in geomorphology but is that the objects under investigation are composed of 3-dimensional geometries that change in time due to processes of material fluxes, e. g. soil erosion or mass movements. They have internal properties, e. g. soil texture or bulk density, that determine the effectiveness of these processes but are under change as well. The presented application schema is available on the Internet and therefore a first step to enable researchers to share information using an OGC's Web feature service. In this vein comparing modelling results of landscape evolution with results of other scientist's observations is possible. Compared to prevalent data concepts the model presented makes it possible to store information about landforms, their geometry and the characteristics in more detail. It allows to represent the 3D-geometry, the set of material properties and the genesis of a landform by associating processes to a geoobject. Thus, time slices of a geomorphic system can be represented as well as

  7. JEDI Geothermal Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Geothermal Model JEDI Geothermal Model The Jobs and Economic Development Impacts (JEDI) Geothermal Model allows users to estimate economic development impacts from geothermal projects and includes

  8. Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter

    2016-01-01

    This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch ROMs at grid points to build a global LPV ASE ROM applicable to arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with a flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, our X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while the number of states in the ROM is reduced by 10X (from 180 to 19); hence, it holds great promise for robust ASE controller synthesis and novel vehicle design.
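    One ingredient of the reduction suite, balanced realization and truncation, has a compact textbook form: balance the controllability and observability Gramians and keep the states with the largest Hankel singular values. The sketch below is the standard square-root method for a stable, minimal LTI system, not the authors' framework.

```python
# Square-root balanced truncation of a stable LTI system (A, B, C).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce to order r; assumes A is Hurwitz and (A,B,C) is minimal,
    so both Gramians are positive definite."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    S = cholesky(P, lower=True)                   # P = S S^T
    L = cholesky(Q, lower=True)                   # Q = L L^T
    U, sig, Vt = svd(L.T @ S)                     # Hankel singular values
    V = Vt.T
    T = S @ V[:, :r] / np.sqrt(sig[:r])           # reduction basis
    Ti = (L @ U[:, :r] / np.sqrt(sig[:r])).T      # left projector (Ti@T = I)
    return Ti @ A @ T, Ti @ B, C @ T
```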

  9. Applications of the k – ω Model in Stellar Evolutionary Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yan, E-mail: ly@ynao.ac.cn

    The k – ω model for turbulence was first proposed by Kolmogorov. A new k – ω model for stellar convection was developed by Li, which could reasonably describe turbulent convection not only in the convectively unstable zone, but also in the overshooting regions. We revised the k – ω model by improving several model assumptions (including the macro-length of turbulence, convective heat flux, and turbulent mixing diffusivity, etc.), making it applicable not only for convective envelopes, but also for convective cores. Eight parameters are introduced in the revised k – ω model. It should be noted that the Reynolds stress (turbulent pressure) is neglected in the equation of hydrostatic support. We applied it to solar models and 5 M⊙ stellar models to calibrate the eight model parameters, as well as to investigate the effects of convective overshooting on the Sun and intermediate-mass stellar models.

  10. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2016-01-01

    Two ground-simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance, and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
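    A generic way to realize such a signal generation model is to shape white noise in the frequency domain and then rescale to the target mean and variance. The sketch below is a common textbook construction, not the SMAP ground software; the one-sided PSD sampling convention is an assumption.

```python
# Synthesize a signal with a prescribed PSD shape, mean, and variance:
# random phases on a sqrt(PSD) amplitude spectrum, inverse FFT, rescale.
import numpy as np

def synth_error(target_psd, mean, std, rng=None):
    """target_psd: one-sided PSD shape sampled at rfft frequencies."""
    rng = rng or np.random.default_rng()
    n = 2 * (len(target_psd) - 1)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(target_psd))
    spectrum = np.sqrt(target_psd) * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n=n)   # noise with the PSD's spectral shape
    x = (x - x.mean()) / x.std()      # normalize,
    return mean + std * x             # then match the target mean/variance
```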

  11. JEDI Biofuels Models | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Biofuels Models JEDI Biofuels Models The Jobs and Economic Development Impacts (JEDI) biofuel models allow users to estimate economic development impacts from biofuel projects and include default

  12. JEDI Petroleum Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    Petroleum Model JEDI Petroleum Model The Jobs and Economic Development Impacts (JEDI) Petroleum Model allows users to estimate economic development impacts from petroleum projects and includes default

  13. The Disk Instability Model for SU UMa systems - a Comparison of the Thermal-Tidal Model and Plain Vanilla Model

    NASA Astrophysics Data System (ADS)

    Cannizzo, John K.

    2017-01-01

    We utilize the time-dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems: Osaki's Thermal-Tidal Model and the basic accretion disk limit cycle model. We explore a range of possible input parameters and model assumptions to delineate under what conditions each model may be preferred.

  14. Organic acid modeling and model validation: Workshop summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  15. Modeling of near wall turbulence and modeling of bypass transition

    NASA Technical Reports Server (NTRS)

    Yang, Z.

    1992-01-01

    The objectives of this project are as follows: (1) Modeling of near-wall turbulence: We aim to develop a second-order closure for near-wall turbulence. As a first step, we try to develop a kappa-epsilon model for near-wall turbulence. We require the resulting model to handle both near-wall turbulence and turbulent flows away from the wall, to be computationally robust, and to be applicable to complex flow situations (flows with separation, for example). (2) Modeling of bypass transition: We aim to develop a bypass transition model which contains the effect of intermittency. Thus, the model can be used for both transitional boundary layers and turbulent boundary layers. We require the resulting model to give a good prediction of momentum and heat transfer within the transitional boundary layer and a good prediction of the effect of freestream turbulence on transitional boundary layers.

  16. Model selection and assessment for multi­-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  17. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  18. Risk prediction models of breast cancer: a systematic review of model performances.

    PubMed

    Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin

    2012-05-01

    Risk prediction models have been increasingly developed for estimating the risk of breast cancer in individual women. However, the performance of these models is questionable. We therefore conducted a study with the aim of systematically reviewing previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and the Rosner and Colditz models were the significant models subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive to measuring improvement in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the improvement in performance of a newly developed model.
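    The two performance measures quoted above are easy to state concretely: calibration as the ratio of expected to observed events, and discrimination as the probability that a randomly chosen case is assigned a higher risk than a randomly chosen non-case. A brute-force sketch:

```python
# Expected/observed ratio (calibration) and concordance statistic
# (discrimination) for binary outcomes and predicted risks.
import numpy as np

def expected_observed_ratio(pred_risk, outcomes):
    return np.sum(pred_risk) / np.sum(outcomes)

def concordance(pred_risk, outcomes):
    cases = pred_risk[outcomes == 1]
    controls = pred_risk[outcomes == 0]
    wins = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (wins + 0.5 * ties) / (len(cases) * len(controls))
```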

  19. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  20. Protein solubility modeling

    NASA Technical Reports Server (NTRS)

    Agena, S. M.; Pusey, M. L.; Bogle, I. D.

    1999-01-01

    A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.

  1. Interacting damage models mapped onto ising and percolation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toussaint, Renaud; Pride, Steven R.

    The authors introduce a class of damage models on regular lattices with isotropic interactions between the broken cells of the lattice. Quasistatic fiber bundles are an example. The interactions are assumed to be weak, in the sense that the stress perturbation from a broken cell is much smaller than the mean stress in the system. The system starts intact with a surface-energy threshold required to break any cell sampled from an uncorrelated quenched-disorder distribution. The evolution of this heterogeneous system is ruled by Griffith's principle, which states that a cell breaks when the release in potential (elastic) energy in the system exceeds the surface-energy barrier necessary to break the cell. By direct integration over all possible realizations of the quenched disorder, they obtain the probability distribution of each damage configuration at any level of the imposed external deformation. They demonstrate an isomorphism between the distributions so obtained and standard generalized Ising models, in which the coupling constants and effective temperature in the Ising model are functions of the nature of the quenched-disorder distribution and the extent of accumulated damage. In particular, they show that damage models with global load sharing are isomorphic to standard percolation theory, that damage models with a local load sharing rule are isomorphic to the standard Ising model, and draw consequences thereof for the universality class and behavior of the autocorrelation length of the breakdown transitions corresponding to these models. They also treat damage models having more general power-law interactions, and classify the breakdown process as a function of the power-law interaction exponent. Last, they show that the probability distribution over configurations is a maximum of Shannon's entropy under some specific constraints related to the energetic balance of the fracture process, which firmly relates this type of quenched-disorder based
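    The simplest member of this model class, a quasistatic fiber bundle with global load sharing, is easy to simulate: sort the quenched thresholds and track the external load at which each fiber fails. A pedagogical sketch (uniform disorder assumed), whose critical stress per fiber approaches the analytic value 1/4:

```python
# Quasistatic fiber bundle with global load sharing: with i fibers failed,
# each of the n - i survivors carries F / (n - i), so fiber i (in sorted
# threshold order) fails at external load thresholds[i] * (n - i).
import numpy as np

def fiber_bundle_critical_stress(n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    thresholds = np.sort(rng.random(n))        # quenched disorder, U(0, 1)
    loads = thresholds * (n - np.arange(n))    # load at each failure
    return loads.max() / n                     # -> 1/4 for uniform disorder
```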

  2. Retrofitted supersymmetric models

    NASA Astrophysics Data System (ADS)

    Bose, Manatosh

    This thesis explores several models of metastable dynamic supersymmetry breaking (MDSB) and a supersymmetric model of hybrid inflation. All of these models possess discrete R-symmetries. We focus especially on retrofitted models of supersymmetry breaking. First, we construct retrofitted models of gravity mediation. In these models we explore the genericity of the so-called "split supersymmetry." We show that with the simplest models, where the goldstino multiplet is neutral under the discrete R-symmetry, a split spectrum is not generic. However, if the goldstino superfield is charged under some symmetry other than the R-symmetry, then a split spectrum is achievable but not generic. We also present a gravity-mediated model where the fine tuning of the Z-boson mass is dictated by a discrete choice rather than a continuous tuning. We then construct retrofitted models of gauge-mediated SUSY breaking. We show that, in these models, if the approximate R-symmetry of the theory is spontaneously broken, the messenger scale is fixed; if it is explicitly broken by retrofitted couplings, a very small dimensionless number is required; and if supergravity corrections are responsible for the symmetry breaking, at least two moderately small couplings are required, and there is a large range of possible messenger scales. Finally, we turn our attention to small-field hybrid inflation. We construct a model that yields a spectral index ns = 0.96. Here, we also briefly discuss the possibility of relating the scale of inflation to the dynamics responsible for supersymmetry breaking.

  3. Modeling North Atlantic Nor'easters With Modern Wave Forecast Models

    NASA Astrophysics Data System (ADS)

    Perrie, Will; Toulany, Bechara; Roland, Aron; Dutour-Sikiric, Mathieu; Chen, Changsheng; Beardsley, Robert C.; Qi, Jianhua; Hu, Yongcun; Casey, Michael P.; Shen, Hui

    2018-01-01

    Three state-of-the-art operational wave forecast model systems are implemented on fine-resolution grids for the Northwest Atlantic. These models are: (1) a composite model system consisting of SWAN implemented within WAVEWATCHIII® (the latter is hereafter WW3) on a nested system of traditional structured grids, (2) an unstructured-grid finite-volume wave model denoted "SWAVE," using SWAN physics, and (3) an unstructured-grid finite-element wind wave model denoted "WWM" (for "wind wave model"), which uses WW3 physics. Models are implemented on grid systems that include relatively large domains to capture the wave energy generated by the storms, as well as fine-resolution nearshore regions of the southern Gulf of Maine, with resolution on the scale of 25 m, to simulate areas where inundation and coastal damage have occurred due to the storms. Three intense midlatitude storms are considered: a spring Nor'easter storm in May 2005, the Patriot's Day storm in 2007, and the Boxing Day storm in 2010. Although these wave model systems have comparable overall performance and skill, differences are found. Models that use more advanced physics, as presented in recent versions of WW3, tuned to regional characteristics, as in the Gulf of Maine and the Northwest Atlantic, can give enhanced results.

  4. A stochastic Iwan-type model for joint behavior variability modeling

    NASA Astrophysics Data System (ADS)

    Mignolet, Marc P.; Song, Pengchao; Wang, X. Q.

    2015-08-01

    This paper focuses on the development and validation of a stochastic model to describe the dissipation and stiffness properties of a bolted joint for which experimental data are available and exhibit a large scatter. An extension of the deterministic parallel-series Iwan model for the characterization of the force-displacement behavior of joints is first carried out. This new model involves dynamic and static coefficients of friction differing from each other and a broadly defined distribution of Jenkins elements. Its applicability is next investigated using the experimental data, i.e. stiffness and dissipation measurements obtained in harmonic testing of 9 nominally identical bolted joints. The model is found to provide a very good fit of the experimental data for each bolted joint notwithstanding the significant variability of their behavior. This finding suggests that this variability can be simulated through the randomization of only the parameters of the proposed Iwan-type model. The distribution of these parameters is next selected based on maximum entropy concepts, and their corresponding parameters, i.e. the hyperparameters of the model, are identified using a maximum likelihood strategy. A Monte Carlo simulation of this stochastic Iwan model demonstrates that the experimental data fit well within the uncertainty band corresponding to the 5th and 95th percentiles of the model predictions, which supports the adequacy of the modeling effort.
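    The deterministic backbone being randomized here, a parallel-series Iwan model, is a collection of Jenkins elements (a spring in series with a Coulomb slider) sharing an imposed displacement. The quasistatic sketch below uses an assumed common stiffness k and an array of slider strengths; it is an illustration of the base model, not the paper's extended version with distinct static and dynamic friction coefficients.

```python
# Quasistatic force-displacement response of a parallel-series Iwan model:
# each Jenkins element is a spring k in series with a slider of strength f_i.
import numpy as np

def iwan_force(x_history, k, slip_forces):
    """x_history: imposed displacement samples; slip_forces: array of
    slider strengths; returns total joint force at each sample."""
    slider = np.zeros_like(slip_forces)      # slider positions
    forces = []
    for x in x_history:
        f = k * (x - slider)                 # elastic force per element
        stuck = np.abs(f) <= slip_forces
        # Sliding elements saturate at +/- their slip force; their slider
        # moves so the spring carries exactly that force.
        slider = np.where(stuck, slider, x - np.sign(f) * slip_forces / k)
        f = np.where(stuck, f, np.sign(f) * slip_forces)
        forces.append(f.sum())
    return np.array(forces)
```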

  5. EFFICIENT MODEL-FITTING AND MODEL-COMPARISON FOR HIGH-DIMENSIONAL BAYESIAN GEOSTATISTICAL MODELS. (R826887)

    EPA Science Inventory

    Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...

  6. Model Fusion Tool - the Open Environmental Modelling Platform Concept

    NASA Astrophysics Data System (ADS)

    Kessler, H.; Giles, J. R.

    2010-12-01

    The vision of an Open Environmental Modelling Platform - seamlessly linking geoscience data, concepts and models to aid decision making in times of environmental change. Governments and their executive agencies across the world are facing increasing pressure to make decisions about the management of resources in light of population growth and environmental change. In the UK for example, groundwater is becoming a scarce resource for large parts of its most densely populated areas. At the same time river and groundwater flooding resulting from high rainfall events are increasing in scale and frequency and sea level rise is threatening the defences of coastal cities. There is also a need for affordable housing, improved transport infrastructure and waste disposal as well as sources of renewable energy and sustainable food production. These challenges can only be resolved if solutions are based on sound scientific evidence. Although we have knowledge and understanding of many individual processes in the natural sciences, it is clear that a single science discipline is unable to address these questions and their inter-relationships. Modern science increasingly employs computer models to simulate the natural, economic and human systems. Management and planning require scenario modelling, forecasts and ‘predictions’. Although the outputs are often impressive in terms of apparent accuracy and visualisation, they are inherently not suited to simulate the response to feedbacks from other models of the earth system, such as the impact of human actions. Geological Survey Organisations (GSO) are increasingly employing advances in Information Technology to visualise and improve their understanding of geological systems. Instead of 2-dimensional paper maps and reports, many GSOs now produce 3-dimensional geological framework models and groundwater flow models as their standard output. Additionally, the British Geological Survey has developed standard routines to link geological

  7. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    PubMed

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
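    For readers unfamiliar with full-conditional specification, its core loop is short: cycle over incomplete variables, fit a conditional regression for each on all the others, and draw imputations from the fitted model. The skeleton below uses linear conditionals only and produces a single imputed dataset; proper multiple imputation would also draw the regression coefficients from their posterior and repeat for several datasets.

```python
# Skeleton of full-conditional-specification (chained equations) imputation
# with linear conditional models; a simplified illustration.
import numpy as np

def fcs_impute(X, n_iter=10, rng=None):
    rng = rng or np.random.default_rng(0)
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.nonzero(miss)[1])   # initial fill
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            A = np.c_[np.ones(len(X)), np.delete(X, j, axis=1)]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            resid_sd = np.std(X[obs, j] - A[obs] @ beta)
            pred = A[~obs] @ beta
            X[~obs, j] = pred + rng.normal(0.0, resid_sd, size=pred.shape)
    return X
```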

  8. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    NASA Astrophysics Data System (ADS)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  9. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow, and the previous day's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and the main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
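    Model 2 can be sketched compactly: transform the errors to normality with an empirical NQT, fit a first-order autoregression, and read off a forecast interval in the transformed space. The plotting-position form of the NQT and the moment-based AR(1) fit are assumptions of this illustration, not the authors' exact estimation procedure.

```python
# Empirical normal quantile transform + AR(1) forecast interval,
# all in the transformed (Gaussian) space.
import numpy as np
from scipy import stats

def nqt(x):
    """Empirical NQT using rank-based plotting positions."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1.0))

def ar1_interval(z, level=0.95):
    """Fit z_t = phi * z_{t-1} + e_t by moments; return next-step bounds."""
    phi = np.corrcoef(z[:-1], z[1:])[0, 1]
    sigma = np.std(z[1:] - phi * z[:-1])
    q = stats.norm.ppf(0.5 + level / 2.0)
    return phi * z[-1] - q * sigma, phi * z[-1] + q * sigma
```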

  10. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
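
    A minimal sketch of the aggregation idea (series/parallel composition of component reliabilities driven by an architecture description); the components and numbers here are hypothetical, and the patented system is far more general:

        # Low level reliability models reduced to component success probabilities.
        low_level = {"cpu": 0.999, "bus": 0.995, "sensor": 0.990}

        def series(rs):    # all components must work
            out = 1.0
            for r in rs:
                out *= r
            return out

        def parallel(rs):  # at least one redundant component must work
            out = 1.0
            for r in rs:
                out *= (1.0 - r)
            return 1.0 - out

        # Architecture description: two redundant sensors feeding a cpu over a bus.
        system_r = series([parallel([low_level["sensor"]] * 2),
                           low_level["bus"], low_level["cpu"]])
        print(round(system_r, 6))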

  11. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. © 2014 SETAC.

  12. OFFL Models: Novel Schema for Dynamical Modeling of Biological Systems

    PubMed Central

    2016-01-01

    Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph that follows a strict grammar, translates into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and has an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might otherwise have allowed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features. PMID:27270918
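
    A minimal sketch of the translation rule, assuming a hypothetical SIR-like diagram encoded as (source, target, rate) flows; this illustrates the idea, not the paper's software:

        import numpy as np
        from scipy.integrate import odeint

        # Each flow draws parcels from a source species and deposits them in a
        # target species at a rate given by its rule.
        flows = [
            ("S", "I", lambda y: 0.3 * y["S"] * y["I"]),  # infection
            ("I", "R", lambda y: 0.1 * y["I"]),           # recovery
        ]
        species = ["S", "I", "R"]

        def rhs(state, t):
            y = dict(zip(species, state))
            dy = {s: 0.0 for s in species}
            for src, tgt, rate in flows:   # single unambiguous rule: each flow
                r = rate(y)                # subtracts its rate from the source
                dy[src] -= r               # and adds it to the target
                dy[tgt] += r
            return [dy[s] for s in species]

        traj = odeint(rhs, [0.99, 0.01, 0.0], np.linspace(0, 100, 11))
        print(traj[-1])   # final S, I, R fractions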

  13. JEDI Coal Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    The Jobs and Economic Development Impacts (JEDI) Coal Model allows users to estimate economic development impacts from coal projects and includes default information that can

  14. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems that can describe general aerospace embedded software from multiple angles, at multiple levels, and across multiple stages. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, role separation, and even real-time remote synchronized modeling.

  15. Modeling Instruction: An Effective Model for Science Education

    ERIC Educational Resources Information Center

    Jackson, Jane; Dukerich, Larry; Hestenes, David

    2008-01-01

    The authors describe a Modeling Instruction program that places an emphasis on the construction and application of conceptual models of physical phenomena as a central aspect of learning and doing science. (Contains 1 table.)

  16. mRM - multiscale Routing Model for Land Surface and Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Cuntz, M.; Thober, S.; Mai, J.; Samaniego, L. E.; Gochis, D. J.; Kumar, R.

    2015-12-01

    Routing streamflow through a river network is a basic step within any distributed hydrologic model. It integrates the generated runoff and allows comparison with observed discharge at the outlet of a catchment. Muskingum routing is a textbook river routing scheme that has been implemented in Earth System Models (e.g., WRF-HYDRO), stand-alone routing schemes (e.g., RAPID), and hydrologic models (e.g., the mesoscale Hydrologic Model). Most implementations suffer from a high computational demand because the spatial routing resolution is fixed to that of the elevation model irrespective of the hydrologic modeling resolution. This is because the model parameters are scale-dependent and cannot be used at other resolutions without re-estimation. Here, we present the multiscale Routing Model (mRM), which allows for a flexible choice of the routing resolution. mRM exploits the Multiscale Parameter Regionalization (MPR) included in the open-source mesoscale Hydrologic Model (mHM, www.ufz.de/mhm), which relates model parameters to physiographic properties and allows the estimation of scale-independent model parameters. mRM is currently coupled to mHM and is presented here as stand-alone Free and Open Source Software (FOSS). The mRM source code is highly modular and provides a subroutine for internal re-use in any land surface scheme. In this work, mRM is coupled to the state-of-the-art land surface model Noah-MP. Simulation results using mRM are compared with those available in WRF-HYDRO for the Red River during the period 1990-2000. mRM allows the routing resolution to be increased from 100 m to more than 10 km without deteriorating the model performance. It thereby speeds up model calculation by reducing the contribution of routing to total runtime from over 80% to less than 5% in the case of WRF-HYDRO. mRM thus makes discharge data available to land surface modeling with little extra computation.
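
    For orientation, the textbook Muskingum scheme the abstract refers to advances outflow with three coefficients derived from the storage parameters K and x, which are exactly the kind of scale-dependent parameters MPR regionalizes; a minimal sketch with illustrative numbers:

        # Storage S = K*(x*I + (1-x)*O) leads to
        # O_{t+1} = c1*I_{t+1} + c2*I_t + c3*O_t (standard method, not mRM's code).
        def muskingum_coeffs(K, x, dt):
            d  = 2.0 * K * (1.0 - x) + dt
            c1 = (dt - 2.0 * K * x) / d
            c2 = (dt + 2.0 * K * x) / d
            c3 = (2.0 * K * (1.0 - x) - dt) / d
            return c1, c2, c3

        def route(inflow, K=12.0, x=0.2, dt=6.0):   # hypothetical parameters (hours)
            c1, c2, c3 = muskingum_coeffs(K, x, dt)
            out = [inflow[0]]                        # assume initial outflow = inflow
            for i in range(1, len(inflow)):
                out.append(c1 * inflow[i] + c2 * inflow[i - 1] + c3 * out[-1])
            return out

        print(route([10, 20, 50, 60, 40, 25, 15, 10]))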

  17. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before the mean error values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3's. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.

  18. Models in Science Education: Applications of Models in Learning and Teaching Science

    ERIC Educational Resources Information Center

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  19. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    USGS Publications Warehouse

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  20. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DOE PAGES

    King, Zachary A.; Lu, Justin; Drager, Andreas; ...

    2015-10-17

    Genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.

  1. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  2. Standard solar model

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Kim, Y.-C.; Pinsonneault, M. H.

    1992-01-01

    A set of solar models has been constructed, each based on a single modification to the physics of a reference solar model. In addition, a model combining several of the improvements has been calculated to provide a best solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The impact of these improvements on both the structure of the model and the frequencies of its low-l p-modes is discussed. It is found that the combined solar model, which is based on the best physics available (and does not contain any ad hoc assumptions), reproduces the observed oscillation spectrum (for low-l) within the errors associated with the uncertainties in the model physics (primarily opacities).

  3. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
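
    For orientation, PEST-style parameter estimation minimizes a weighted sum of squared residuals over all observation groups; a minimal sketch with made-up observation values (not DayCent output):

        import numpy as np

        obs    = np.array([3.1, 2.4, 7.8])   # e.g., yield, soil C, N2O flux
        sim    = np.array([2.9, 2.9, 6.9])   # model-simulated equivalents
        weight = np.array([1.0, 0.5, 2.0])   # weights balance different data types

        phi = float(np.sum((weight * (obs - sim)) ** 2))
        print(phi)   # calibration searches for parameters that minimize phi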

  4. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation can run, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, manually describing models in GDML is time-consuming and error-prone. Automatic modeling methods have been developed recently, but most existing modeling programs have problems: some are not accurate, or are adapted only to specific CAD formats. To convert complex CAD geometry models into GDML geometry models accurately, a Geant4 Computer Aided Design (CAD) based automatic modeling method was developed. The essence of the method is the conversion between CAD models represented with boundary representation (B-REP) and GDML models represented with constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each with only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.

  5. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of

  6. Mixed models, linear dependency, and identification in age-period-cohort models.

    PubMed

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
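
    The linear dependency at the heart of the identification problem is easy to see numerically (a hypothetical check, not from the paper): since cohort = period − age, the fixed-effects design matrix loses a rank.

        import numpy as np

        age    = np.array([20, 30, 40, 20, 30, 40])
        period = np.array([1990, 1990, 1990, 2000, 2000, 2000])
        cohort = period - age   # exact linear function of the other two

        X = np.column_stack([np.ones_like(age), age, period, cohort])
        # Rank is 3, not 4, so ordinary least squares has no unique solution.
        print(np.linalg.matrix_rank(X), "of", X.shape[1], "columns independent")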

  7. Space Weather Modeling at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Hesse, M.; Falasca, A.; Johnson, J.; Keller, K.; Kuznetsova, M.; Rastaetter, L.

    2003-04-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership aimed at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of NASA's Living With a Star (LWS) initiative, of the National Space Weather Program Implementation Plan, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the US Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate. We will demonstrate the capabilities of models resident at CCMC via the analysis of a geomagnetic storm, driven by a shock in the solar wind.

  8. Space Weather Modeling at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Hesse M.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership, which aims at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of the National Space Weather Program Implementation Plan, of NASA's Living With a Star (LWS) initiative, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the US Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate. Special emphasis will be on solar and heliospheric models currently residing at CCMC, and on plans for validation and verification.

  9. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions that each model type enforces on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, the assumptions about the disease process, and the choice of time advance.

  10. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE PAGES

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...

    2016-05-01

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions that each model type enforces on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, the assumptions about the disease process, and the choice of time advance.
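
    To make the time-advance point concrete, a toy equation-based SIR model (illustrative only; not the models analyzed in the study) can be advanced continuously with an ODE solver or in coarse discrete steps, and the simulated epidemic peak already differs:

        import numpy as np
        from scipy.integrate import odeint

        beta, gamma = 0.4, 0.1   # made-up transmission and recovery rates

        def sir(y, t):
            s, i, r = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        # Continuous time advance via an adaptive ODE solver.
        cont = odeint(sir, [0.99, 0.01, 0.0], np.linspace(0, 120, 121))

        # Discrete time advance: forward Euler with coarse 4-day steps.
        y, disc = np.array([0.99, 0.01, 0.0]), []
        for _ in range(30):
            y = y + 4.0 * np.array(sir(y, 0.0))
            disc.append(y[1])

        # Peak infected fraction differs between the two schemes.
        print(cont[:, 1].max(), max(disc))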

  11. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
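
    A minimal sketch of the weighting step common to both criteria the abstract mentions, here with AIC and made-up values (the same formula applies with BIC):

        import numpy as np

        aic = np.array([210.3, 212.1, 215.8])   # one entry per candidate model
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        w /= w.sum()                            # Akaike weights, sum to 1

        cost = np.array([10200.0, 9800.0, 11000.0])   # each model's prediction
        print(w, float(np.dot(w, cost)))              # model-averaged prediction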

  12. Accounting for uncertainty in health economic decision models by using model averaging

    PubMed Central

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-01-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329

  13. Building Thermal Models

    NASA Technical Reports Server (NTRS)

    Peabody, Hume L.

    2017-01-01

    This presentation is meant to be an overview of the model building process. It is based on typical techniques (Monte Carlo ray tracing for radiation exchange; lumped parameter, finite difference for the thermal solution) used by the aerospace industry. It is not intended to be a "How to Use ThermalDesktop" course but a "How to Build Thermal Models" course, and the techniques will be demonstrated using the capabilities of ThermalDesktop (TD). Other codes may or may not have similar capabilities. The general model building process can be broken into four top-level steps: 1. Build Model; 2. Check Model; 3. Execute Model; 4. Verify Results.

  14. The Instrumental Model

    ERIC Educational Resources Information Center

    Yeates, Devin Rodney

    2011-01-01

    The goal of this dissertation is to enable better predictive models by engaging raw experimental data through the Instrumental Model. The Instrumental Model captures the protocols and procedures of experimental data analysis. The approach is formalized by encoding the Instrumental Model in an XML record. Decoupling the raw experimental data from…

  15. Qualitative Student Models.

    ERIC Educational Resources Information Center

    Clancey, William J.

    The concept of a qualitative model is used as the focus of this review of qualitative student models in order to compare alternative computational models and to contrast domain requirements. The report is divided into eight sections: (1) Origins and Goals (adaptive instruction, qualitative models of processes, components of an artificial…

  16. Automated finite element modeling of the lumbar spine: Using a statistical shape model to generate a virtual population of models.

    PubMed

    Campbell, J Q; Petrella, A J

    2016-09-06

    Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
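
    A generic sketch of how a statistical shape model of this kind is typically built (PCA on aligned landmark vectors) and how ±3 standard deviation mode shapes are instantiated; the data and dimensions below are random stand-ins, not the study's spine landmarks:

        import numpy as np

        rng = np.random.default_rng(0)
        shapes = rng.normal(size=(30, 2400))   # 30 training shapes, 800 3-D landmarks

        mean = shapes.mean(axis=0)
        # PCA via SVD of the centered data: rows of vt are shape modes.
        u, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
        modes = vt
        sd = s / np.sqrt(shapes.shape[0] - 1)  # std. dev. along each mode

        # A +3 SD instance of the primary mode, which could then seed
        # automated finite element mesh generation.
        extreme = mean + 3.0 * sd[0] * modes[0]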

  17. Modeling the QBO—Improvements resulting from higher‐model vertical resolution

    PubMed Central

    Zhou, Tiehan; Shindell, D.; Ruedy, R.; Aleinov, I.; Nazarenko, L.; Tausnev, N. L.; Kelley, M.; Sun, S.; Cheng, Y.; Field, R. D.; Faluvegi, G.

    2016-01-01

    Using the NASA Goddard Institute for Space Studies (GISS) climate model, it is shown that with proper choice of the gravity wave momentum flux entering the stratosphere and relatively fine vertical layering of at least 500 m in the upper troposphere-lower stratosphere (UTLS), a realistic stratospheric quasi-biennial oscillation (QBO) is modeled with the proper period, amplitude, and structure down to tropopause levels. It is furthermore shown that the specified gravity wave momentum flux controls the QBO period whereas the width of the gravity wave momentum flux phase speed spectrum controls the QBO amplitude. Fine vertical layering is required for the proper downward extension to tropopause levels as this permits wave-mean flow interactions in the UTLS region to be resolved in the model. When vertical resolution is increased from 1000 to 500 m, the modeled QBO modulation of the tropical tropopause temperatures increasingly approach that from observations, and the "tape recorder" of stratospheric water vapor also approaches the observed. The transport characteristics of our GISS models are assessed using age-of-air and N2O diagnostics, and it is shown that some of the deficiencies in model transport that have been noted in previous GISS models are greatly improved for all of our tested model vertical resolutions. More realistic tropical-extratropical transport isolation, commonly referred to as the "tropical pipe," results from the finer vertical model layering required to generate a realistic QBO. PMID:27917258

  18. Modeling the QBO-Improvements resulting from higher-model vertical resolution.

    PubMed

    Geller, Marvin A; Zhou, Tiehan; Shindell, D; Ruedy, R; Aleinov, I; Nazarenko, L; Tausnev, N L; Kelley, M; Sun, S; Cheng, Y; Field, R D; Faluvegi, G

    2016-09-01

    Using the NASA Goddard Institute for Space Studies (GISS) climate model, it is shown that with proper choice of the gravity wave momentum flux entering the stratosphere and relatively fine vertical layering of at least 500 m in the upper troposphere-lower stratosphere (UTLS), a realistic stratospheric quasi-biennial oscillation (QBO) is modeled with the proper period, amplitude, and structure down to tropopause levels. It is furthermore shown that the specified gravity wave momentum flux controls the QBO period whereas the width of the gravity wave momentum flux phase speed spectrum controls the QBO amplitude. Fine vertical layering is required for the proper downward extension to tropopause levels as this permits wave-mean flow interactions in the UTLS region to be resolved in the model. When vertical resolution is increased from 1000 to 500 m, the modeled QBO modulation of the tropical tropopause temperatures increasingly approach that from observations, and the "tape recorder" of stratospheric water vapor also approaches the observed. The transport characteristics of our GISS models are assessed using age-of-air and N2O diagnostics, and it is shown that some of the deficiencies in model transport that have been noted in previous GISS models are greatly improved for all of our tested model vertical resolutions. More realistic tropical-extratropical transport isolation, commonly referred to as the "tropical pipe," results from the finer vertical model layering required to generate a realistic QBO.

  19. Model based design introduction: modeling game controllers to microprocessor architectures

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem a step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data, and the output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  20. Translation from UML to Markov Model: A Performance Modeling Framework

    NASA Astrophysics Data System (ADS)

    Khan, Razib Hayat; Heegaard, Poul E.

    Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phases of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication system applications that translates high level UML notation into a Continuous Time Markov Chain (CTMC) model and solves the model for relevant performance metrics. The framework utilizes UML collaborations, activity diagrams, and deployment diagrams to generate the performance model for a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaborations and activities show how the reusable building blocks compose the service components through input and output pins, highlighting the behavior of the components; a mapping is then delineated between the collaborations and the system components identified by the deployment diagram. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for the relevant performance metrics through our proposed framework. The applicability of our proposed performance modeling framework is demonstrated in the context of modeling a communication system.
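
    The final step of such a framework, solving the CTMC, reduces to finding the stationary distribution of a generator matrix; a minimal sketch with a made-up 3-state generator (not a model produced by the authors' translation):

        import numpy as np

        Q = np.array([[-0.5,  0.5,  0.0],
                      [ 0.2, -0.7,  0.5],
                      [ 0.0,  0.4, -0.4]])   # generator; rows sum to zero

        # Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
        # with the normalization constraint.
        A = np.vstack([Q.T[:-1], np.ones(3)])
        b = np.array([0.0, 0.0, 1.0])
        pi = np.linalg.solve(A, b)
        print(pi)   # performance metrics (utilization, throughput) follow from pi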

  1. Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models.

    PubMed

    Liu, Ziyue; Cappola, Anne R; Crofford, Leslie J; Guo, Wensheng

    2014-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationship. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls.
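
    A minimal sketch of the computational core the abstract mentions, one Kalman filter predict/update cycle for a generic linear Gaussian state space model (the paper's hierarchical bivariate hormone specification is much richer):

        import numpy as np

        # x_t = F x_{t-1} + w,  y_t = H x_t + v, with made-up matrices.
        F = np.array([[1.0, 1.0], [0.0, 0.9]])   # state transition
        H = np.array([[1.0, 0.0]])               # observation matrix
        Qn = 0.1 * np.eye(2)                     # state noise covariance
        Rn = np.array([[0.5]])                   # observation noise covariance

        x, P = np.zeros(2), np.eye(2)            # prior mean and covariance
        for y in [1.2, 0.9, 1.5]:                # incoming observations
            x, P = F @ x, F @ P @ F.T + Qn       # predict
            S = H @ P @ H.T + Rn                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x = x + K @ (np.array([y]) - H @ x)  # update mean
            P = (np.eye(2) - K @ H) @ P          # update covariance
        print(x)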

  2. Rate of Learning Models, Mental Models, and Item Response Theory

    NASA Astrophysics Data System (ADS)

    Pritchard, David E.; Lee, Y.; Bao, L.

    2006-12-01

    We present three learning models that make different assumptions about how the rate of a student's learning depends on the amount that they know already. These are motivated by the mental models of Tabula Rasa, Constructivist, and Tutoring theories. These models predict the postscore for a given prescore after a given period of instruction. Constructivist models show a close connection with Item Response Theory. Comparison with data from both Hake and MIT shows that the Tabula Rasa models not only fit incomparably better, but fit the MIT data within error across a wide range of pretest scores. We discuss the implications of this finding.
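
    One plausible formalization of the three rate-of-learning assumptions (a hedged reconstruction, not necessarily the authors' exact equations): knowledge k in [0, 1] grows at a constant rate (Tabula Rasa), at a rate proportional to what is already known (Constructivist), or at a rate proportional to what remains unknown (Tutoring).

        import math

        def postscore(pre, rate, t, model):
            if model == "tabula_rasa":        # dk/dt = r
                return min(1.0, pre + rate * t)
            if model == "constructivist":     # dk/dt = r * k
                return min(1.0, pre * math.exp(rate * t))
            if model == "tutoring":           # dk/dt = r * (1 - k)
                return 1.0 - (1.0 - pre) * math.exp(-rate * t)

        for m in ("tabula_rasa", "constructivist", "tutoring"):
            print(m, round(postscore(0.4, 0.5, 1.0, m), 3))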

  3. Comparison of LiST measles mortality model and WHO/IVB measles model.

    PubMed

    Chen, Wei-Ju

    2011-04-13

    The Lives Saved Tool (LiST) has been developed to estimate the impact of health interventions and can consider multiple interventions simultaneously. Given its increasing usage by donor organizations and national program planners, we compare the LiST measles model to the widely used World Health Organization's Department of Immunization, Vaccines and Biologicals (WHO/IVB) measles model, which is used to produce estimates serving as a major indicator for monitoring country measles epidemics and the progress of measles control. We analyzed the WHO/IVB models and the LiST measles model and identified components and assumptions held in each model. We contrasted the important components, and compared results from the two models by applying historical measles containing vaccine (MCV) coverages and the default values of all parameters set in the models. We also conducted analyses following a hypothetical scenario to understand how both models performed when the proportion of population protected by MCV declined to zero percent in a short time period. The WHO/IVB measles model and the LiST measles model structures differ: the former is a mixed model which applies surveillance data adjusted for reporting completeness for countries with good disease surveillance systems and applies a natural history model for countries with poorer disease control programs and surveillance systems, and the latter is a cohort model incorporating country-specific cause-of-death (CoD) profiles among children under-five. The trends of estimates of the two models are similar, but the estimates of the first year are different in most of the countries included in the analysis. The two models are comparable if we adjust the measles CoD in the LiST to produce the same baseline estimates. In addition, we used the models to estimate the potential impact of stopping using measles vaccine over a 7-year period. The WHO/IVB model produced similar estimates to the LiST model with adjusted CoD. But the LiST model

  4. Comparison of LiST measles mortality model and WHO/IVB measles model

    PubMed Central

    2011-01-01

    Background The Lives Saved Tool (LiST) has been developed to estimate the impact of health interventions and can consider multiple interventions simultaneously. Given its increasing usage by donor organizations and national program planners, we compare the LiST measles model to the widely used World Health Organization's Department of Immunization, Vaccines and Biologicals (WHO/IVB) measles model, which is used to produce estimates serving as a major indicator for monitoring country measles epidemics and the progress of measles control. Methods We analyzed the WHO/IVB models and the LiST measles model and identified components and assumptions held in each model. We contrasted the important components, and compared results from the two models by applying historical measles containing vaccine (MCV) coverages and the default values of all parameters set in the models. We also conducted analyses following a hypothetical scenario to understand how both models performed when the proportion of population protected by MCV declined to zero percent in a short time period. Results The WHO/IVB measles model and the LiST measles model structures differ: the former is a mixed model which applies surveillance data adjusted for reporting completeness for countries with good disease surveillance systems and applies a natural history model for countries with poorer disease control programs and surveillance systems, and the latter is a cohort model incorporating country-specific cause-of-death (CoD) profiles among children under-five. The trends of estimates of the two models are similar, but the estimates of the first year are different in most of the countries included in the analysis. The two models are comparable if we adjust the measles CoD in the LiST to produce the same baseline estimates. In addition, we used the models to estimate the potential impact of stopping using measles vaccine over a 7-year period. The WHO/IVB model produced similar estimates to the LiST model with adjusted

  5. Functionalized anatomical models for EM-neuron Interaction modeling

    NASA Astrophysics Data System (ADS)

    Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang

    2016-06-01

    The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation, and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understand a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.
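
    For context, discussions of SD time constants usually revolve around the classical strength-duration laws; the sketch below shows the standard Lapicque and Weiss forms (textbook relations, not this paper's fitted models), with illustrative rheobase and chronaxie values:

        def lapicque_threshold(t, i_rheobase, chronaxie):
            # Lapicque's hyperbolic law: threshold current for pulse duration t.
            return i_rheobase * (1.0 + chronaxie / t)

        def weiss_charge(t, i_rheobase, chronaxie):
            # Weiss's linear law: threshold charge Q = I_rh * (t + chronaxie),
            # where the chronaxie plays the role of the SD time constant.
            return i_rheobase * (t + chronaxie)

        for t_ms in (0.05, 0.1, 0.5, 1.0):   # very short pulses stress the fit
            print(t_ms, lapicque_threshold(t_ms, 1.0, 0.3),
                  weiss_charge(t_ms, 1.0, 0.3))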

  6. Scaffolding Learning by Modelling: The Effects of Partially Worked-out Models

    ERIC Educational Resources Information Center

    Mulder, Yvonne G.; Bollen, Lars; de Jong, Ton; Lazonder, Ard W.

    2016-01-01

    Creating executable computer models is a potentially powerful approach to science learning. Learning by modelling is also challenging because students can easily get overwhelmed by the inherent complexities of the task. This study investigated whether offering partially worked-out models can facilitate students' modelling practices and promote…

  7. How can model comparison help improving species distribution models?

    PubMed

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  8. How Can Model Comparison Help Improving Species Distribution Models?

    PubMed Central

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779

  9. Estuarine modeling: Does a higher grid resolution improve model performance?

    EPA Science Inventory

    Ecological models are useful tools to explore cause-effect relationships, test hypotheses and perform management scenarios. A mathematical model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to the Louisiana continental shelf of the northern ...

  10. An integrated mathematical model of the human cardiopulmonary system: model development.

    PubMed

    Albanese, Antonio; Cheng, Limei; Ursino, Mauro; Chbat, Nicolas W

    2016-04-01

    Several cardiovascular and pulmonary models have been proposed in the last few decades. However, very few have addressed the interactions between these two systems. Our group has developed an integrated cardiopulmonary model (CP Model) that mathematically describes the interactions between the cardiovascular and respiratory systems, along with their main short-term control mechanisms. The model has been compared with human and animal data taken from the published literature. Due to the volume of the work, the paper is divided into two parts. The present paper is on model development and normophysiology, whereas the second is on the model's validation under hypoxic and hypercapnic conditions. The CP Model incorporates cardiovascular circulation, respiratory mechanics, tissue and alveolar gas exchange, as well as short-term neural control mechanisms acting on both the cardiovascular and the respiratory functions. The model is able to simulate physiological variables typically observed in adult humans under normal and pathological conditions and to explain the underlying mechanisms and dynamics. Copyright © 2016 the American Physiological Society.

  11. Good modeling practice guidelines for applying multimedia models in chemical assessments.

    PubMed

    Buser, Andreas M; MacLeod, Matthew; Scheringer, Martin; Mackay, Don; Bonnell, Mark; Russell, Mark H; DePinto, Joseph V; Hungerbühler, Konrad

    2012-10-01

    Multimedia mass balance models of chemical fate in the environment have been used for over 3 decades in a regulatory context to assist decision making. As these models become more comprehensive, reliable, and accepted, there is a need to recognize and adopt principles of Good Modeling Practice (GMP) to ensure that multimedia models are applied with transparency and adherence to accepted scientific principles. We propose and discuss 6 principles of GMP for applying existing multimedia models in a decision-making context, namely 1) specification of the goals of the model assessment, 2) specification of the model used, 3) specification of the input data, 4) specification of the output data, 5) conduct of a sensitivity and possibly also uncertainty analysis, and finally 6) specification of the limitations and limits of applicability of the analysis. These principles are justified and discussed with a view to enhancing the transparency and quality of model-based assessments. Copyright © 2012 SETAC.

  12. Modeling Renewable Penetration Using a Network Economic Model

    NASA Astrophysics Data System (ADS)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging prices and quantity information between the nodes of the system. This formulation is very flexible and takes very little time to build and modify models. This paper reports an experiment designing a system with photovoltaic and base and peak fossil generators. The level of PV penetration as a function of its price and the capacities of the fossil generators were determined using the network approach and using an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
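
    The price/quantity exchange described above can be sketched in a few lines. In the following Python toy, all cost curves, capacities, and the demand function are illustrative assumptions, not the paper's actual model; a market node raises or lowers its price in proportion to excess demand until the generator and demand nodes agree on a clearing quantity:

      def demand(price):
          # Price-elastic load (hypothetical demand node)
          return max(0.0, 100.0 - 0.8 * price)

      def supply(price, mc, slope, cap):
          # Generator node with linear marginal cost and a capacity limit
          return min(cap, max(0.0, (price - mc) / slope))

      price = 50.0
      for _ in range(200):
          q_demand = demand(price)
          q_supply = (supply(price, 20.0, 0.5, 60.0)     # base fossil unit
                      + supply(price, 40.0, 1.0, 40.0)   # peak fossil unit
                      + supply(price, 35.0, 0.8, 25.0))  # PV at an assumed price
          excess = q_demand - q_supply
          if abs(excess) < 1e-6:
              break
          price += 0.05 * excess                         # market node adjusts price
      print(f"clearing price ~ {price:.2f}, quantity ~ {q_demand:.1f}")

    An exact analytic solution of the same small system is the kind of benchmark the paper compares this iterative network approach against.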

  13. Modeling and Identification for Vector Propulsion of an Unmanned Surface Vehicle: Three Degrees of Freedom Model and Response Model.

    PubMed

    Mu, Dongdong; Wang, Guofeng; Fan, Yunsheng; Sun, Xiaojie; Qiu, Bingbing

    2018-06-08

    This paper presents a complete scheme for research on the three-degrees-of-freedom model and response model of the vector propulsion of an unmanned surface vehicle. The object of this paper is “Lanxin”, an unmanned surface vehicle (7.02 m × 2.6 m) equipped with a single vector propulsion device. First, the “Lanxin” unmanned surface vehicle and the related field experiments (turning test and zig-zag test) are introduced and experimental data are collected through various sensors. Second, the thrust of the vector thruster is estimated by the empirical formula method. Third, using hypotheses and simplifications, the three-degrees-of-freedom model and the response model of the USV are deduced and established, respectively. Fourth, the parameters of the models (three-degrees-of-freedom model, response model and thruster servo model) are obtained by system identification, and we compare the simulated turning test and zig-zag test with the actual data to verify the accuracy of the identification results. Finally, the biggest advantage of this paper is that it combines theory with practice: based on the identified response model, simulation and practical course-keeping experiments are carried out to further verify the feasibility and correctness of the modeling and identification.

  14. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    DOE PAGES

    Li, Hong -Yi; Leung, L. Ruby; Tesfa, Teklu; ...

    2015-10-29

    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed streamflow from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications for aquatic ecosystems. In conclusion, the sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  15. Finite element modeling of a 3D coupled foot-boot model.

    PubMed

    Qiu, Tian-Xia; Teo, Ee-Chon; Yan, Ya-Bo; Lei, Wei

    2011-12-01

    Increasingly, musculoskeletal models of the human body are used as powerful tools to study biological structures. The lower limb, and in particular the foot, is of interest because it is the primary physical interaction between the body and the environment during locomotion. The goal of this paper is to adopt finite element (FE) modeling and analysis approaches to create a state-of-the-art 3D coupled foot-boot model for future studies on the biomechanical investigation of stress injury mechanisms, footwear design and parachute landing fall simulation. In the modeling process, the foot-ankle model with lower leg was developed based on Computed Tomography (CT) images using ScanIP, Surfacer and ANSYS. Then, the boot was represented by assembling the FE models of the upper, insole, midsole and outsole built based on the FE model of the foot-ankle, and finally the coupled foot-boot model was generated by putting together the models of the lower limb and boot. In this study, the FE model of the foot and ankle was validated during balanced standing. There was good agreement in the overall patterns of predicted and measured plantar pressure distribution published in the literature. The coupled foot-boot model will be fully validated in subsequent work under both static and dynamic loading conditions for further studies on injury investigation in military and sports settings, footwear design and the characteristics of parachute landing impact in the military. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.

    PubMed

    Jenness, Samuel M; Goodreau, Steven M; Morris, Martina

    2018-04-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers.
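
    EpiModel itself is an R package, so its actual API is not reproduced here. As a language-neutral illustration of the core idea, a discrete-time stochastic epidemic spreading over an explicit contact network, the following Python sketch uses networkx with an arbitrary random graph and arbitrary transmission and recovery probabilities (EpiModel would instead fit a temporal exponential random graph model to empirical contact data):

      import random
      import networkx as nx

      random.seed(1)
      G = nx.erdos_renyi_graph(n=500, p=0.01)       # stand-in contact network
      status = {v: "s" for v in G}                  # everyone starts susceptible
      for v in random.sample(list(G), 5):
          status[v] = "i"                           # seed infections

      beta, gamma = 0.10, 0.05                      # per-contact infection / recovery prob.
      for step in range(100):
          infected = [v for v in G if status[v] == "i"]
          for v in infected:
              for u in G.neighbors(v):
                  if status[u] == "s" and random.random() < beta:
                      status[u] = "i"
              if random.random() < gamma:
                  status[v] = "r"
      print(sum(1 for v in G if status[v] == "r"), "recovered after 100 steps")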

  17. Mediterranean maquis fuel model development and mapping to support fire modeling

    NASA Astrophysics Data System (ADS)

    Bacciu, V.; Arca, B.; Pellizzaro, G.; Salis, M.; Ventura, A.; Spano, D.; Duce, P.

    2009-04-01

    Fuel load data and fuel model maps represent a critical issue for fire spread and behaviour modeling. The availability of accurate input data at different spatial and temporal scales can allow detailed analysis and predictions of fire hazard and fire effects across a landscape. Fuel model data are used in spatially explicit fire growth models to attain fire behaviour information for fuel management in prescribed fires, fire management applications, firefighters training, smoke emissions, etc. However, fuel type characteristics are difficult to parameterize due to their complexity and variability: live and dead materials with different sizes contribute in different ways to fire spread and behaviour. In recent decades, strong support has been provided by the use of remote sensing imagery at high spatial and spectral resolution. Such techniques are able to capture fine-scale fuel distributions for accurate fire growth projections. Several attempts carried out in Europe were devoted to fuel classification and map characterization. In Italy, fuel load estimation and fuel model definition are still critical issues to be addressed due to the lack of detailed information. In this perspective, the aim of the present work was to propose an integrated approach based on field data collection, fuel model development and fuel model mapping to provide fuel models for the Mediterranean maquis associations. Field data needed for the development of fuel models were collected using destructive and non-destructive measurements in experimental plots located in Northern Sardinia (Italy). Statistical tests were used to identify the main fuel types, which were classified into four custom fuel models. Subsequently, a supervised classification by the Maximum Likelihood algorithm was applied to IKONOS images to identify and map the different types of maquis vegetation. The corresponding fuel model was then associated to each vegetation type to obtain the fuel model map. The results show the

  18. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
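
    The active-learning loop rests on a surrogate that predicts runtime with uncertainty. A minimal sketch of the regression step, using scikit-learn's Gaussian process implementation on made-up (core count, grid size) to runtime observations (the data, kernel choice, and log transforms are all assumptions, not the authors' setup):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # Hypothetical measurements: (MPI ranks, grid size) -> runtime in seconds.
      X = np.array([[8, 128], [16, 128], [32, 256], [64, 256], [128, 512]], float)
      y = np.array([410.0, 230.0, 260.0, 150.0, 170.0])

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                                    normalize_y=True)
      gp.fit(np.log(X), np.log(y))   # log scales make the smooth GP prior plausible

      # Predicted runtime and uncertainty for an untried configuration; the
      # predictive variance is what an active learner uses to pick the next run.
      mean, std = gp.predict(np.log([[64, 512]]), return_std=True)
      print(np.exp(mean), std)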

  19. Reference Manual for the System Advisor Model's Wind Power Performance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, J.; Jorgenson, J.; Gilman, P.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
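
    At its core, a turbine performance model of this kind maps hourly wind speed through a manufacturer's power curve. A minimal sketch of that single step follows; the curve points and cut-out behaviour are invented for illustration, and SAM's actual algorithm additionally handles hub-height scaling, air density corrections, and wake losses:

      import numpy as np

      # Illustrative power curve for a hypothetical 1.5 MW turbine:
      # wind speed (m/s) vs. electrical output (kW).
      curve_ws = np.array([0.0, 3.0, 5.0, 8.0, 11.0, 13.0, 25.0])
      curve_kw = np.array([0.0, 0.0, 150.0, 700.0, 1350.0, 1500.0, 1500.0])

      def turbine_output(wind_speed_ms):
          """Hourly output in kW; zero above the 25 m/s cut-out speed."""
          ws = np.asarray(wind_speed_ms, float)
          out = np.interp(ws, curve_ws, curve_kw)
          return np.where(ws > curve_ws[-1], 0.0, out)

      hourly_wind = [4.2, 7.8, 12.5, 26.0]        # sample resource data
      print(turbine_output(hourly_wind))          # -> [  90.  663.3 1462.5    0.]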

  20. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system, originally developed in the Java-based MASON toolkit and reimplemented in NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  1. The Carbon-Land Model Intercomparison Project (C-LAMP): A Model-Data Comparison System for Evaluation of Coupled Biosphere-Atmosphere Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forrest M; Randerson, Jim; Thornton, Peter E

    2009-01-01

    The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in new efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, now often referred to as Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results, suggesting that a more rigorous set of offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are warranted. The Carbon-Land Model Intercomparison Project (C-LAMP) provides a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). C-LAMP provides feedback to the modeling community regarding model improvements and to the measurement community by suggesting new observational campaigns. C-LAMP Experiment 1 consists of a set of uncoupled simulations of terrestrial carbon models specifically designed to examine the ability of the models to reproduce surface carbon and energy fluxes at multiple sites and to exhibit the influence of climate variability, prescribed atmospheric carbon dioxide (CO2), nitrogen (N) deposition, and land cover change on projections of terrestrial carbon fluxes during the 20th century. Experiment 2 consists of partially coupled simulations of the terrestrial carbon model with an active atmosphere model exchanging energy and moisture fluxes. In all experiments, atmospheric CO2 follows the prescribed historical trajectory from C4MIP. In Experiment 2, the atmosphere model is forced with prescribed sea surface temperatures (SSTs) and corresponding sea ice concentrations from the Hadley Centre; prescribed CO2 is radiatively active; and land, fossil fuel, and ocean CO2 fluxes are advected by the model. Both sets of

  2. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems and its accuracy directly influences system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  3. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Nutaro, James J

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.

  4. Modelling the optical properties of aerosols in a chemical transport model

    NASA Astrophysics Data System (ADS)

    Andersson, E.; Kahnert, M.

    2015-12-01

    According to the IPCC fifth assessment report (2013), clouds and aerosols still contribute the largest uncertainty when estimating and interpreting changes to the Earth's energy budget. Therefore, understanding the interaction between radiation and aerosols is crucial both for remote sensing observations and for modelling the climate forcing arising from aerosols. Carbon particles are the largest contributor to the aerosol absorption of solar radiation, thereby enhancing the warming of the planet. Modelling the radiative properties of carbon particles is a hard task and involves many uncertainties arising from the difficulties of accounting for the morphologies and heterogeneous chemical composition of the particles. This study aims to compare two ways of modelling the optical properties of aerosols simulated by a chemical transport model. The first method models the particles as externally mixed homogeneous spheres. This is a simple model that is particularly easy to use in data assimilation methods, since the optics model is linear. The second method involves a core-shell internal mixture of soot, where sulphate, nitrate, ammonia, organic carbon, sea salt, and water are contained in the shell. However, by contrast to previously used core-shell models, only part of the carbon is concentrated in the core, while the remaining part is homogeneously mixed with the shell. The chemical transport model (CTM) simulations are done regionally over Europe with the Multiple-scale Atmospheric Transport and CHemistry (MATCH) model, developed by the Swedish Meteorological and Hydrological Institute (SMHI). The MATCH model was run with both an aerosol dynamics module, called SALSA, and with a regular "bulk" approach, i.e., a mass transport model without aerosol dynamics. Two events from 2007 are used in the analysis, one with high (22/12-2007) and one with low (22/6-2007) levels of elemental carbon (EC) over Europe. The results of the study help to assess the

  5. Theoretical kinetic studies of models for binding myosin subfragment-1 to regulated actin: Hill model versus Geeves model.

    PubMed Central

    Chen, Y; Yan, B; Chalovich, J M; Brenner, B

    2001-01-01

    It was previously shown that a one-dimensional Ising model could successfully simulate the equilibrium binding of myosin S1 to regulated actin filaments (T. L. Hill, E. Eisenberg and L. Greene, Proc. Natl. Acad. Sci. U.S.A. 77:3186-3190, 1980). However, the time course of myosin S1 binding to regulated actin was thought to be incompatible with this model, and a three-state model was subsequently developed (D. F. McKillop and M. A. Geeves, Biophys. J. 65:693-701, 1993). A quantitative analysis of the predicted time course of myosin S1 binding to regulated actin, however, was never done for either model. Here we present the procedure for the theoretical evaluation of the time course of myosin S1 binding for both models and then show that 1) the Hill model can predict the "lag" in the binding of myosin S1 to regulated actin that is observed in the absence of Ca++ when S1 is in excess of actin, and 2) both models generate very similar families of binding curves when [S1]/[actin] is varied. This result shows that, just based on the equilibrium and pre-steady-state kinetic binding data alone, it is not possible to differentiate between the two models. Thus, the model of Hill et al. cannot be ruled out on the basis of existing pre-steady-state and equilibrium binding data. Physical mechanisms underlying the generation of the lag in the Hill model are discussed. PMID:11325734

  6. Modelling land use change with generalized linear models--a multi-model analysis of change between 1860 and 2000 in Gallatin Valley, Montana.

    PubMed

    Aspinall, Richard

    2004-08-01

    This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period-specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time, which presents challenges for validation and testing of empirical models. The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, and (iii) that models can be developed
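
    The workflow the abstract describes, fitting several candidate logistic regressions and ranking them with an information criterion, can be sketched compactly. In the Python sketch below, the predictors, coefficients, and candidate sets are synthetic stand-ins for the paper's GIS layers; the AIC comparison via statsmodels illustrates only the model selection step:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1000
      dist_road = rng.exponential(2.0, n)       # km to nearest road (hypothetical)
      elevation = rng.normal(1500.0, 200.0, n)  # m (hypothetical)
      dist_town = rng.exponential(10.0, n)      # km to town (hypothetical)
      logit = 1.0 - 0.8 * dist_road - 0.002 * (elevation - 1500.0)
      housing = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

      # Competing hypotheses about drivers, expressed as predictor sets.
      candidates = {
          "access only":      [dist_road],
          "access + terrain": [dist_road, elevation],
          "full":             [dist_road, elevation, dist_town],
      }
      for name, cols in candidates.items():
          X = sm.add_constant(np.column_stack(cols))
          fit = sm.Logit(housing, X).fit(disp=0)
          print(f"{name:18s} AIC = {fit.aic:.1f}")  # smaller AIC = better support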

  7. The EMEP MSC-W chemical transport model - Part 1: Model description

    NASA Astrophysics Data System (ADS)

    Simpson, D.; Benedictow, A.; Berge, H.; Bergström, R.; Emberson, L. D.; Fagerli, H.; Hayman, G. D.; Gauss, M.; Jonson, J. E.; Jenkin, M. E.; Nyíri, A.; Richter, C.; Semeena, V. S.; Tsyro, S.; Tuovinen, J.-P.; Valdebenito, Á.; Wind, P.

    2012-02-01

    The Meteorological Synthesizing Centre-West (MSC-W) of the European Monitoring and Evaluation Programme (EMEP) has been performing model calculations in support of the Convention on Long Range Transboundary Air Pollution (CLRTAP) for more than 30 years. The EMEP MSC-W chemical transport model is still one of the key tools within European air pollution policy assessments. Traditionally, the EMEP model has covered all of Europe with a resolution of about 50 × 50 km², extending vertically from ground level to the tropopause (100 hPa). The model has undergone substantial development in recent years, and is now applied on scales ranging from local (ca. 5 km grid size) to global (1 degree resolution). The model is used to simulate photo-oxidants and both inorganic and organic aerosols. In 2008 the EMEP model was released for the first time as public domain code, along with all required input data for model runs for one year. Since then, many changes have been made to the model physics and input data. The second release of the EMEP MSC-W model became available in mid 2011, and a new release is targeted for early 2012. This publication is intended to document this third release of the EMEP MSC-W model. The model formulations are given, along with details of the input datasets used, and brief background is presented on some of the choices made in the formulation. The model code itself is available at www.emep.int, along with the data required to run for a full year over Europe.

  8. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models, and between the HUST04 and HUST05 models, are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission, with a range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main source of stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in a future mission, geophysical model error will be the main source of stripe error, which will limit the accuracy and spatial resolution of temporal gravity models. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for future missions.

  9. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
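
    The flavour of such a scoring system can be conveyed with a small function that grades a model's monthly time series against observations on the four aspects listed above. The metrics below are simplified assumptions made for illustration; ILAMB's actual formulas, normalizations, and weighting differ:

      import numpy as np

      def relative_score(model, obs):
          """Map relative error to (0, 1]; 1 is a perfect match."""
          err = np.abs(np.asarray(model) - np.asarray(obs)) / (np.abs(obs) + 1e-12)
          return float(np.exp(-np.mean(err)))

      def benchmark(model_monthly, obs_monthly):
          clim_m = model_monthly.reshape(-1, 12).mean(0)  # mean seasonal cycle
          clim_o = obs_monthly.reshape(-1, 12).mean(0)
          ann_m = model_monthly.reshape(-1, 12).mean(1)   # annual means
          ann_o = obs_monthly.reshape(-1, 12).mean(1)
          yrs = np.arange(len(ann_m))
          return {
              "mean state":     relative_score(model_monthly.mean(), obs_monthly.mean()),
              "seasonal cycle": relative_score(clim_m, clim_o),
              "interannual":    relative_score(ann_m.std(), ann_o.std()),
              "trend":          relative_score(np.polyfit(yrs, ann_m, 1)[0],
                                               np.polyfit(yrs, ann_o, 1)[0]),
          }

      t = np.arange(120)                                  # ten years, monthly
      obs = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12) + 0.010 * t
      mod = 5.5 + 1.6 * np.sin(2 * np.pi * t / 12) + 0.008 * t
      print(benchmark(mod, obs))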

  10. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
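
    The "explode and fit" recipe is easy to demonstrate. The following sketch uses synthetic data and Python's statsmodels rather than the paper's SAS macro, and omits the random frailty term (which would enter as a cluster-level random intercept); it splits each subject's follow-up into the baseline hazard pieces and fits a Poisson GLM with a log-exposure offset, recovering the covariate's log-hazard ratio:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 400
      x = rng.binomial(1, 0.5, n)                # binary covariate
      time = rng.exponential(np.exp(-0.7 * x))   # true log-hazard ratio 0.7
      event = (time < 2.0).astype(int)           # administrative censoring at t = 2
      time = np.minimum(time, 2.0)

      # "Explode" each subject into one row per hazard piece entered.
      cuts = [0.0, 0.5, 1.0, 2.0]                # 3 pieces for the baseline hazard
      rows = []
      for ti, ei, xi in zip(time, event, x):
          for a, b in zip(cuts[:-1], cuts[1:]):
              if ti <= a:
                  break
              rows.append({"piece": f"{a}-{b}", "x": xi,
                           "d": int(ei and ti <= b),    # event in this piece?
                           "exposure": min(ti, b) - a})
      df = pd.DataFrame(rows)

      # Poisson GLM with log-exposure offset == piecewise-exponential model.
      fit = smf.glm("d ~ C(piece) + x", data=df, family=sm.families.Poisson(),
                    offset=np.log(df["exposure"])).fit()
      print(fit.params["x"])                     # estimate near the true 0.7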

  11. MODEL VERSION CONTROL FOR GREAT LAKES MODELS ON UNIX SYSTEMS

    EPA Science Inventory

    Scientific results of the Lake Michigan Mass Balance Project, in which atrazine was measured and modeled, were presented. The presentation also covered the model version control system, which has been used for models at Grosse Ile for approximately a decade and contains various version...

  12. Geometrical model for DBMS: an experimental DBMS using IBM solid modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, D.E.D.L.

    1985-01-01

    This research presents a new model for data base management systems (DBMS). The new model, Geometrical DBMS, is based on using solid modelling technology in designing and implementing DBMS. The Geometrical DBMS is implemented using the IBM solid modelling Geometric Design Processor (GDP). Built on computer-graphics concepts, Geometrical DBMS is indeed a unique model. Traditionally, researchers start with one of the existing DBMS models and then put a graphical front end on it. In Geometrical DBMS, the graphical aspect of the model is not an alien concept tailored to the model but is, as a matter of fact, the atom around which the model is designed. The main idea in Geometrical DBMS is to allow the user and the system to refer to and manipulate data items as solid objects in 3D space, and to represent a record as a group of logically related solid objects. In Geometrical DBMS, a hierarchical structure is used to represent the data relations and the user sees the data as a group of arrays; yet, for the user and the system together, the data structure is a multidimensional tree.

  13. AIDS Epidemiological models

    NASA Astrophysics Data System (ADS)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].
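
    A minimal example of the kind of compartmental HIV/AIDS model such papers survey is a staged-progression ODE system integrated with SciPy. All parameter values below are illustrative assumptions, not taken from the paper or from Isham's review:

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, mu, nu, delta = 0.30, 0.02, 0.10, 1.0  # transmission, death, progression, AIDS mortality

      def rhs(t, y):
          s, i, a = y                        # susceptible, HIV-infected, AIDS
          births = mu * (s + i + a)          # keep the population roughly closed
          new_inf = beta * s * i / (s + i)   # proportionate mixing; AIDS stage assumed inactive
          return [births - new_inf - mu * s,
                  new_inf - (mu + nu) * i,
                  nu * i - (mu + delta) * a]

      sol = solve_ivp(rhs, (0.0, 50.0), [0.99, 0.01, 0.0],
                      t_eval=np.linspace(0.0, 50.0, 11))
      print(sol.y[1])                        # HIV prevalence over five decades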

  14. Equivalent model and power flow model for electric railway traction network

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-05-01

    An equivalent model of the Cable Traction Network (CTN) considering the distributed capacitance effect of the cable system is proposed. The model takes two forms: a 110 kV-side model and a 27.5 kV-side model. The 110 kV-side equivalent model can be used to calculate the power supply capacity of the CTN; the 27.5 kV-side equivalent model can be used to solve for the voltage of the catenary. Based on the equivalent simplified model of the CTN, the power flow model of the CTN, which involves the reactive power compensation coefficient and the interaction of voltage and current, is derived.

  15. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks

    PubMed Central

    Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina

    2018-01-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699

  16. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    PubMed

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  17. Animation Augmented Reality Book Model (AAR Book Model) to Enhance Teamwork

    ERIC Educational Resources Information Center

    Chujitarom, Wannaporn; Piriyasurawong, Pallop

    2017-01-01

    This study aims to synthesize an Animation Augmented Reality Book Model (AAR Book Model) to enhance teamwork and to assess the AAR Book Model to enhance teamwork. The samples are five specialists: one animation specialist, two communication and information technology specialists, and two teaching model design specialists, selected by…

  18. ENSO Simulation in Coupled Ocean-Atmosphere Models: Are the Current Models Better?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    AchutaRao, K; Sperber, K R

    Maintaining a multi-model database over a generation or more of model development provides an important framework for assessing model improvement. Using control integrations, we compare the simulation of the El Nino/Southern Oscillation (ENSO), and its extratropical impact, in models developed for the 2007 Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report with models developed in the late 1990's (the so-called Coupled Model Intercomparison Project-2 [CMIP2] models). The IPCC models tend to be more realistic in representing the frequency with which ENSO occurs, and they are better at locating enhanced temperature variability over the eastern Pacific Ocean. When compared with reanalyses, the IPCC models have larger pattern correlations of tropical surface air temperature than do the CMIP2 models during the boreal winter peak phase of El Nino. However, for sea-level pressure and precipitation rate anomalies, a clear separation in performance between the two vintages of models is not as apparent. The strongest improvement occurs for the modeling groups whose CMIP2 model tended to have the lowest pattern correlations with observations. This has been checked by subsampling the multi-century IPCC simulations in a manner consistent with the single 80-year time segment available from CMIP2. Our results suggest that multi-century integrations may be required to statistically assess model improvement of ENSO. The quality of the El Nino precipitation composite is directly related to the fidelity of the boreal winter precipitation climatology, highlighting the importance of reducing systematic model error. Over North America, distinct improvement of El Nino-forced boreal winter surface air temperature, sea-level pressure, and precipitation rate anomalies occurs in the IPCC models. This improvement is directly proportional to the skill of the tropical El Nino-forced precipitation anomalies.

  19. Predictive Capability Maturity Model for computational modeling and simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  20. Identification of walking human model using agent-based modelling

    NASA Astrophysics Data System (ADS)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied-structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with an average of μ = 2.85 Hz and standard deviation of σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces, and different mechanisms of human-structure and human-environment interaction at the same time.
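
    Because the identified parameters are reported as distributions, instantiating a synthetic crowd of SDOF pedestrians is straightforward. The sketch below samples from the distributions quoted above; the 75 kg modal mass is an added assumption, since the abstract does not state the mass parameter:

      import numpy as np

      rng = np.random.default_rng(7)
      n_people = 100
      f_n = rng.normal(2.85, 0.34, n_people)     # natural frequency, Hz (paper's fit)
      zeta = rng.normal(0.295, 0.047, n_people)  # damping ratio (paper's fit)
      m = np.full(n_people, 75.0)                # assumed SDOF mass, kg

      k = m * (2.0 * np.pi * f_n) ** 2           # stiffness from f_n = sqrt(k/m) / 2*pi
      c = 2.0 * zeta * np.sqrt(k * m)            # damping from zeta = c / (2*sqrt(k*m))
      print(k.mean(), c.mean())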

  1. Experimental & Numerical Modeling of Non-combusting Model Firebrands' Transport

    NASA Astrophysics Data System (ADS)

    Tohidi, Ali; Kaye, Nigel

    2016-11-01

    Fire spotting is one of the major mechanisms of wildfire spread. The three phases of this phenomenon are firebrand formation and break-off from burning vegetation, lofting and downwind transport of firebrands through the velocity field of the wildfire, and spot fire ignition upon landing. The lofting and downwind transport phase is modeled by conducting large-scale wind tunnel experiments. Non-combusting rod-like model firebrands with different aspect ratios are released within the velocity field of a jet in a boundary layer cross-flow that approximates the wildfire velocity field. Characteristics of the firebrand dispersion are quantified by capturing the full trajectory of the model firebrands using the developed image processing algorithm. The results show that the lofting height has a direct impact on the maximum travel distance of the model firebrands. Also, the experimental results are utilized for validation of a highly scalable coupled stochastic and parametric firebrand flight model that couples the LES-resolved velocity field of a jet-in-nonuniform-cross-flow (JINCF) with a 3D fully deterministic 6-degrees-of-freedom debris transport model. The validation results show that the developed numerical model is capable of estimating average statistics of the firebrands' flight. The authors would like to thank the National Science Foundation for support under Grant No. 1200560. The presenter (Ali Tohidi) would like to thank Dr. Michael Gollner from the University of Maryland College Park for the conference participation support.

  2. Modelling and model predictive control for a bicycle-rider system

    NASA Astrophysics Data System (ADS)

    Chu, T. D.; Chen, C. K.

    2018-01-01

    This study proposes a bicycle-rider control model based on model predictive control (MPC). First, a bicycle-rider model with leaning motion of the rider's upper body is developed. The initial simulation data of the bicycle rider are then used to identify the linear model of the system in state-space form for MPC design. Control characteristics of the proposed controller are assessed by simulating the roll-angle tracking control. In this riding task, the MPC uses steering and leaning torques as the control inputs to control the bicycle along a reference roll angle. The simulation results in different cases have demonstrated the applicability and performance of the MPC for bicycle-rider modelling.
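
    A receding-horizon controller of the kind described reduces to solving a small quadratic program at each step. The Python/cvxpy sketch below tracks a reference roll angle for a generic identified linear model x+ = A x + B u with steering and leaning torque inputs; the matrices, limits, and weights are placeholders, not the identified bicycle-rider model:

      import numpy as np
      import cvxpy as cp

      A = np.array([[1.0, 0.10], [0.30, 1.0]])  # state: [roll angle, roll rate]
      B = np.array([[0.0, 0.0], [0.05, 0.02]])  # inputs: [steer torque, lean torque]
      N = 20                                    # prediction horizon
      x_ref = np.array([0.10, 0.0])             # reference roll angle (rad)

      x0 = np.zeros(2)
      for step in range(5):                     # closed-loop iterations
          x = cp.Variable((2, N + 1))
          u = cp.Variable((2, N))
          cost, cons = 0, [x[:, 0] == x0]
          for k in range(N):
              cost += (cp.sum_squares(x[:, k + 1] - x_ref)
                       + 0.01 * cp.sum_squares(u[:, k]))
              cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                       cp.abs(u[:, k]) <= 5.0]  # torque limits
          cp.Problem(cp.Minimize(cost), cons).solve()
          x0 = A @ x0 + B @ u.value[:, 0]       # apply only the first move
          print(step, round(float(x0[0]), 4))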

  3. One-month validation of the Space Weather Modeling Framework geospace model

    NASA Astrophysics Data System (ADS)

    Haiducek, J. D.; Welling, D. T.; Ganushkina, N. Y.; Morley, S.; Ozturk, D. S.

    2017-12-01

    The Space Weather Modeling Framework (SWMF) geospace model consists of a magnetohydrodynamic (MHD) simulation coupled to an inner magnetosphere model and an ionosphere model. This provides a predictive capability for magnetospheric dynamics, including ground-based and space-based magnetic fields, geomagnetic indices, currents and densities throughout the magnetosphere, cross-polar cap potential, and magnetopause and bow shock locations. The only inputs are solar wind parameters and F10.7 radio flux. We have conducted a rigorous validation effort consisting of a continuous simulation covering the month of January 2005 using three different model configurations. This provides a relatively large dataset for assessment of the model's predictive capabilities. We find that the model does an excellent job of predicting the Sym-H index, and performs well at predicting Kp and CPCP during active times. Dayside magnetopause and bow shock positions are also well predicted. The model tends to over-predict Kp and CPCP during quiet times and to under-predict the magnitude of AL during disturbances. The model under-predicts the magnitude of night-side geosynchronous Bz, and over-predicts the radial distance to the flank magnetopause and bow shock. This suggests that the model over-predicts stretching of the magnetotail and the overall size of the magnetotail. With the exception of the AL index and the nightside geosynchronous magnetic field, we find the results to be insensitive to grid resolution.

  4. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753

  5. Flying with the winds: differential migration strategies in relation to winds in moth and songbirds.

    PubMed

    Åkesson, Susanne

    2016-01-01

    The gamma Y moth chooses to migrate in stronger winds than songbirds do, enabling fast transport to distant breeding sites, but with lower precision in orientation, as the moth allows itself to be drifted by the winds. Photo: Ian Woiwod. In Focus: Chapman, J.R., Nilsson, C., Lim, K.S., Bäckman, J., Reynolds, D.R. & Alerstam, T. (2015) Adaptive strategies in nocturnally migrating insects and songbirds: contrasting responses to winds. Journal of Animal Ecology, in press. Insects and songbirds regularly migrate long distances across continents and seas. During these nocturnal migrations, they are exposed to a fluid medium, the air, in which they transport themselves by flight at speeds similar to those at which the winds may carry them. It is crucial for an animal to select the most favourable flight conditions relative to winds to minimize the distance flown on a given amount of fuel and to avoid hazardous situations. Chapman et al. (2015a) showed contrasting strategies: moths initiate migration predominantly under tailwind conditions, allowing themselves to drift to a larger extent and gain ground speed compared to nocturnal songbird migrants. The songbirds use more variable flight strategies in relation to winds, where they sometimes allow themselves to drift and at other times compensate for wind drift. This study shows how insects and birds have differentially adapted to migration in relation to winds, strongly dependent on their own flight capability, with higher flexibility enabling fine-tuned responses to keep a time programme and reach a goal in songbirds compared to insects. © 2015 The Author. Journal of Animal Ecology © 2015 British Ecological Society.

  6. Decelerations of Parachute Opening Shock in Skydivers.

    PubMed

    Gladh, Kristofer; Lo Martire, Riccardo; Äng, Björn O; Lindholm, Peter; Nilsson, Jenny; Westman, Anton

    2017-02-01

    High prevalence of neck pain among skydivers is related to parachute opening shock (POS) exposure, but few investigations of POS deceleration have been made. Existing data incorporate equipment movements, limiting their representativeness of skydiver deceleration. This study aims to describe POS decelerations and compare human-attached with equipment-attached data. Wearing two triaxial accelerometers placed on the skydiver (neck-sensor) and equipment (rig-sensor), 20 participants made 2 skydives each. Due to technical issues, data from 35 skydives made by 19 participants were collected. Missing data were replaced using data substitution techniques. Acceleration axes were defined as posterior to anterior (+ax), lateral right (+ay), and caudal to cranial (+az). Deceleration magnitude [amax (G)] and jerks (G·s⁻¹) during POS were analyzed. Two distinct phases related to skydiver positioning and acceleration direction were observed: 1) the x-phase (characterized by -ax, rotating the skydiver); and 2) the z-phase (characterized by +az, skydiver vertically oriented). Compared to the rig-sensor, the neck-sensor yielded lower amax (3.16 G vs. 6.96 G) and jerk (56.3 G·s⁻¹ vs. 149.0 G·s⁻¹) during the x-phase, and lower jerk (27.7 G·s⁻¹ vs. 54.5 G·s⁻¹) during the z-phase. The identified phases during POS should be considered in future neck pain preventive strategies. Accelerometer data differed, suggesting human-placed accelerometry to be more valid for measuring human acceleration. Gladh K, Lo Martire R, Äng BO, Lindholm P, Nilsson J, Westman A. Decelerations of parachute opening shock in skydivers. Aerosp Med Hum Perform. 2017; 88(2):121-127.

  7. Hey Teacher, Don't Leave Them Kids Alone: Action Is Better for Memory than Reading.

    PubMed

    Hainselin, Mathieu; Picard, Laurence; Manolli, Patrick; Vankerkore-Candas, Sophie; Bourdin, Béatrice

    2017-01-01

    There is no consensus on how the enactment effect (EE), although it is robust, enhances memory. Researchers are currently investigating the cognitive processes underlying this effect, mostly during adulthood; the link between EE and crucial functions identified in adulthood, such as episodic memory and the binding process, remains elusive. Therefore, this study aims to verify the existence of EE in 6- to 10-year-olds and to assess cognitive functions potentially linked to this effect in order to shed light on the mechanisms underlying the EE during childhood. Thirty-five children (15 second graders and 20 fifth graders) were included in this study. They encoded 24 action phrases from a protocol adapted from Hainselin et al. (2014). Encoding occurred under four conditions: Verbal Task, Listening Task, Experimenter-Performed Task, and Subject-Performed Task. Memory performance was assessed for free and cued recall, as well as source memory abilities. ANOVAs were conducted to explore age-related effects on the different scores according to encoding conditions. Correlations between EE scores (Subject-Performed Task/Listening Task) and binding memory scores (short-term binding and episodic memory) were run. Both groups benefited from EE. However, in both groups, performance did not significantly differ between the Subject-Performed Task and the Experimenter-Performed Task. A positive correlation was found between EE and episodic memory score for second graders and a moderate negative correlation was found between EE and binding scores for fifth graders. Our results confirm the existence of EE in 6- to 10-year-olds, but they do not support the multimodal theory (Engelkamp, 2001) or the "glue" theory (Kormi-Nouri and Nilsson, 2001). This suggests instead that episodic memory might not underlie EE during early childhood.

  8. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  9. Modeling for Battery Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfill these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanisms has advanced. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in developing the model, and as a result of such approximations it cannot represent aging well. The latter type has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures effects of aging, and is computationally efficient
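
    As a concrete instance of the "empirical" class mentioned above, the sketch below implements a single-RC electrical-circuit-equivalent discharge model with invented parameter values; it is not the electrochemistry-based model the abstract reports, only an illustration of why such models are cheap to evaluate online.

        # Minimal single-RC Thevenin equivalent-circuit battery model (invented
        # parameters; illustrative of the "empirical" model class only).
        def simulate_eod(i_load=2.0, capacity_ah=3.0, dt=1.0, v_cutoff=3.0):
            r0, r1, c1 = 0.05, 0.02, 2000.0       # ohmic resistance and RC pair (assumed)
            soc, v_c1, t = 1.0, 0.0, 0.0
            ocv = lambda s: 3.2 + 0.9 * s         # crude linear open-circuit voltage (V)
            while soc > 0.0:
                soc -= i_load * dt / (capacity_ah * 3600.0)    # coulomb counting
                v_c1 += dt * (i_load / c1 - v_c1 / (r1 * c1))  # RC branch dynamics
                v_term = ocv(soc) - i_load * r0 - v_c1         # terminal voltage
                t += dt
                if v_term <= v_cutoff:
                    break
            return t                              # predicted time to end of discharge (s)

        print(f"predicted EOD: {simulate_eod() / 60.0:.1f} min")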

  10. Model Validation Status Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.L. Hardin

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information, which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural

  11. JEDI Natural Gas Model | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    The Jobs and Economic Development Impacts (JEDI) Natural Gas model allows users to estimate economic development impacts from natural gas power generation. Project-specific data should be used to obtain the best estimate of economic development impacts.

  12. Model identification using stochastic differential equation grey-box models in diabetes.

    PubMed

    Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik

    2013-03-01

    The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
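
    The separation of the prediction error described above can be made explicit. In a generic one-state SDE grey-box model (schematic notation, not the paper's actual glucoregulatory equations), the system noise enters the state equation and the measurement noise enters the observation equation:

        dx_t = f(x_t, u_t, \theta)\,dt + \sigma_w\,d\omega_t                 % system (diffusion) noise
        y_k  = h(x_{t_k}, \theta) + e_k, \qquad e_k \sim N(0, \sigma_e^2)    % measurement noise

    Setting \sigma_w = 0 recovers the ODE model, in which all prediction error is lumped into e_k; a fitted \sigma_w significantly greater than zero pinpoints deficiencies in the state dynamics.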

  13. The Space Weather Modeling Framework (SWMF): Models and Validation

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Toth, Gabor; Sokolov, Igor; de Zeeuw, Darren; van der Holst, Bart; Ridley, Aaron; Manchester, Ward, IV

    In the last decade our group at the Center for Space Environment Modeling (CSEM) has developed the Space Weather Modeling Framework (SWMF) that efficiently couples together different models describing the interacting regions of the space environment. Many of these domain models (such as the global solar corona, the inner heliosphere or the global magnetosphere) are based on MHD and are represented by our multiphysics code, BATS-R-US. SWMF is a powerful tool for coupling regional models describing the space environment from the solar photosphere to the bottom of the ionosphere. Presently, SWMF contains over a dozen components: the solar corona (SC), eruptive event generator (EE), inner heliosphere (IH), outer heliosphere (OH), solar energetic particles (SP), global magnetosphere (GM), inner magnetosphere (IM), radiation belts (RB), plasmasphere (PS), ionospheric electrodynamics (IE), polar wind (PW), upper atmosphere (UA) and lower atmosphere (LA). This talk will present an overview of SWMF, new results obtained with improved physics, as well as some validation studies.

  14. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have predominantly been based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
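
    To make the heuristic rules concrete, the sketch below contrasts conjunctive screening (every attribute must clear its threshold) with a lexicographic ranking (options ordered by the most important attribute first). Attribute values and thresholds are invented, and the paper's threshold-heterogeneity extension is not shown.

        # Hypothetical two-attribute store choice: smaller distance is better,
        # larger variety is better (illustrative values only).
        stores = {"A": {"distance": 0.2, "variety": 0.9},
                  "B": {"distance": 0.8, "variety": 0.6}}
        cutoffs = {"distance": 0.5, "variety": 0.5}   # conjunctive thresholds (assumed)

        def conjunctive_ok(opt):
            # Accept only if every attribute clears its threshold.
            return (opt["distance"] <= cutoffs["distance"]
                    and opt["variety"] >= cutoffs["variety"])

        accepted = [name for name, opt in stores.items() if conjunctive_ok(opt)]

        # Lexicographic: order by distance alone, using variety only to break ties.
        ranked = sorted(stores, key=lambda n: (stores[n]["distance"], -stores[n]["variety"]))
        print(accepted, ranked)    # ['A'] ['A', 'B']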

  15. Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models.

    PubMed

    Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B

    2015-01-01

    The Modular Program Constructor (MPC) is an open-source Java-based modeling utility, built upon JSim's Mathematical Modeling Language (MML) (http://www.physiome.org/jsim/), that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.

  16. Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models

    PubMed Central

    Liu, Ziyue; Cappola, Anne R.; Crofford, Leslie J.; Guo, Wensheng

    2013-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of the state space form, the resulting models not only can handle complex individual profiles, but can also incorporate complex relationships between two hormones, including both concurrent and feedback relationships. Estimation and inference are based on the marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls. PMID:24729646
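
    For orientation, a linear Gaussian state space model for a single hormone profile has the generic form below (a schematic; the paper's hierarchical and bivariate structure adds population-average and subject-specific components and cross-hormone terms):

        x_{t+1} = F x_t + w_t, \qquad w_t \sim N(0, Q)    % state (evolution) equation
        y_t     = H x_t + v_t, \qquad v_t \sim N(0, R)    % observation equation

    The bivariate model stacks two such systems and encodes the hypothesized concurrent or feedback relationship in the off-diagonal blocks of F, with the Kalman filter and smoother supplying the likelihood.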

  17. The VSGB 2.0 Model: A Next Generation Energy Model for High Resolution Protein Structure Modeling

    PubMed Central

    Li, Jianing; Abel, Robert; Zhu, Kai; Cao, Yixiang; Zhao, Suwen; Friesner, Richard A.

    2011-01-01

    A novel energy model (VSGB 2.0) for high resolution protein structure modeling is described, which features an optimized implicit solvent model as well as physics-based corrections for hydrogen bonding, π-π interactions, self-contact interactions and hydrophobic interactions. Parameters of the VSGB 2.0 model were fit to a crystallographic database of 2239 single side chain and 100 11–13 residue loop predictions. Combined with an advanced method of sampling and a robust algorithm for protonation state assignment, the VSGB 2.0 model was validated by predicting 115 super long loops up to 20 residues. Despite the dramatically increasing difficulty in reconstructing longer loops, a high accuracy was achieved: all of the lowest energy conformations have global backbone RMSDs better than 2.0 Å from the native conformations. Average global backbone RMSDs of the predictions are 0.51, 0.63, 0.70, 0.62, 0.80, 1.41, and 1.59 Å for 14, 15, 16, 17, 18, 19, and 20 residue loop predictions, respectively. When these results are corrected for possible statistical bias as explained in the text, the average global backbone RMSDs are 0.61, 0.71, 0.86, 0.62, 1.06, 1.67, and 1.59 Å. Given the precision and robustness of the calculations, we believe that the VSGB 2.0 model is suitable to tackle “real” problems, such as biological function modeling and structure-based drug discovery. PMID:21905107

  18. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics

    PubMed Central

    Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E

    2017-01-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052

  19. Comparison between fully distributed model and semi-distributed model in urban hydrology modeling

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Giangola-Murzyn, Agathe; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe

    2013-04-01

    Water management in urban areas is becoming more and more complex, especially because of a rapid increase of impervious areas. There will also possibly be an increase of extreme precipitation due to climate change. The aims of the devices implemented to handle the large amount of water generated by urban areas, such as storm water retention basins, are usually twofold: ensure pluvial flood protection and water depollution. These two aims imply opposite management strategies. To optimize the use of these devices there is a need to implement urban hydrological models and improve fine-scale rainfall estimation, which is the most significant input. In this paper we compare two models and their sensitivity to small-scale rainfall variability on a 2.15 km2 urban area located in the County of Val-de-Marne (South-East of Paris, France). The average impervious coefficient is approximately 34%. In this work two types of models are used. The first one is CANOE, which is semi-distributed. Such models are widely used by practitioners for urban hydrology modeling and urban water management. Indeed, they are easily configurable and the computation time is reduced, but these models do not fully take into account either the variability of the physical properties or the variability of the precipitation. An alternative is to use distributed models that are harder to configure and require a greater computation time, but they enable a deeper analysis (especially at small scales and upstream) of the processes at stake. We used the Multi-Hydro fully distributed model developed at the Ecole des Ponts ParisTech. It is an interacting core between open source software packages, each of them representing a portion of the water cycle in the urban environment. Four heavy rainfall events that occurred between 2009 and 2011 are analyzed. The data come from the Météo-France radar mosaic and the resolution is 1 km in space and 5 min in time. The closest radar of the Météo-France network is

  20. Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition

    PubMed Central

    Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.

    2015-01-01

    In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601

  1. Distinguishing Antimicrobial Models with Different Resistance Mechanisms via Population Pharmacodynamic Modeling

    PubMed Central

    Jacobs, Matthieu; Grégoire, Nicolas; Couet, William; Bulitta, Jurgen B.

    2016-01-01

    Semi-mechanistic pharmacokinetic-pharmacodynamic (PK-PD) modeling is increasingly used for antimicrobial drug development and optimization of dosage regimens, but systematic simulation-estimation studies to distinguish between competing PD models are lacking. This study compared the ability of static and dynamic in vitro infection models to distinguish between models with different resistance mechanisms and support accurate and precise parameter estimation. Monte Carlo simulations (MCS) were performed for models with one susceptible bacterial population without (M1) or with a resting stage (M2), a one population model with adaptive resistance (M5), models with pre-existing susceptible and resistant populations without (M3) or with (M4) inter-conversion, and a model with two pre-existing populations with adaptive resistance (M6). For each model, 200 datasets of the total bacterial population were simulated over 24h using static antibiotic concentrations (256-fold concentration range) or over 48h under dynamic conditions (dosing every 12h; elimination half-life: 1h). Twelve-hundred random datasets (each containing 20 curves for static or four curves for dynamic conditions) were generated by bootstrapping. Each dataset was estimated by all six models via population PD modeling to compare bias and precision. For M1 and M3, most parameter estimates were unbiased (<10%) and had good imprecision (<30%). However, parameters for adaptive resistance and inter-conversion for M2, M4, M5 and M6 had poor bias and large imprecision under static and dynamic conditions. For datasets that only contained viable counts of the total population, common statistical criteria and diagnostic plots did not support sound identification of the true resistance mechanism. Therefore, it seems advisable to quantify resistant bacteria and characterize their MICs and resistance mechanisms to support extended simulations and translate from in vitro experiments to animal infection models and
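
    The population structure underlying models like M3 can be sketched as a pair of coupled growth-kill differential equations. The sketch below uses invented parameters and is only a schematic of that model class, not the authors' estimated models.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Two pre-existing populations, susceptible (S) and resistant (R), sharing a
        # capacity limit; drug kill saturates with concentration (invented parameters).
        def growth_kill(t, y, conc):
            s, r = y
            cap = 1.0 - (s + r) / 1e9                  # shared capacity term
            kill_s = 4.0 * conc / (1.0 + conc)         # assumed EC50 = 1 mg/L for S
            kill_r = 4.0 * conc / (16.0 + conc)        # assumed EC50 = 16 mg/L for R
            return [s * (2.0 * cap - kill_s), r * (2.0 * cap - kill_r)]

        sol = solve_ivp(growth_kill, (0.0, 24.0), [1e6, 1e2], args=(4.0,), max_step=0.1)
        print(f"total count at 24 h: {sol.y.sum(axis=0)[-1]:.3g}")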

  2. Agricultural model intercomparison and improvement project: Overview of model intercomparisons

    USDA-ARS?s Scientific Manuscript database

    Improvement of crop simulation models to better estimate growth and yield is one of the objectives of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The overall goal of AgMIP is to provide an assessment of crop model through rigorous intercomparisons and evaluate future clim...

  3. Modeling crop water productivity using a coupled SWAT-MODSIM model

    USDA-ARS?s Scientific Manuscript database

    This study examines the water productivity of irrigated wheat and maize yields in Karkheh River Basin (KRB) in the semi-arid region of Iran using a coupled modeling approach consisting of the hydrological model (SWAT) and the river basin water allocation model (MODSIM). Dynamic irrigation requireme...

  4. Investigation of prospective teachers' knowledge and understanding of models and modeling and their attitudes towards the use of models in science education

    NASA Astrophysics Data System (ADS)

    Aktan, Mustafa B.

    The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching, through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed using an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards the use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. Implications for the development of

  5. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
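
    The reconstruction and flexibility computations rest on standard Gaussian conditioning. In a PPCA model the shape vector is Gaussian, x \sim N(\mu, WW^\top + \sigma^2 I); fixing the observed part x_a of a partition x = (x_a, x_b) gives the closed-form conditional (generic formulas, not the authors' notation):

        x_b \mid x_a \sim N\!\left(\mu_b + \Sigma_{ba}\Sigma_{aa}^{-1}(x_a - \mu_a),\;
                                   \Sigma_{bb} - \Sigma_{ba}\Sigma_{aa}^{-1}\Sigma_{ab}\right),
        \qquad \Sigma = WW^\top + \sigma^2 I

    The conditional mean is the reconstruction of the full shape, and the conditional covariance is exactly the "remaining flexibility" that the method visualizes.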

  6. Modeling rainfall-runoff relationship using multivariate GARCH model

    NASA Astrophysics Data System (ADS)

    Modarres, R.; Ouarda, T. B. M. J.

    2013-08-01

    The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful to estimate the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived drastic increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process, with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables such as drainage area, topography, soil moisture and groundwater fluctuations on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
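
    For reference, a bivariate CCC-GARCH(1,1) for the rainfall (i = 1) and runoff (i = 2) residual series takes roughly the following form (generic textbook notation, not the paper's exact parameterization):

        h_{i,t} = \omega_i + \alpha_i \varepsilon_{i,t-1}^2 + \beta_i h_{i,t-1}, \qquad i = 1, 2
        H_t = D_t R D_t, \qquad D_t = \mathrm{diag}\!\left(\sqrt{h_{1,t}}, \sqrt{h_{2,t}}\right)

    Here \alpha_i captures short-run persistence, \beta_i long-run persistence, and R is the constant conditional correlation matrix; the diagonal VECH variant instead lets each element of H_t follow its own GARCH-type recursion.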

  7. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  8. Efficient polarimetric BRDF model.

    PubMed

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in e.g. the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  9. MULTIVARIATE RECEPTOR MODELS AND MODEL UNCERTAINTY. (R825173)

    EPA Science Inventory

    Abstract

    Estimation of the number of major pollution sources, the source composition profiles, and the source contributions are the main interests in multivariate receptor modeling. Due to lack of identifiability of the receptor model, however, the estimation cannot be...

  10. Global and regional ecosystem modeling: comparison of model outputs and field measurements

    NASA Astrophysics Data System (ADS)

    Olson, R. J.; Hibbard, K.

    2003-04-01

    The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements, but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. Department of

  11. Level-Specific Evaluation of Model Fit in Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Ryu, Ehri; West, Stephen G.

    2009-01-01

    In multilevel structural equation modeling, the "standard" approach to evaluating the goodness of model fit has a potential limitation in detecting the lack of fit at the higher level. Level-specific model fit evaluation can address this limitation and is more informative in locating the source of lack of model fit. We proposed level-specific test…

  12. Relating Factor Models for Longitudinal Data to Quasi-Simplex and NARMA Models

    ERIC Educational Resources Information Center

    Rovine, Michael J.; Molenaar, Peter C. M.

    2005-01-01

    In this article we show that the one-factor model can be rewritten as a quasi-simplex model. Using this result along with addition theorems from time series analysis, we describe a common general model, the nonstationary autoregressive moving average (NARMA) model, that includes, as a special case, any latent variable model with continuous indicators…

  13. Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model

    NASA Astrophysics Data System (ADS)

    Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus

    2017-12-01

    The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s Model in a mathematical model. Voice of customer data in QFD were collected using a questionnaire, and the questionnaire was developed based on Kano’s model. Then operational research methodology was applied to build the objective function and constraints of the mathematical model. The relationship between the voice of customer and the engineering characteristics was modelled using a linear regression model. The output of the mathematical model is the detail of the engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction as well. The result of this model is 62%. The major contribution of this research is to implement the existing mathematical model to integrate QFD and Kano’s Model in the case study of a shoe cabinet.

  14. Modeling Earth's Ring Current Using The CIMI Model

    NASA Astrophysics Data System (ADS)

    Craven, J. D., II; Perez, J. D.; Buzulukova, N.; Fok, M. C. H.

    2015-12-01

    Earth's ring current is a result of the injection of charged particles trapped in the magnetosphere from solar storms. The enhancement of the ring current particles produces magnetic depressions and disturbances to the Earth's magnetic field known as geomagnetic storms, which have been modeled using the comprehensive inner magnetosphere-ionosphere (CIMI) model. The purpose of this model is to identify and understand the physical processes that control the dynamics of the geomagnetic storms. The basic procedure was to use the CIMI model for the simulation of 15 storms since 2009. Some of the storms were run multiple times, but with varying parameters relating to the dynamics of the Earth's magnetic field, particle fluxes, and boundary conditions of the inner-magnetosphere. Results and images were placed in the TWINS online catalog page for further analysis and discussion. Particular areas of interest were extreme storm events. A majority of storms simulated had average DST values of -100 nT; these extreme storms exceeded DST values of -200 nT. The continued use of the CIMI model will increase knowledge of the interactions and processes of the inner-magnetosphere as well as lead to a better understanding of extreme solar storm events for the future advancement of space weather physics.

  15. Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).

    PubMed

    Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco

    2013-10-01

    In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes it inappropriate to attempt rigid standardisation, but greater detail is required concerning workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded on international guidelines and refined following the clinical pathway adopted at local level by a specialized rehabilitation centre. The model describes the organisation of rehabilitation delivery and facilitates the monitoring of recovery during the process. Indeed, system software was developed and tested to support clinicians in the digital administration of clinical scales. The model's flexibility assures easy updating as the process evolves. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.

    PubMed

    Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K

    2017-12-01

    It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results which are focused on four model variants suggest that all models possess the ability to recover simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.

  17. Hierarchical Bass model

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru

    2014-03-01

    We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit iPod sales data and obtain better agreement than with the Bass model.
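
    For context, the standard Bass model that this hierarchical variant extends describes the adopter fraction F(t) by (textbook form):

        \frac{dF(t)}{dt} = \bigl(p + q\,F(t)\bigr)\bigl(1 - F(t)\bigr)

    with innovation coefficient p and imitation coefficient q; the hazard of adoption depends only on the current adopter fraction, which is exactly the memoryless property the proposed hierarchical model relaxes.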

  18. Mixed models approaches for joint modeling of different types of responses.

    PubMed

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects-based model that, in addition, allows for overdispersion. Using two case studies, it is shown that combining random effects to capture association with a further correction for overdispersion can improve the model's fit considerably, and that the resulting models allow researchers to answer questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.

  19. Dynamical Downscaling of NASA/GISS ModelE: Continuous, Multi-Year WRF Simulations

    NASA Astrophysics Data System (ADS)

    Otte, T.; Bowden, J. H.; Nolte, C. G.; Otte, M. J.; Herwehe, J. A.; Faluvegi, G.; Shindell, D. T.

    2010-12-01

    The WRF Model is being used at the U.S. EPA for dynamical downscaling of the NASA/GISS ModelE fields to assess regional impacts of climate change in the United States. The WRF model has been successfully linked to the ModelE fields in their raw hybrid vertical coordinate, and continuous, multi-year WRF downscaling simulations have been performed. WRF will be used to downscale decadal time slices of ModelE for recent past, current, and future climate as the simulations being conducted for the IPCC Fifth Assessment Report become available. This presentation will focus on the sensitivity to interior nudging within the RCM. The use of interior nudging for downscaled regional climate simulations has been somewhat controversial over the past several years but has been recently attracting attention. Several recent studies that have used reanalysis (i.e., verifiable) fields as a proxy for GCM input have shown that interior nudging can be beneficial toward achieving the desired downscaled fields. In this study, the value of nudging will be shown using fields from ModelE that are downscaled using WRF. Several different methods of nudging are explored, and it will be shown that the method of nudging and the choices made with respect to how nudging is used in WRF are critical to balance the constraint of ModelE against the freedom of WRF to develop its own fields.

  20. The FREZCHEM Model

    NASA Astrophysics Data System (ADS)

    Marion, Giles M.; Kargel, Jeffrey S.

    Implementation of the Pitzer approach is through the FREZCHEM (FREEZING CHEMISTRY) model, which is at the core of this work. This model was originally designed to simulate salt chemistries and freezing processes at low temperatures (-54 to 25°C) and 1 atm pressure. Over the years, this model has been broadened to include more chemistries (from 16 to 58 solid phases), a broader temperature range for some chemistries (to 113°C), and incorporation of a pressure dependence (1 to 1000 bars) into the model. Implementation, parameterization, validation, and limitations of the FREZCHEM model are extensively discussed in Chapter 3.