Cameron, M H; Elvik, R
2010-11-01
Nilsson (1981) proposed power relationships connecting changes in traffic speeds with changes in road crashes at various levels of injury severity. Increases in fatal crashes are related to the 4(th) power of the increase in mean speed, increases in serious casualty crashes (those involving death or serious injury) according to the 3(rd) power, and increases in casualty crashes (those involving death or any injury) according to the 2(nd) power. Increases in numbers of crash victims at cumulative levels of injury severity are related to the crash increases plus higher powers predicting the number of victims per crash. These relationships are frequently applied in OECD countries to estimate road trauma reductions resulting from expected speed reductions. The relationships were empirically derived based on speed changes resulting from a large number of rural speed limit changes in Sweden during 1967-1972. Nilsson (2004) noted that there had been very few urban speed limit changes studied to test his power model. This paper aims to test the assumption that the model is equally applicable in all road environments. It was found that the road environment is an important moderator of Nilsson's power model. While Nilsson's model appears satisfactory for rural highways and freeways, the model does not appear to be directly applicable to traffic speed changes on urban arterial roads. The evidence of monotonically increasing powers applicable to changes in road trauma at increasing injury severity levels with changes in mean speed is weak. The estimated power applicable to serious casualties on urban arterial roads was significantly less than that on rural highways, which was also significantly less than that on freeways. Alternative models linking the parameters of speed distributions with road trauma are reviewed and some conclusions reached for their use on urban roads instead of Nilsson's model. Further research is needed on the relationships between serious road trauma and urban speeds. 2010 Elsevier Ltd. All rights reserved.
Spectroscopic factors in the N =20 island of inversion: The Nilsson strong-coupling limit
NASA Astrophysics Data System (ADS)
Macchiavelli, A. O.; Crawford, H. L.; Campbell, C. M.; Clark, R. M.; Cromaz, M.; Fallon, P.; Jones, M. D.; Lee, I. Y.; Richard, A. L.; Salathe, M.
2017-11-01
Spectroscopic factors, extracted from one-neutron knockout and Coulomb dissociation reactions, for transitions from the ground state of 33Mg to the ground-state rotational band in 32Mg, and from 32Mg to low-lying negative-parity states in 31Mg, are interpreted within the rotational model. Associating the ground state of 33Mg and the negative-parity states in 31Mg with the 3/2 [321 ] Nilsson level, the strong coupling limit gives simple expressions that relate the amplitudes (Cj ℓ) of this wave function with the measured cross sections and derived spectroscopic factors (Sj ℓ). To obtain a consistent agreement with the data within this framework, we find that one requires a modified 3/2 [321 ] wave function with an increased contribution from the spherical 2 p3 /2 orbit as compared to a standard Nilsson calculation. This is consistent with the findings of large-scale shell model calculations and can be traced to weak binding effects that lower the energy of low-ℓ orbitals.
Proxy-SU(3) symmetry in heavy deformed nuclei
NASA Astrophysics Data System (ADS)
Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.
2017-06-01
Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.
Strange history: the fall of Rome explained in Hereditas.
Bengtsson, Bengt O
2014-12-01
In 1921 Hereditas published an article on the fall of Rome written by the famous classical scholar Martin P:son Nilsson. Why was a paper on this unexpected topic printed in the newly founded journal? To Nilsson, the demise of the Roman Empire was explained by the "bastardization" occurring between "races" from different parts of the realm. Offspring from mixed couples were of a less stable "type" than their parents, due to the breaking up by recombination of the original hereditary dispositions, which led to a general loss of competence to rule and govern. Thus, the "hardness" of human genes, together with their recombination, was - according to Nilsson - the main cause of the fall of Rome. Nilsson's argument is not particularly convincingly presented. Human "races" are taken to have the same genetic structure as inbred crop strains, and Nilsson believes in a metaphysical unity between the individual and the race to which it belongs. However, in my view, Martin P:son Nilsson and his friend Herman Nilsson-Ehle had wider aims with the article than to explain a historical event. The article can be read as indicating strong support from the classical human sciences to the ambitious new science of genetics. Support is also transferred from genetics to the conservative worldview, where the immutability and inflexibility of the Mendelian genes are used to strengthen the wish for greater stability in politics and life. The strange article in Hereditas can, thus, be read as an early instance in the - still ongoing - tug-of-war between the conservative and the liberal ideological poles over how genetic results best are socially interpreted. © 2015 The Authors.
Angular momentum projection for a Nilsson mean-field plus pairing model
NASA Astrophysics Data System (ADS)
Wang, Yin; Pan, Feng; Launey, Kristina D.; Luo, Yan-An; Draayer, J. P.
2016-06-01
The angular momentum projection for the axially deformed Nilsson mean-field plus a modified standard pairing (MSP) or the nearest-level pairing (NLP) model is proposed. Both the exact projection, in which all intrinsic states are taken into consideration, and the approximate projection, in which only intrinsic states with K = 0 are taken in the projection, are considered. The analysis shows that the approximate projection with only K = 0 intrinsic states seems reasonable, of which the configuration subspace considered is greatly reduced. As simple examples for the model application, low-lying spectra and electromagnetic properties of 18O and 18Ne are described by using both the exact and approximate angular momentum projection of the MSP or the NLP, while those of 20Ne and 24Mg are described by using the approximate angular momentum projection of the MSP or NLP.
Empirical p-n interactions, the synchronized filling of Nilsson orbitals, and emergent collectivity
NASA Astrophysics Data System (ADS)
Cakirli, R. B.
2014-09-01
The onset of collectivity and deformation, changes to the single particle energies and magic numbers and so on are strongly influenced by, for example, proton (p) and neutron (n) interactions inside atomic nuclei. Experimentally, using binding energies (or masses), one can extract an average p-n interaction between the last two protons and the last two neutrons, called δVpn. We have studied δVpn values using calculations of spatial overlaps between p and n Nilsson orbitals, considering different deformations, for the Z= 50-82, N= 82-126 shells, and comparison of these theoretical results with experimental δVpn values. Our results show that enhanced valence p-n interactions are closely correlated with the development of collectivity, shape changes, and the saturation of deformation in nuclei. We note that the difference of the Nilsson quantum numbers of the last filled Nilsson p and n orbitals, has a special relation, 0[110], in which they differ by only a single quantum in the z-direction, for those nuclei where δVpn is largest for each Z in medium mass and heavy nuclei. The synchronised filling of such orbital pairs correlates with the emergence of collectivity.
Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar
2014-05-01
We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), rupture velocity (1). A node based second order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
The Tumor Suppressor Actions of the Vitamin D Receptor in Skin
2013-08-01
S. Yu, et al., Basal cell carcinomas inmiceoverexpressing Gli2 in skin, Nature Genetics 24 (3) (2000) 216–217. 61] M. Nilsson, A.B. Unden, D. Krause ...Meineke V, Gartner BC, Wolfgang T, Holick MF, Reichrath J. Analysis of the vitamin D system in cutaneous malignancies. Recent Results Cancer Res...10700170] 63. Nilsson M, Unden AB, Krause D, Malmqwist U, Raza K, Zaphiropoulos PG, Toftgard R. Induction of basal cell carcinomas and trichoepitheliomas
Costello, Fintan; Watts, Paul
2016-01-01
A standard assumption in much of current psychology is that people do not reason about probability using the rules of probability theory but instead use various heuristics or "rules of thumb," which can produce systematic reasoning biases. In Costello and Watts (2014), we showed that a number of these biases can be explained by a model where people reason according to probability theory but are subject to random noise. More importantly, that model also predicted agreement with probability theory for certain expressions that cancel the effects of random noise: Experimental results strongly confirmed this prediction, showing that probabilistic reasoning is simultaneously systematically biased and "surprisingly rational." In their commentaries on that paper, both Crupi and Tentori (2016) and Nilsson, Juslin, and Winman (2016) point to various experimental results that, they suggest, our model cannot explain. In this reply, we show that our probability theory plus noise model can in fact explain every one of the results identified by these authors. This gives a degree of additional support to the view that people's probability judgments embody the rational rules of probability theory and that biases in those judgments can be explained as simply effects of random noise. (c) 2015 APA, all rights reserved).
Nilsson diagrams for light neutron-rich nuclei with weakly-bound neutrons
NASA Astrophysics Data System (ADS)
Hamamoto, Ikuko
2007-11-01
Using Woods-Saxon potentials and the eigenphase formalism for one-particle resonances, one-particle bound and resonant levels for neutrons as a function of quadrupole deformation are presented, which are supposed to be useful for the interpretation of spectroscopic properties of some light neutron-rich nuclei with weakly bound neutrons. Compared with Nilsson diagrams in textbooks that are constructed using modified oscillator potentials, we point out a systematic change of the shell structure in connection with both weakly bound and resonant one-particle levels related to small orbital angular momenta ℓ. Then, it is seen that weakly bound neutrons in nuclei such as C15-19 and Mg33-37 may prefer being deformed as a result of the Jahn-Teller effect, due to the near degeneracy of the 1d5/2-2s1/2 levels and the 1f7/2-2p3/2 levels in the spherical potential, respectively. Furthermore, the absence of some one-particle resonant levels compared with the Nilsson diagrams in textbooks is illustrated.
1986-06-30
approach to the application of theorem proving to problem solving, Aritificial Intelligence 2 (1Q71), 18Q- 208. 4. Fikes, R., Hart, P. and Nilsson, N...by emphasizing the structure of knowledge. 1.2. Planning Literature The earliest work in planning in Artificial Intelligence grew out of the work on...References 1. Newell, A., Artificial Intelligence and the concept of mind, in Computer models of thought and language, Schank, R. and Colby, K. (editor
Isomer-delayed gamma-ray spectroscopy of neutron-rich 166Tb
Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.; ...
2017-09-13
Here, this short paper presents the identification of a metastable, isomeric-state decay in the neutron-rich odd-odd, prolate-deformed nucleus 166Tb. The nucleus of interest was formed using the in-flight fission of a 345 MeV per nucleon 238U primary beam at the RIBF facility, RIKEN, Japan. Gamma-ray transitions decaying from the observed isomeric states in 166Tb were identified using the EURICA gamma-ray spectrometer, positioned at the final focus of the BigRIPS fragments separator. The current work identifies a single discrete gamma-ray transition of energy 119 keV which de-excites an isomeric state in 166Tb with a measured half-life of 3.5(4) μs. The multipolaritymore » assignment for this transition is an electric dipole and is made on the basis internal conversion and decay lifetime arguments. Possible two quasi-particle Nilsson configurations for the initial and final states which are linked by this transition in 166Tb are made on the basis of comparison with Blocked BCS Nilsson calculations, with the predicted ground state configuration for this nucleus arising from the coupling of the v(1-/2)[521] and π(3+/2) Nilsson orbitals.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.
Here, this short paper presents the identification of a metastable, isomeric-state decay in the neutron-rich odd-odd, prolate-deformed nucleus 166Tb. The nucleus of interest was formed using the in-flight fission of a 345 MeV per nucleon 238U primary beam at the RIBF facility, RIKEN, Japan. Gamma-ray transitions decaying from the observed isomeric states in 166Tb were identified using the EURICA gamma-ray spectrometer, positioned at the final focus of the BigRIPS fragments separator. The current work identifies a single discrete gamma-ray transition of energy 119 keV which de-excites an isomeric state in 166Tb with a measured half-life of 3.5(4) μs. The multipolaritymore » assignment for this transition is an electric dipole and is made on the basis internal conversion and decay lifetime arguments. Possible two quasi-particle Nilsson configurations for the initial and final states which are linked by this transition in 166Tb are made on the basis of comparison with Blocked BCS Nilsson calculations, with the predicted ground state configuration for this nucleus arising from the coupling of the v(1-/2)[521] and π(3+/2) Nilsson orbitals.« less
Isomer-delayed gamma-ray spectroscopy of neutron-rich 166Tb
NASA Astrophysics Data System (ADS)
Gurgi, L. A.; Regan, P. H.; Söderström, P.-A.; Watanabe, H.; Walker, P. M.; Podolyák, Zs.; Nishimura, S.; Berry, T. A.; Doornenbal, P.; Lorusso, G.; Isobe, T.; Baba, H.; Xu, Z. Y.; Sakurai, H.; Sumikama, T.; Catford, W. N.; Bruce, A. M.; Browne, F.; Lane, G. J.; Kondev, F. G.; Odahara, A.; Wu, J.; Liu, H. L.; Xu, F. R.; Korkulu, Z.; Lee, P.; Liu, J. J.; Phong, V. H.; Yagi, A.; Zhang, G. X.; Alharbi, T.; Carroll, R. J.; Chae, K. Y.; Dombradi, Zs.; Estrade, A.; Fukuda, N.; Griffin, C.; Ideguchi, E.; Inabe, N.; Kanaoka, H.; Kojouharov, I.; Kubo, T.; Kubono, S.; Kurz, N.; Kuti, I.; Lalkovski, S.; Lee, E. J.; Lee, C. S.; Lotay, G.; Moon, C. B.; Nishizuka, I.; Nita, C. R.; Patel, Z.; Roberts, O. J.; Schaffner, H.; Shand, C. M.; Suzuki, H.; Takeda, H.; Terashima, S.; Vajta, Zs.; Kanaya, S.; Valiente-Dobòn, J. J.
2017-09-01
This short paper presents the identification of a metastable, isomeric-state decay in the neutron-rich odd-odd, prolate-deformed nucleus 166Tb. The nucleus of interest was formed using the in-flight fission of a 345 MeV per nucleon 238U primary beam at the RIBF facility, RIKEN, Japan. Gamma-ray transitions decaying from the observed isomeric states in 166Tb were identified using the EURICA gamma-ray spectrometer, positioned at the final focus of the BigRIPS fragments separator. The current work identifies a single discrete gamma-ray transition of energy 119 keV which de-excites an isomeric state in 166Tb with a measured half-life of 3.5(4) μs. The multipolarity assignment for this transition is an electric dipole and is made on the basis internal conversion and decay lifetime arguments. Possible two quasi-particle Nilsson configurations for the initial and final states which are linked by this transition in 166Tb are made on the basis of comparison with Blocked BCS Nilsson calculations, with the predicted ground state configuration for this nucleus arising from the coupling of the v(1-/2)?[521] and ? π(3+/2) Nilsson orbitals.
Revisiting the JDL Model for Information Exploitation
2013-07-01
High-Level Information Fusion Management and Systems Design, Artech House, Norwood, MA, 2012. [10] E. Blasch, D. A. Lambert, P. Valin , M. M. Kokar...Fusion – Fusion2012 Panel Discussion,” Int. Conf. on Info Fusion, 2012. [29] E. P. Blasch, P. Valin , A-L. Jousselme, et al., “Top Ten Trends in High...P. Valin , E. Bosse, M. Nilsson, J. Van Laere, et al., “Implication of Culture: User Roles in Information Fusion for Enhanced Situational
Carbon, Claus-Christian
2016-02-01
Nilsson and Axelsson (2015) made an important contribution by linking recent scientific approaches from the field of empirical aesthetics with everyday demands of museum conservators of deciding which items to be preserved or not. The authors made an important effort in identifying the valuable candidates of variables - but focused on visual properties only and on quite high-expertise aspects of aesthetic quality based on very sophisticated evaluations. The present article responds to the target paper by developing the outline of a more holistic approach for future research as a kind of framework that should assist a multi-modal approach, mainly including tactile sense. © The Author(s) 2016.
Genetics Home Reference: ataxia-pancytopenia syndrome
... brain that coordinates movement (the cerebellum ) and blood-forming cells in the bone marrow . The age when ... J, Gorcenco S, Rundberg Nilsson A, Ripperger T, Kokkonen H, Bryder D, Fioretos T, Henter JI, Möttönen M, ...
Microscopic insight into the structure of gallium isotopes
NASA Astrophysics Data System (ADS)
Verma, Preeti; Sharma, Chetan; Singh, Suram; Bharti, Arun; Khosa, S. K.
2012-07-01
Projected Shell Model technique has been applied to odd-A71-81Ga nuclei with the deformed single-particle states generated by the standard Nilsson potential. Various nuclear structure quantities have been calculated with this technique and compared with the available experimental data in the present work. The known experimental data of the yrast bands in these nuclei are persuasively described and the band diagrams obtained for these nuclei show that the yrast bands in these odd-A Ga isotopes don't belong to the single intrinsic state only but also have multi-particle states. The back-bending in moment of inertia and the electric quadrupole transitions are also calculated.
Helium-induced one-neutron transfer to levels in 162Dy
NASA Astrophysics Data System (ADS)
Andersen, E.; Helstrup, H.; Løvhøiden, G.; Thorsteinsen, T. F.; Guttormsen, M.; Messelt, S.; Tveter, T. S.; Hofstee, M. A.; Schippers, J. M.; van der Werf, S. Y.
1992-12-01
Levels in 162Dy have been studied in the 161Dy(α, 3He) and 163Dy( 3He, α) reactions with 50 MeV α- and 3He-beams from the KVI cyclotron in Groningen. The reaction products were analyzed in the QMG/2 magnetic spectrograph and registered in a two-dimensional detector system. The observed levels and cross sections are well described by the Nilsson model with the exception of the three levels at 1578, 1759 and 1990 keV. The present data combined with previous results strongly indicate that these levels are the spin-4, -6, and -8 members of the S-band.
Relative properties of smooth terminating bands
NASA Astrophysics Data System (ADS)
Afanasjev, A. V.; Ragnarsson, I.
1998-01-01
The relative properties of smooth terminating bands observed in the A ∼ 110 mass region are studied within the effective alignment approach. Theoretical values of ietf are calculated using the configuration-dependent shell-correction model with the cranked Nilsson potential. Reasonable agreement with experiment shows that previous interpretations of these bands are consistent with the present study. Contrary to the case of superdeformed bands, the effective alignments of these bands deviate significantly from the pure single-particle alignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nag, Somnath; Singh, A. K.; Hagemann, G. B.
In this paper, high-spin states in 124Xe have been populated using the 80Se( 48Ca, 4n) reaction at a beam energy of 207 MeV and high-multiplicity, γ-ray coincidence events were measured using the Gammasphere spectrometer. Six high-spin rotational bands with moments of inertia similar to those observed in neighboring nuclei have been observed. The experimental results are compared with calculations within the framework of the Cranked Nilsson-Strutinsky model. Finally, it is suggested that the configurations of the bands involve excitations of protons across the Z = 50 shell gap coupled to neutrons within the N = 50 - 82 shell ormore » excited across the N = 82 shell closure.« less
Geomorphic classification of rivers
J. M. Buffington; D. R. Montgomery
2013-01-01
Over the last several decades, environmental legislation and a growing awareness of historical human disturbance to rivers worldwide (Schumm, 1977; Collins et al., 2003; Surian and Rinaldi, 2003; Nilsson et al., 2005; Chin, 2006; Walter and Merritts, 2008) have fostered unprecedented collaboration among scientists, land managers, and stakeholders to better understand,...
Acoustic Sensor Network Design for Position Estimation
2009-05-01
A., Pollock, S., Netter, B., and Low, B. S. 2005. Anisogamy, expenditure of reproductive effort, and the optimality of having two sexes. Operations...Research 53, 3, 560–567. Evans, M., Hastings, N., and Peacock , B. 2000. Statistical distributions. Ed. Wiley & Sons. New York. Feeney, L. and Nilsson, M
Complexity of the Generalized Mover’s Problem.
1985-01-01
problem by workers in the robotics fields and in artificial intellegence , (for example [Nilson, 69], [Paul, 72], (Udupa, 77], [Widdoes, 74], [Lozano-Perez...Nilsson, "A mobile automation: An application of artificial intelligence techniques," Proceedings TJCAI-69, 509-520, 1969. . -7-- -17- C. O’Dunlaing, M
ERIC Educational Resources Information Center
Ng, Kok-Mun; Smith, Shannon D.
2012-01-01
This research partially replicated Nilsson and Anderson's "Professional Psychology: Research and Practice" (2004) study on training and supervising international students. It investigated the relationships among international counseling students' training level, acculturation, supervisory working alliance (SWA), counseling self-efficacy (COSE),…
The EUA Institutional Evaluation Programme: An Account of Institutional Best Practices
ERIC Educational Resources Information Center
Rosa, Maria Joao; Cardoso, Sonia; Dias, Diana; Amaral, Alberto
2011-01-01
When evaluating the EUA Institutional Evaluation Programme (IEP), Nilsson "et al." emphasised the interest in creating a data bank on good practices derived from its reports that would contribute to disseminating examples of effective quality management practices and to supporting mutual learning among universities. In IEP, evaluated…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
... potential to produce allergic contact dermatitis (ACD). NICEATM also requests data generated using in vivo...]m MA, B[ouml]rje A, Luthman, K, Nilsson JLG. 2008. Allergic Contact Dermatitis--Formation... Identification of Contact Allergens: Request for Comments and Data AGENCY: Division of the National Toxicology...
We present results from a monthly SPI and water quality survey of nine stations along a transect in the Pensacola Bay estuary spanning the salinity gradient from Escambia River to the Gulf of Mexico. We evaluated Benthic Habitat Quality (Nilsson and Rosenberg 1997) derived from s...
We present results from a monthly sediment and water quality survey of nine stations along a transect in the Pensacola Bay estuary spanning the salinity gradient from Escambia River to the Gulf of Mexico. We evaluated Benthic Habitat Quality (Nilsson and Rosenberg 1997) derived f...
The Expansion of the Education Sector in Sweden During the 20th Century.
ERIC Educational Resources Information Center
Ohlsson, Rolf
1985-01-01
Three investigations on quantitative changes in higher education in Sweden are described. In Anders Nilsson's dissertation, "Study Financing and Social Recruitment to Higher Education (1920-1976)," attention was focused on changes in college recruitment from 1920 until reforms in 1977; the effect of various college financing conditions…
Isomer spectroscopy of neutron-rich $$^{165,167}$$Tb
Gurgi, L. A.; Regan, P. H.; Soderstrom, P. -A.; ...
2017-01-01
We present information on the excited states in the prolate-deformed, neutron-rich nuclei 165,167Tb 100,102. The nuclei of interest were synthesised following in-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm 9Be target at the Radioactive Ion-Beam Factory (RIBF), RIKEN, Japan. The exotic nuclei were separated and identified event-by-event using the BigRIPS separator, with discrete energy gamma-ray decays from isomeric states with half-lives in the μs regime measured using the EURICA gamma-ray spectrometer. Metastable-state decays are identified in 165Tb and 167Tb and interpreted as arising from hindered E1 decay from the 72 –[523] single quasi-protonmore » Nilsson configuration to rotational states built on the 32 –[411] single quasi-proton ground state. Lastly, these data correspond to the first spectroscopic information in the heaviest, odd-A terbium isotopes reported to date and provide information on proton Nilsson configurations which reside close to the Fermi surface as the 170Dy doubly-midshell nucleus is approached.« less
NASA Astrophysics Data System (ADS)
Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.
2011-12-01
A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.
Observation of high-spin bands with large moments of inertia in Xe 124
Nag, Somnath; Singh, A. K.; Hagemann, G. B.; ...
2016-09-07
In this paper, high-spin states in 124Xe have been populated using the 80Se( 48Ca, 4n) reaction at a beam energy of 207 MeV and high-multiplicity, γ-ray coincidence events were measured using the Gammasphere spectrometer. Six high-spin rotational bands with moments of inertia similar to those observed in neighboring nuclei have been observed. The experimental results are compared with calculations within the framework of the Cranked Nilsson-Strutinsky model. Finally, it is suggested that the configurations of the bands involve excitations of protons across the Z = 50 shell gap coupled to neutrons within the N = 50 - 82 shell ormore » excited across the N = 82 shell closure.« less
Superdeformation in the a Approximately 190 Mass Region and Shape Coexistence in LEAD-194
NASA Astrophysics Data System (ADS)
Brinkman, Matthew James
Near-yrast states in ^{194 }Pb have been identified up to a spin of {~}35hbar following the ^{176}Yb(^ {24}Mg,6n)^{194} Pb^{*} reaction at a beam energy of 134 MeV, measured with the High Energy -Resolution Array located at the Lawrence Berkeley Laboratory 88-Inch Cyclotron facility. Eighteen new transitions were placed. Examples of non-collective prolate and oblate and collective oblate excitations are seen. In addition a rotational band consisting of twelve transitions, with energy spacings characteristic of superdeformed shapes, were also seen. These results have been interpreted using both Nilsson model calculations and previously published potential energy surface calculations. The superdeformed bands in the A ~ 190 mass region are discussed with primary emphasis on ten superdeformed bands in ^{192,193,194 }Hg and ^{192,194,196,198 }Pb discovered or codiscovered by our collaboration. The discussion of superdeformation in these nuclei have been broken into three portions, focusing on the population of, the physics associated with, and the depopulation of these bands, respectively. The population behavior of the superdeformed structures is presented, and discussed with respect to theoretical predictions for nuclei near A ~ 190 expected to support superdeformation. A detailed analysis of the population of the ^{193} Hg^{rm 1a} band is provided, and the results are compared with statistical model calculations predictions. Significant differences were found between the population of the superdeformed bands in the A ~ 150 and 190 mass regions. The systematics of the intraband region are presented. Nilsson model calculations are carried out, with nucleon configurations for the primary superdeformed bands proposed. A discussion of possible mechanisms for reproducing the smooth increase in dynamic moments of inertia observed in all superdeformed bands in this mass region is provided. A number of superdeformed bands in the A ~ 190 mass region have transition energies that are related to those of ^{192}Hg. This behavior is discussed in light of proposed theoretical explanations. The systematic behavior of the depopulation with respect to neutron and proton number is discussed. A comparison of observed depopulation behavior with recently published predictions is provided, showing the predictions in excellent qualitative agreement with the observed depopulation patterns.
Operations Monitoring Assistant System Design
1986-07-01
Logic. Artificial Inteligence 25(1)::75-94. January.18. 41 -Nils J. Nilsson. Problem-Solving Methods In Artificli Intelligence. .klcG raw-Hill B3ook...operations monitoring assistant (OMA) system is designed that combines operations research, artificial intelligence, and human reasoning techniques and...KnowledgeCraft (from Carnegie Group), and 5.1 (from Teknowledze). These tools incorporate the best methods of applied artificial intelligence, and
Simultaneous Planning and Control for Autonomous Ground Vehicles
2009-02-01
these applications is called A * ( A -star), and it was originally developed by Hart, Nilsson, and Raphael [HAR68]. Their research presented the formal...sequence, rather than a dynamic programming approach. A * search is a technique originally developed for Artificial Intelligence 43 applications ... developed at the Center for Intelligent Machines and Robotics, serves as a platform for the implementation and testing discussed. autonomous
Where to cut, where to run : prospects for U.S. South softwood timber supplies and prices
Henry Spelter
1999-01-01
A review of market history shows that southern pine sawtimber stumpage prices have increased by over 150 percent in this decade (Timber Mart South). Concurrently, some (i.e. Cubbage and Abt (1996) Nilsson et al (1999)) have questioned the adequacy of southern timber supplies to meet projected demands, which are projected to increase by...
Collective and non-collective structures in nuclei of mass region A ≈ 125
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, A. K.; Collaboration: INGA Collaboration; Gammasphere Collaboration
Generation of angular momentum in nuclei is a key question in nuclear structure studies. In single particle model, it is due to alignment of spin of individual nucleon available in the valence space, whereas coherent motion of nucleons are assumed in the collective model. The nuclei near the closed shell at Z = 50 with mass number A ≈ 120-125 represent ideal cases to explore the interplay between these competing mechanisms and the transition from non-collective to collective behavior or vice versa. Recent spectroscopic studies of nuclei in this region reveal several non-collective maximally aligned states representing the first kindmore » of excitation mechanism, where 8-12 particles above the {sup 114}Sn align their spins to generate these states. Deformed rotational bands feeding the non-collective states in the spin range I=20-25 and excitation energies around 10 MeV have also been observed. Structure of the collective and non-collective states are discussed in the framework of Cranked-Nilsson-Strutinsky model.« less
Compiling Planning into Quantum Optimization Problems: A Comparative Study
2015-06-07
and Sipser, M. 2000. Quantum computation by adiabatic evolution. arXiv:quant- ph/0001106. Fikes, R. E., and Nilsson, N. J. 1972. STRIPS: A new...become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer sci- ence audience not versed...in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum
Laser System Usage in the Marine Environment: Applications and Environmental Considerations
2010-12-01
publications/pubs/index.html. Released by Bart Chadwick, Head Environmental Sciences Branch Under authority of Martin Machniak, Head Research...Nilsson and Lindstrom , 1983; Shelton, Gaten, and Chapman, 1985). Data on the effects of laser energy to corals also are lacking, although it can be...L. and M. Lindstrom . 1983. “Retinal Damage and Sensitivity Loss of a Light- Sensitive Crustacean Compound Eye (Cirolana borealis): Electron
Shape evolution with angular momentum in Lu isotopes
NASA Astrophysics Data System (ADS)
Kardan, Azam; Sayyah, Sepideh
2016-06-01
The nuclear potential energies of Lu isotopes with neutron number N = 90 - 98 up to high spins are computed within the framework of the unpaired cranked Nilsson-Strutinsky method. The potential and the macroscopic Lublin-Strasbourg drop (LSD) energy-surface diagrams are analyzed in terms of quadrupole deformation and triaxiality parameter. The shape evolution of these isotopes with respect to angular momentum, as well as the neutron number is studied.
1989-10-01
Encontro Portugues de Inteligencia Artificial (EPIA), Oporto, Portugal, September 1985. [15] N. J. Nilsson. Principles Of Artificial Intelligence. Tioga...FI1 F COPY () RADC-TR-89-259, Vol II (of twelve) Interim Report October 1969 AD-A218 154 NORTHEAST ARTIFICIAL INTELLIGENCE CONSORTIUM ANNUAL...7a. NAME OF MONITORING ORGANIZATION Northeast Artificial Of p0ilcabe) Intelligence Consortium (NAIC) Rome_____ Air___ Development____Center
Advanced Methods of Approximate Reasoning
1990-11-30
about Knowledge and Action. Technical Note 191, Menlo Park, California: SRI International. 1980 . 20 [26] N.J. Nilsson. Probabilistic logic. Artificial...reasoning. Artificial Intelligence, 13:81-132, 1980 . S[30 R. Reiter. On close world data bases. In H. Gallaire and J. Minker, editors, Logic and Data...specially grateful to Dr. Abraham Waksman of the Air Force Office of Scientific Research and Dr. David Hislop of the Army Research Office for their
Phase transition at N = 92 in 158Dy
NASA Astrophysics Data System (ADS)
Gupta, J. B.
2016-09-01
Beyond the shape phase transition from the spherical vibrator to the deformed rotor regime at N = 90, the interplay of β- and γ-degrees of freedom becomes important, which affects the relative positions of the Kπ = 0+β- and Kπ = 2+γ-bands. In the microscopic approach of the dynamic pairing plus quadrupole model, a correlation of the strength of the quadrupole force and the formation of the β- and γ-bands in 158Dy is described. The role of the potential energy surface is illustrated. The E2 transition rates in the lower three K-bands and the multi-phonon bands with Kπ = 0+, 2+ and 4+ are well reproduced. The absolute B(E2, 2i+ = 0 2+) (i = 2, 3) serves as a good measure of the quadrupole strength. The role of the single particle Nilsson orbits is also described.
Absence of paired crossing in the positive parity bands of 124Cs
NASA Astrophysics Data System (ADS)
Singh, A. K.; Basu, A.; Nag, Somnath; Hübel, H.; Domscheit, J.; Ragnarsson, I.; Al-Khatib, A.; Hagemann, G. B.; Herskind, B.; Elema, D. R.; Wilson, J. N.; Clark, R. M.; Cromaz, M.; Fallon, P.; Görgen, A.; Lee, I.-Y.; Ward, D.; Ma, W. C.
2018-02-01
High-spin states in 124Cs were populated in the 64Ni(64Ni,p 3 n ) reaction and the Gammasphere detector array was used to measure γ -ray coincidences. Both positive- and negative-parity bands, including bands with chiral configurations, have been extended to higher spin, where a shape change has been observed. The configurations of the bands before and after the alignment are discussed within the framework of the cranked Nilsson-Strutinsky model. The calculations suggest that the nucleus undergoes a shape transition from triaxial to prolate around spin I ≃22 of the positive-parity states. The alignment gain of 8 ℏ , observed in the positive-parity bands, is due to partial alignment of several valence nucleons. This indicates the absence of band crossing due to paired nucleons in the bands.
CryoSat-2 Processing and Model Interpretation of Greenland Ice Sheet Volume Changes
NASA Astrophysics Data System (ADS)
Nilsson, J.; Gardner, A. S.; Sandberg Sorensen, L.
2015-12-01
CryoSat-2 was launched in late 2010 tasked with monitoring the changes of the Earth's land and sea ice. It carries a novel radar altimeter allowing the satellite to monitor changes in highly complex terrain, such as smaller ice caps, glaciers and the marginal areas of the ice sheets. Here we present on the development and validation of an independent elevation retrieval processing chain and respective elevation changes based on ESA's L1B data. Overall we find large improvement in both accuracy and precision over Greenland relative to ESA's L2 product when comparing against both airborne data and crossover analysis. The seasonal component and spatial sampling of the surface elevation changes where also compared against ICESat derived changes from 2003-2009. The comparison showed good agreement between the to product on a local scale. However, a global sampling bias was detected in the seasonal signal due to the clustering of CryoSat-2 data in higher elevation areas. The retrieval processing chain presented here does not correct for changes in surface scattering conditions and appears to be insensitive to the 2012 melt event (Nilsson et al., 2015). This in contrast to the elevation changes derived from ESA's L2 elevation product, which where found to be sensitive to the effects of the melt event. The positive elevation bias created by the event introduced a discrepancy between the two products with a magnitude of roughly 90 km3/year. This difference can directly be attributed to the differences in retracking procedure pointing to the importance of the retracking of the radar waveforms for altimetric volume change studies. Greenland 2012 melt event effects on CryoSat-2 radar altimetry./ Nilsson, Johan; Vallelonga, Paul Travis; Simonsen, Sebastian Bjerregaard; Sørensen, Louise Sandberg; Forsberg, René; Dahl-Jensen, Dorthe; Hirabayashi, Motohiro; Goto-Azuma, Kumiko; Hvidberg, Christine S.; Kjær, Helle A.; Satow, Kazuhide.
A microscopic explanation of the isotonic multiplet at N=90
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, J. B., E-mail: jbgupta2011@gmail.com
2014-08-14
The shape phase transition from spherical to soft deformed at N=88-90 was observed long ago. After the prediction of the X(5) symmetry, for which analytical solution of the nuclear Hamiltonian is given [1], good examples of X(5) nuclei were identified in the N=90 isotones of Nd, Sm, Gd and Dy, in the recent works. The N=90 isotones have almost the similar deformed level structure, forming the isotonic multiplet in Z=50-66, N=82-104 quadrant. This is explained microscopically in terms of the Nilsson level diagram. Using the Dynamic Pairing-Plus-Quadrupole model of Kumar-Baranger, the quadrupole deformation and the occupancies of the neutrons andmore » protons in these nuclei have been calculated, which support the formation of N=88, 90 isotonic multiplets. The existence of F-spin multiplets in Z=66-82, N=82-104 quadrant, identified in earlier works on the Interacting Boson Model, is also explained in our study.« less
Emerging Concepts for Integrating Human and Environmental Water Needs in River Basin Management
2005-09-01
SCOWAR concluded (Naiman et al . 2002 ): “the major challenge to freshwater management is to place water resource development within the context of...rate. It has been suggested (Naiman et al . 2002 ) that there are three overarching ecological principles for water resources management. These are...been expanded into another six key principles by Bunn and Arthington ( 2002 ), Nilsson and Svedmark ( 2002 ), and Pinay et al . ( 2002 ): a. Flow is a major
2011-03-01
zirconium. For the standard, Brayton open-cycle, gas turbine, typical of modern aircraft power plants, the thermodynamic efficiency is heavily driven by...linearize the radiation emission term around Ti,j0 from a previous the previous step, Taylor expand, and rearrange Eq. (23) in terms of Ti,j to apply as...York: Wiley. 2004. Nilsson, J. W., and Riedel, S. A. Electric Circuits. Prentice Hall. 2007. 512 Noda, N. Thermal Stresses. Taylor & Francis. 2002
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.
In-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm thick 9Be target has been used to produce and study the decays of a range of neutron-rich nuclei centred around the doubly mid-shell nucleus 170Dy at the RIBF Facility, RIKEN, Japan. The produced secondary fragments of interest were identified event-by-event using the BigRIPS separator. The fragments were implanted into the WAS3ABI position sensitive silicon active stopper which allowed pixelated correlations between implants and their subsequent β-decay. Discrete γ-ray transitions emitted following decays from either metastable states or excited states populated following beta decay were identifiedmore » using the 84 coaxial high-purity germanium (HPGe) detectors of the EURICA spectrometer, which was complemented by 18 additional cerium-doped lanthanum bromide (LaBr3) fast-timing scintillation detectors from the FATIMA collaboration. This paper presents the internal decay of a metastable isomeric excited state in the odd-odd nucleus 168Tb, which corresponds to a single proton-neutron hole configuration in the valence maximum nucleus 170Dy. These data represent the first information on excited states in this nucleus, which is the most neutron-rich odd-odd isotope of terbium (Z=65) studied to date. Nilsson configurations associated with an axially symmetric, prolate-deformed nucleus are proposed for the 168Tb ground state the observed isomeric state by comparison with Blocked BCS-Nilsson calculations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.
In-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm thick 9Be target has been used to produce and study the decays of a range of neutron-rich nuclei centred around the doubly mid-shell nucleus 170Dy at the RIBF Facility, RIKEN, Japan. The produced secondary fragments of interest were identified event-by-event using the BigRIPS separator. The fragments were implanted into the WAS3ABI position sensitive silicon active stopper which allowed pixelated correlations between implants and their subsequent β-decay. Discrete γ-ray transitions emitted following decays from either metastable states or excited states populated following beta decay were identifiedmore » using the 84 coaxial high-purity germanium (HPGe) detectors of the EURICA spectrometer, which was complemented by 18 additional cerium-doped lanthanum bromide (LaBr 3) fast-timing scintillation detectors from the FATIMA collaboration. This paper presents the internal decay of a metastable isomeric excited state in the odd-odd nucleus 168Tb, which corresponds to a single proton-neutron hole configuration in the valence maximum nucleus 170Dy. These data represent the first information on excited states in this nucleus, which is the most neutron-rich odd-odd isotope of terbium (Z = 65) studied to date. Here, Nilsson configurations associated with an axially symmetric, prolate-deformed nucleus are proposed for the 168Tb ground state the observed isomeric state by comparison with Blocked BCS-Nilsson calculations.« less
Screening for and Inheritance of Resistance to Barley Leaf Stripe (Drechslera graminea),
1987-12-01
JORGENSEN, J.H. (1986,. Field assessment of partial resistance to powdery mildew in spring barley . Euphytica 35, 233-243. KRISTIANSSON, B. and NILSSON, B...the Laevigatum powdery mildew resistance via ’Vada’ and ’Minerva’. This suggests this resistance to occur in many varieties descending from ’Vada...kept free from powdery mildew by spraying with Bayleton (25% triadimefon WP) both in the greenhouse and in the field. This fungicide does not affect the
Defining ecosystem flow requirements for the Bill Williams River, Arizona
Shafroth, Patrick B.; Beauchamp, Vanessa B.
2006-01-01
Alteration of natural river flows resulting from the construction and operation of dams can result in substantial changes to downstream aquatic and bottomland ecosystems and undermine the long-term health of native species and communities (for general review, cf. Ward and Stanford, 1995; Baron and others, 2002; Nilsson and Svedmark, 2002). Increasingly, land and water managers are seeking ways to manage reservoir releases to produce flow regimes that simultaneously meet human needs and maintain the health and sustainability of downstream biotaa.
DC Characteristics of InAs/AlSb HEMTs at Cryogenic Temperatures
2009-05-01
Molecular Beam Epitaxy - MBE XIV, April 2007, Volumes 301- 302, Pages 1025-1029 Fig. 5: SEM image showing the 2x50μm InAs/AlSb HEMT . 325 ...started with a heterostructure grown by molecular beam epitaxy on a semi- insulating InP substrate. The heterostructure is shown in Fig. 1. Mesa isolation...DC characteristics of InAs/AlSb HEMTs at cryogenic temperatures G. Moschetti, P-Å Nilsson, N. Wadefalk, M. Malmkvist, E. Lefebvre, J. Grahn
VizieR Online Data Catalog: SDSS optically selected BL Lac candidates (Kuegler+, 2014)
NASA Astrophysics Data System (ADS)
Kuegler, S. D.; Nilsson, K.; Heidt, J.; Esser, J.; Schultz, T.
2014-11-01
The data that we use for variability and host galaxy analysis were presented in Paper I (Heidt & Nilsson, 2011A&A...529A.162H, Cat. J/A+A/529/A162). Alltogether, 123 targets were observed at the ESO New Technology Telescope (NTT) on La Silla, Chile during Oct. 2-6, 2008 and Mar. 28-Apr. 1, 2009. The observations were made with the EFOSC2 instrument through a Gunn-r filter (#786). (2 data files).
Isomer spectroscopy of neutron-rich 168Tb 103
Gurgi, L. A.; Regan, P. H.; Söderström, P. -A.; ...
2016-12-29
In-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm thick 9Be target has been used to produce and study the decays of a range of neutron-rich nuclei centred around the doubly mid-shell nucleus 170Dy at the RIBF Facility, RIKEN, Japan. The produced secondary fragments of interest were identified event-by-event using the BigRIPS separator. The fragments were implanted into the WAS3ABI position sensitive silicon active stopper which allowed pixelated correlations between implants and their subsequent β-decay. Discrete γ-ray transitions emitted following decays from either metastable states or excited states populated following beta decay were identifiedmore » using the 84 coaxial high-purity germanium (HPGe) detectors of the EURICA spectrometer, which was complemented by 18 additional cerium-doped lanthanum bromide (LaBr 3) fast-timing scintillation detectors from the FATIMA collaboration. This paper presents the internal decay of a metastable isomeric excited state in the odd-odd nucleus 168Tb, which corresponds to a single proton-neutron hole configuration in the valence maximum nucleus 170Dy. These data represent the first information on excited states in this nucleus, which is the most neutron-rich odd-odd isotope of terbium (Z = 65) studied to date. Here, Nilsson configurations associated with an axially symmetric, prolate-deformed nucleus are proposed for the 168Tb ground state the observed isomeric state by comparison with Blocked BCS-Nilsson calculations.« less
Isomer spectroscopy of neutron-rich 168Tb103
NASA Astrophysics Data System (ADS)
Gurgi, L. A.; Regan, P. H.; Söderström, P.-A.; Watanabe, H.; Walker, P. M.; Podolyák, Zs.; Nishimura, S.; Berry, T. A.; Doornenbal, P.; Lorusso, G.; Isobe, T.; Baba, H.; Xu, Z. Y.; Sakurai, H.; Sumikama, T.; Catford, W. N.; Bruce, A. M.; Browne, F.; Lane, G. J.; Kondev, F. G.; Odahara, A.; Wu, J.; Liu, H. L.; Xu, F. R.; Korkulu, Z.; Lee, P.; Liu, J. J.; Phong, V. H.; Yag, A.; Zhang, G. X.; Alharbi, T.; Carroll, R. J.; Chae, K. Y.; Dombradi, Zs.; Estrade, A.; Fukuda, N.; Griffin, C.; Ideguchi, E.; Inabe, N.; Kanaoka, H.; Kojouharov, I.; Kubo, T.; Kubono, S.; Kurz, N.; Kuti, I.; Lalkovski, S.; Lee, E. J.; Lee, C. S.; Lotay, G.; Moon, C.-B.; Nishizuka, I.; Nita, C. R.; Patel, Z.; Roberts, O. J.; Schaffner, H.; Shand, C. M.; Suzuki, H.; Takeda, H.; Terashima, S.; Vajta, Zs.; Yoshida, S.; Valiente-Dòbon, J. J.
2017-11-01
In-flight fission of a 345 MeV per nucleon 238U primary beam on a 2 mm thick 9Be target has been used to produce and study the decays of a range of neutron-rich nuclei centred around the doubly mid-shell nucleus 170Dy at the RIBF Facility, RIKEN, Japan. The produced secondary fragments of interest were identified event-by-event using the BigRIPS separator. The fragments were implanted into the WAS3ABI position sensitive silicon active stopper which allowed pixelated correlations between implants and their subsequent β-decay. Discrete γ-ray transitions emitted following decays from either metastable states or excited states populated following beta decay were identified using the 84 coaxial high-purity germanium (HPGe) detectors of the EURICA spectrometer, which was complemented by 18 additional cerium-doped lanthanum bromide (LaBr3) fast-timing scintillation detectors from the FATIMA collaboration. This paper presents the internal decay of a metastable isomeric excited state in the odd-odd nucleus 168Tb, which corresponds to a single proton-neutron hole configuration in the valence maximum nucleus 170Dy. These data represent the first information on excited states in this nucleus, which is the most neutron-rich odd-odd isotope of terbium (Z=65) studied to date. Nilsson configurations associated with an axially symmetric, prolate-deformed nucleus are proposed for the 168Tb ground state the observed isomeric state by comparison with Blocked BCS-Nilsson calculations.
Rönnberg, Jerker; Danielsson, Henrik; Rudner, Mary; Arlinger, Stig; Sternäng, Ola; Wahlin, Ake; Nilsson, Lars-Göran
2011-04-01
To test the relationship between degree of hearing loss and different memory systems in hearing aid users. Structural equation modeling (SEM) was used to study the relationship between auditory and visual acuity and different cognitive and memory functions in an age-hetereogenous subsample of 160 hearing aid users without dementia, drawn from the Swedish prospective cohort aging study known as Betula (L.-G. Nilsson et al., 1997). Hearing loss was selectively and negatively related to episodic and semantic long-term memory (LTM) but not short-term memory (STM) performance. This held true for both ears, even when age was accounted for. Visual acuity alone, or in combination with auditory acuity, did not contribute to any acceptable SEM solution. The overall relationships between hearing loss and memory systems were predicted by the ease of language understanding model (J. Rönnberg, 2003), but the exact mechanisms of episodic memory decline in hearing aid users (i.e., mismatch/disuse, attentional resources, or information degradation) remain open for further experiments. The hearing aid industry should strive to design signal processing algorithms that are cognition friendly.
Nilsson, Håkan; Juslin, Peter; Winman, Anders
2016-01-01
Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014). (c) 2015 APA, all rights reserved).
AFOSR Indo-UK -US Joint Physics Initiative for Study of Angular Optical Mode Fiber Amplification
2017-02-20
AFRL -AFOSR-UK-TR-2017-0011 AFOSR Indo-UK -US Joint Physics Initiative for study of angular optical mode fiber amplification Johan Nilsson UNIVERSITY...ES) EOARD Unit 4515 APO AE 09421-4515 10. SPONSOR/MONITOR’S ACRONYM(S) AFRL /AFOSR IOE 11. SPONSOR/MONITOR’S REPORT NUMBER(S) AFRL -AFOSR-UK-TR-2017-0011...this travel, he had the opportunity to visit the Kirtland Air Force Base and interact with Dr Leanne Henry as well as Dr Iyad Dajani to discuss
Decay properties of Bk24397 and Bk24497
NASA Astrophysics Data System (ADS)
Ahmad, I.; Kondev, F. G.; Greene, J. P.; Zhu, S.
2018-01-01
Electron capture decays of 243Bk and 244Bk have been studied by measuring the γ -ray spectra of mass-separated sources and level structures of 243Cm and 244Cm have been deduced. In 243Cm, the electron capture population to the ground state, 1 /2+[631 ] , and 1 /2+[620 ] Nilsson states have been observed. The octupole Kπ=2- band was identified in 244Cm at 933.6 keV. In addition, spins and parities were deduced for several other states and two-quasiparticle configurations have been tentatively assigned to them.
Chen, Shu-Ching; Chen, Hsiu-Fang; Peng, Hsi-Ling; Lee, Li-Yun; Chiang, Ting-Yu; Chiu, Hui-Chuan
2017-04-01
The purposes of this study were to evaluate the psychometric properties, reliability, and validity of the Chinese-version Glover-Nilsson Smoking Behavioral Questionnaire (GN-SBQ-C) and assess the behavioral nicotine dependence among community-dwelling adult smokers in Taiwan. The methods used were survey design, administration, and validation. A total of 202 adult smokers completed a survey to assess behavioral dependence, nicotine dependence, depression, social support, and demographic and smoking characteristics. Data analysis included descriptive statistics, internal consistency reliability, t test, exploratory factor analysis, independent t test, and Pearson product moment correlation. The results showed that (1) the GN-SBQ-C has good internal consistency reliability and stability (2-week test-retest reliability); (2) the extracted one factor explained 41.80 % of the variance, indicating construct validity; (3) the scale has acceptable concurrent validity, with significant positive correlation between the GN-SBQ-C and nicotine dependence, depression, and time smoking and negative correlation between the GN-SBQ-C and age and exercise habit; and (4) the instrument has discriminant validity, supported by significant differences between those with high and low-to-moderate nicotine dependence, smokers greater than 43 years old and those 43 years old and younger, and those who smoked 10 years or less and those smoking more than 10 years. The 11-item GN-SBQ-C has satisfactory psychometric properties when applied in a sample of Taiwanese adult smokers. The scale is feasible and valid to use to assess smoking behavioral dependence.
Skyrme RPA description of γ-vibrational states in rare-earth nuclei
NASA Astrophysics Data System (ADS)
Nesterenko, V. O.; Kartavenko, V. G.; Kleinig, W.; Kvasil, J.; Repko, A.; Jolos, R. V.; Reinhard, P.-G.
2016-01-01
The lowest γ-vibrational states with Kπ = 2+γ in well-deformed Dy, Er and Yb isotopes are investigated within the self-consistent separable quasiparticle random-phase-approximation (QRPA) approach based on the Skyrme functional. The energies Eγ and reduced transition probabilities B(E2)γ of the states are calculated with the Skyrme force SV-mas10. We demonstrate the strong effect of the pairing blocking on the energies of γ-vibrational states. It is also shown that collectivity of γ-vibrational states is strictly determined by keeping the Nilsson selection rules in the corresponding lowest 2qp configurations.
1986-03-21
i t a t i v e frameworks (e.g., Doyle, Toulmin , P . Cohen), and e f f o r t s t o syn thes i ze l o g i c and p r o b a b i l i t y (Nilsson...logic allows for provisional acceptance of uncer- tain premises, which may later be retracted when they lead to contradictory conclusions. Toulmin (1958...A1 researchers] have accepted without hesitation as impeccable." * The basic framework of an argument, according to Toulmin , is as follows ( Toulmin
Decay properties of Bk 97 243 and Bk 97 244
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, I.; Kondev, F. G.; Greene, J. P.
2018-01-01
Electron capture decays of Bk-243 and Bk-244 have been studied by measuring the gamma-ray spectra of mass-separated sources and level structures of Cm-243 and Cm-244 have been deduced. In Cm-243, the electron capture population to the ground state, 1/2(+)[631], and 1/2(+)[620] Nilsson states have been observed. The octupole K-pi = 2(-) band was identified in Cm-244 at 933.6 keV. In addition, spins and parities were deduced for several other states and two-quasiparticle configurations have been tentatively assigned to them
NASA Astrophysics Data System (ADS)
Kruk, D.; Kowalewski, J.; Tipikin, D. S.; Freed, J. H.; Mościcki, M.; Mielczarek, A.; Port, M.
2011-01-01
The "Swedish slow motion theory" [Nilsson and Kowalewski, J. Magn. Reson. 146, 345 (2000)] applied so far to Nuclear Magnetic Relaxation Dispersion (NMRD) profiles for solutions of transition metal ion complexes has been extended to ESR spectral analysis, including in addition g-tensor anisotropy effects. The extended theory has been applied to interpret in a consistent way (within one set of parameters) NMRD profiles and ESR spectra at 95 and 237 GHz for two Gd(III) complexes denoted as P760 and P792 (hydrophilic derivatives of DOTA-Gd, with molecular masses of 5.6 and 6.5 kDa, respectively). The goal is to verify the applicability of the commonly used pseudorotational model of the transient zero field splitting (ZFS). According to this model the transient ZFS is described by a tensor of a constant amplitude, defined in its own principal axes system, which changes its orientation with respect to the laboratory frame according to the isotropic diffusion equation with a characteristic time constant (correlation time) reflecting the time scale of the distortional motion. This unified interpretation of the ESR and NMRD leads to reasonable agreement with the experimental data, indicating that the pseudorotational model indeed captures the essential features of the electron spin dynamics.
CMCSN: Structure and dynamics of water and aqueous solutions in materials science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Car, Roberto; Galli, Giulia; Rehr, John J.
This award has contributed to build a network of scientists interested in the structure and dynamics of water. Such network extends well beyond the PI and the co-PIs and includes both theoreticians and experimentalists. Scientific interactions within this community have been fostered by three workshops supported by the grant. The first workshop was held at Princeton University on December 6-8, 2010. The second workshop was held at the Talaris Conference Center in Seattle on February 10-12, 2012. The third workshop was held at UC Davis on June 19-22, 2013. Each workshop had 40-50 participants and about 20 speakers. The workshopsmore » have been very successful and stimulated ongoing discussions within the water community. This debate is lasting beyond the time frame set by the grant. The following events are just a few examples: (i) the month long activity on "Water: the most anomalous liquid" organized at NORDITA (Stockholm) in October- November 2014 by A. Nilsson and L. Petterson who participated in all the three CMCSN sponsored workshops; (ii) the workshop on "ice nucleation" organized by R. Car, P. Debenedetti and F. Stillinger at the Princeton Center for Theoretical Science in April 23- 24 2015; (iii) the 10 days workshop on water organized by R. Car and F. Mallamace at the E. Maiorana Centre in Erice (Sicily) in July 2016, an activity that will morph into a regular summer school of the E. Maiorana Centre in the years to come under the directorship of R. Car, F. Mallamace (U. Messina), A. Nilsson (U. Stockholm) and L. Xu (Beijing U.). All these activities were stimulated by the scientific discussions within the network initiated by this CMCSN grant.« less
Gogniat, Marissa Ann; Robinson, Talia Loren; Mewborn, Catherine Mattocks; Jean, Kharine Renee; Miller, L Stephen
2018-04-22
Obesity is a growing concern worldwide because of its adverse health effects, including its negative impact on cognitive functioning. This concern is especially relevant for older adults, who are already likely to experience some cognitive decline and loss of brain volume due to aging, (Gea et al., 2002). However, there is some evidence that higher body mass index (BMI) may actually be protective in later life (Hughes et al., 2009; Luchsinger et al., 2007; Nilsson and Nilsson, 2009; Sturman et al., 2008). Therefore, the purpose of the current study was to assess the relationship between BMI and neuropsychological functioning in older adults, and concurrently the relationship between BMI and brain volume. Older adults (N = 88) reported height and weight to determine BMI (M = 26.5) based on Centers for Disease Control and Prevention (CDC) guidelines. Cognitive function was assessed with the Repeatable Battery for Assessment of Neuropsychological Status (RBANS). Brain volume measurements were evaluated via structural MRI. Results indicated no association between BMI and neuropsychological functioning. There was a significant association between BMI and total grey matter volume while controlling for age and years of education (β = 0.208, p = .026, ΔR 2 = 0.043), indicating that as BMI increased, brain volume in these areas modestly increased. However, these results did not survive multiple comparison corrections and were further attenuated to near significance when sex was explicitly added as an additional covariate. Nevertheless, while replication is clearly needed, these results suggest that moderately greater BMI in later life may modestly attenuate concomitant grey matter volume decline. Copyright © 2018 Elsevier B.V. All rights reserved.
Potential responses of riparian vegetation to dam removal
Shafroth, P.B.; Friedman, J.M.; Auble, G.T.; Scott, M.L.; Braatne, J.H.
2002-01-01
Throughout the world, riparian habitats have been dramatically modified from their natural condition. Dams are one of the principal causes of these changes, because of their alteration of water and sediment regimes (Nilsson and Berggren 2000). Because of the array of ecological goods and services provided by natural riparian ecosystems (Naiman and Decamps 1997), their conservation and restoration have become the focus of many land and water managers. Efforts to restore riparian habitats and other riverine ecosystems have included the management of flow releases downstream of dams to more closely mimic natural flows (Poff et al. 1997), but dam removal has received little attention as a possible approach to riparian restoration.
Triaxiality and Exotic Rotations at High Spins in 134Ce
Petrache, C. M.; Guo, S.; Ayangeakaa, A. D.; ...
2016-06-06
High-spin states in Ce-134 have been investigated using the Cd-116(Ne-22,4n) reaction and the Gammasphere array. The level scheme has been extended to an excitation energy of similar to 30 MeV and spin similar to 54 (h) over bar. Two new dipole bands and four new sequences of quadrupole transitions were identified. Several new transitions have been added to a number of known bands. One of the strongly populated dipole bands was revised and placed differently in the level scheme, resolving a discrepancy between experiment and model calculations reported previously. Configurations are assigned to the observed bands based on cranked Nilsson-Strutinskymore » calculations. A coherent understanding of the various excitations, both at low and high spins, is thus obtained, supporting an interpretation in terms of coexistence of stable triaxial, highly deformed, and superdeformed shapes up to very high spins. Rotations around different axes of the triaxial nucleus, and sudden changes of the rotation axis in specific configurations, are identified, further elucidating the nature of high-spin collective excitations in the A = 130 mass region.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrache, C. M.; Chen, Q. B.; Guo, S.
The structure of 133La has been investigated using the 116Cd( 22Ne,4pn) reaction and the Gammasphere array. Three new bands of quadrupole transitions and one band of dipole transitions are identified and the previously reported level scheme is revised and extended to higher spins. The observed structures are discussed using the cranked Nilsson-Strutinsky formalism, covariant density functional theory, and the particle-rotor model. Triaxial configurations are assigned to all observed bands. For the high-spin bands it is found that rotations around different axes can occur, depending on the configuration. The orientation of the angular momenta of the core and of themore » active particles is investigated, suggesting chiral rotation for two nearly degenerate dipole bands and magnetic rotation for one dipole band. As a result, it is shown that the h 11/2 neutron holes present in the configuration of the nearly degenerate dipole bands have significant angular momentum components not only along the long axis but also along the short axis, contributing to the balance of the angular momentum components along the short and long axes and thus giving rise to a chiral geometry.« less
Seven-quasiparticle bands in Ce139
NASA Astrophysics Data System (ADS)
Chanda, Somen; Bhattacharjee, Tumpa; Bhattacharyya, Sarmishtha; Mukherjee, Anjali; Basu, Swapan Kumar; Ragnarsson, I.; Bhowmik, R. K.; Muralithar, S.; Singh, R. P.; Ghugre, S. S.; Pramanik, U. Datta
2009-05-01
The high spin states in the Ce139 nucleus have been studied by in-beam γ-spectroscopic techniques using the reaction Te130(C12,3n)Ce139 at Ebeam=65 MeV. A gamma detector array, consisting of five Compton-suppressed Clover detectors was used for coincidence measurements. 15 new levels have been proposed and 28 new γ transitions have been assigned to Ce139 on the basis of γγ coincidence data. The level scheme of Ce139 has been extended above the known 70 ns (19)/(2)- isomer up to ~6.1 MeV in excitation energy and (35)/(2)ℏ in spin. The spin-parity assignments for most of the newly proposed levels have been made using the deduced Directional Correlation from Oriented states of nuclei (DCO ratio) and the Polarization Directional Correlation from Oriented states (PDCO ratio) for the de-exciting transitions. The observed level structure has been compared with a large basis shell model calculation and also with the predictions from cranked Nilsson-Strutinsky (CNS) calculations. A general consistency has been observed between these two different theoretical approaches.
Evidence of nontermination of collective rotation near the maximum angular momentum in Rb75
NASA Astrophysics Data System (ADS)
Davies, P. J.; Afanasjev, A. V.; Wadsworth, R.; Andreoiu, C.; Austin, R. A. E.; Carpenter, M. P.; Dashdorj, D.; Finlay, P.; Freeman, S. J.; Garrett, P. E.; Görgen, A.; Greene, J.; Grinyer, G. F.; Hyland, B.; Jenkins, D. G.; Johnston-Theasby, F. L.; Joshi, P.; Macchiavelli, A. O.; Moore, F.; Mukherjee, G.; Phillips, A. A.; Reviol, W.; Sarantites, D.; Schumaker, M. A.; Seweryniak, D.; Smith, M. B.; Svensson, C. E.; Valiente-Dobon, J. J.; Ward, D.
2010-12-01
Two of the four known rotational bands in Rb75 were studied via the Ca40(Ca40,αp)Rb75 reaction at a beam energy of 165 MeV. Transitions were observed up to the maximum spin Imax of the assigned configuration in one case and one-transition short of Imax in the other. Lifetimes were determined using the residual Doppler shift attenuation method. The deduced transition quadrupole moments show a small decrease with increasing spin, but remain large at the highest spins. The results obtained are in good agreement with cranked Nilsson-Strutinsky calculations, which indicate that these rotational bands do not terminate, but remain collective at Imax.
N=151Pu,Cm and Cf nuclei under rotational stress: Role of higher-order deformations
Hota, S. S.; Chowdhury, P.; Khoo, T. L.; ...
2014-10-18
The fast-rotating N=151 isotones 245Pu, 247Cm and 249Cf have been studied through inelastic excitation and transfer reactions with radioactive targets. While all have a ground-state band built on a νj 15/2[734]9/2 - Nilsson configuration, new excited bands have also been observed in each isotone. These odd-N excited bands allow a comparison of the alignment behavior for two different configurations, where the νj 15/2 alignment is either blocked or allowed. The effect of higher order deformations is explored through cranking calculations, which help clarify the elusive nature of νj 15/2 alignments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baguet, A.; Pope, Christopher N.; Samtleben, H.
We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NSNS sector of supergravity (and more general the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G×G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk–Schwarz reduction ansatz in double field theory which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S3×S3 and on similar product spaces. The construction ismore » another example of globally geometric non-toroidal compactifications inducing non-geometric fluxes.« less
Zhigalev, O N
2010-01-01
The genetic structure of populations of four helminth species from moor frog Rana arvalis, in comparison with the population-genetic structure of the host, has been studied with the gel-electrophoresis method. As compared with the host, parasites are characterized by more distinct deviation from the balance of genotypic frequencies and higher level of interpopulation genetic differences. The genetic variability indices in the three of four frog helminthes examined are lower than those in the host. Moreover, these indices are lower than the average indices typical of free-living invertebrates; this fact contradicts the opinion on polyhostality of these helminthes and their wide distribution.
New method to assess the water vapour permeance of wound coverings.
Jonkman, M F; Molenaar, I; Nieuwenhuis, P; Bruin, P; Pennings, A J
1988-05-01
A new method for assessing the permeability to water vapour of wound coverings is presented, using the evaporimeter developed by Nilsson. This new method combines the water vapour transmission rate (WVTR) and the vapour pressure difference across a wound covering in one absolute measure: the water vapour permeance (WVP). The WVP of a wound covering is the steady flow (g) of water vapour per unit (m2) area of surface in unit (h) time induced by unit (kPa) vapour pressure difference, g.m-2.h-1.kPa-1. Since the WVP of a wound covering is a more accurate measure for the permeability than the WVTR is, it facilitates the prediction of the water exchange of a wound covering in clinical situations.
High-spin terminating states in the N = 88 Ho 155 and Er 156 isotones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rees, J. M.; Paul, E. S.; Simpson, J.
2015-05-01
The Sn-124(Cl-37, 6n gamma) fusion-evaporation reaction at a bombarding energy of 180 MeV has been used to significantly extend the excitation level scheme of Ho-155(67)88. The collective rotational behavior of this nucleus breaks down above spin I similar to 30 and a fully aligned noncollective (band terminating) state has been identified at I-pi = 79/2(-). Comparison with cranked Nilsson-Strutinsky calculations also provides evidence for core-excited noncollective states at I-pi = 87/2(-) and (89/2(+)) involving particle-hole excitations across the Z = 64 shell gap. A similar core-excited state in Er-156(68)88 at I-pi = (46(+)) is also presented.
Consistent Pauli reduction on group manifolds
Baguet, A.; Pope, Christopher N.; Samtleben, H.
2016-01-01
We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NSNS sector of supergravity (and more general the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G×G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk–Schwarz reduction ansatz in double field theory which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S3×S3 and on similar product spaces. The construction ismore » another example of globally geometric non-toroidal compactifications inducing non-geometric fluxes.« less
Description of rotating N=Z nuclei in terms of isovector pairing
NASA Astrophysics Data System (ADS)
Afanasjev, A. V.; Frauendorf, S.
2005-06-01
A systematic investigation of the rotating N=Z even-even nuclei in the mass A=68-80 region has been performed within the frameworks of the cranked relativistic mean field, cranked relativistic Hartree-Bogoliubov theories, and cranked Nilsson-Strutinsky approach. Most of the experimental data are well accounted for in the calculations. The present study suggests the presence of strong isovector np pair field at low spin, whose strength is defined by the isospin symmetry. At high spin, the isovector pair field is destroyed and the data are well described by the calculations assuming zero pairing. No clear evidence for the existence of the isoscalar t=0 np pairing has been obtained in the present investigation performed at the mean field level.
Triaxial-band structures, chirality, and magnetic rotation in La 133
Petrache, C. M.; Chen, Q. B.; Guo, S.; ...
2016-12-05
The structure of 133La has been investigated using the 116Cd( 22Ne,4pn) reaction and the Gammasphere array. Three new bands of quadrupole transitions and one band of dipole transitions are identified and the previously reported level scheme is revised and extended to higher spins. The observed structures are discussed using the cranked Nilsson-Strutinsky formalism, covariant density functional theory, and the particle-rotor model. Triaxial configurations are assigned to all observed bands. For the high-spin bands it is found that rotations around different axes can occur, depending on the configuration. The orientation of the angular momenta of the core and of themore » active particles is investigated, suggesting chiral rotation for two nearly degenerate dipole bands and magnetic rotation for one dipole band. As a result, it is shown that the h 11/2 neutron holes present in the configuration of the nearly degenerate dipole bands have significant angular momentum components not only along the long axis but also along the short axis, contributing to the balance of the angular momentum components along the short and long axes and thus giving rise to a chiral geometry.« less
Seven-quasiparticle bands in {sup 139}Ce
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chanda, Somen; Bhattacharjee, Tumpa; Bhattacharyya, Sarmishtha
2009-05-15
The high spin states in the {sup 139}Ce nucleus have been studied by in-beam {gamma}-spectroscopic techniques using the reaction {sup 130}Te({sup 12}C,3n){sup 139}Ce at E{sub beam}=65 MeV. A gamma detector array, consisting of five Compton-suppressed Clover detectors was used for coincidence measurements. 15 new levels have been proposed and 28 new {gamma} transitions have been assigned to {sup 139}Ce on the basis of {gamma}{gamma} coincidence data. The level scheme of {sup 139}Ce has been extended above the known 70 ns (19/2){sup -} isomer up to {approx}6.1 MeV in excitation energy and (35/2)({Dirac_h}/2{pi}) in spin. The spin-parity assignments for most ofmore » the newly proposed levels have been made using the deduced Directional Correlation from Oriented states of nuclei (DCO ratio) and the Polarization Directional Correlation from Oriented states (PDCO ratio) for the de-exciting transitions. The observed level structure has been compared with a large basis shell model calculation and also with the predictions from cranked Nilsson-Strutinsky (CNS) calculations. A general consistency has been observed between these two different theoretical approaches.« less
Fission barriers at the end of the chart of the nuclides
NASA Astrophysics Data System (ADS)
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; Iwamoto, Akira; Mumpower, Matthew
2015-02-01
We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤A ≤330 . The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than 5 000 000 different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ɛ ) and the spherical-harmonic (β ) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ɛ ,γ ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about 1 MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β -delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. These studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
NASA Astrophysics Data System (ADS)
West, G.; O'Regan, M.; Jakobsson, M.; Nilsson, A.; Pearce, C.; Snowball, I.; Wiers, S.
2017-12-01
The lack of high-temporal resolution and well-dated palaeomagnetic records from the Arctic Ocean hinders our understanding of geomagnetic field behaviour in the region, and limits the applicability of these records in the development of accurate age models for Arctic Ocean sediments. We present a palaeomagnetic secular variation (PSV) record from a sediment core recovered from the Chukchi Sea, Arctic Ocean during the SWERUS-C3 Leg 2 Expedition. The 8.24-metre-long core was collected at 57 m water depth in the Herald Canyon (72.52° N 175.32° W), and extends to 4200 years BP based on 14 AMS 14C dates and a tephra layer associated with the 3.6 cal ka BP Aniakchak eruption. Palaeomagnetic measurements and magnetic analyses of discrete samples reveal stable characteristic remanent magnetisation directions, and a magnetic mineralogy dominated by magnetite. Centennial to millennial scale declination and inclination features, which correlate well to other Western Arctic records, can be readily identified. The relative palaeointensity record of the core matches well with spherical harmonic field model outputs of pfm9k (Nilsson et al., 2014) and CALS10k.2 (Constable et al. 2016) for the site location. Supported by a robust chronology, the presented high-resolution PSV record can potentially play a key role in constructing a well-dated master chronology for the region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarman, Kristin H.; Wahl, Karen L.
The concept of rapid microorganism identification using matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) dates back to the mid-1990’s. Prior to 1998, researchers relied on visual inspection in an effort to demonstrate feasibility of MALDI-MS for bacterial identification (Holland, Wilkes et al. 1996), (Krishnamurthy and Ross 1996), (Claydon, Davey et al. 1996). In general, researchers in these early studies visually compared the biomarker intensity profiles between different organisms and between replicates of the same organism to show that MALDI signatures are unique and reproducible. Manual tabulation and comparison of potential biomarker mass values observed for different organisms was used by numerousmore » researchers to qualitatively characterize microorganisms using MALDI-MS spectra (e.g. (Lynn, Chung et al. 1999), (Winkler, Uher et al. 1999), (Ryzhov, Hathout et al. 2000), (Nilsson 1999)).« less
Bioreactors for Tissue Engineering of Cartilage
NASA Astrophysics Data System (ADS)
Concaro, S.; Gustavson, F.; Gatenholm, P.
The cartilage regenerative medicine field has evolved during the last decades. The first-generation technology, autologous chondrocyte transplantation (ACT) involved the transplantation of in vitro expanded chondrocytes to cartilage defects. The second generation involves the seeding of chondrocytes in a three-dimensional scaffold. The technique has several potential advantages such as the ability of arthroscopic implantation, in vitro pre-differentiation of cells and implant stability among others (Brittberg M, Lindahl A, Nilsson A, Ohlsson C, Isaksson O, Peterson L, N Engl J Med 331(14):889-895, 1994; Henderson I, Francisco R, Oakes B, Cameron J, Knee 12(3):209-216, 2005; Peterson L, Minas T, Brittberg M, Nilsson A, Sjogren-Jansson E, Lindahl A, Clin Orthop (374):212-234, 2000; Nagel-Heyer S, Goepfert C, Feyerabend F, Petersen JP, Adamietz P, Meenen NM, et al. Bioprocess Biosyst Eng 27(4):273-280, 2005; Portner R, Nagel-Heyer S, Goepfert C, Adamietz P, Meenen NM, J Biosci Bioeng 100(3):235-245, 2005; Nagel-Heyer S, Goepfert C, Adamietz P, Meenen NM, Portner R, J Biotechnol 121(4):486-497, 2006; Heyland J, Wiegandt K, Goepfert C, Nagel-Heyer S, Ilinich E, Schumacher U, et al. Biotechnol Lett 28(20):1641-1648, 2006). The nutritional requirements of cells that are synthesizing extra-cellular matrix increase along the differentiation process. The mass transfer must be increased according to the tissue properties. Bioreactors represent an attractive tool to accelerate the biochemical and mechanical properties of the engineered tissues providing adequate mass transfer and physical stimuli. Different reactor systems have been [5] developed during the last decades based on different physical stimulation concepts. Static and dynamic compression, confined and nonconfined compression-based reactors have been described in this review. Perfusion systems represent an attractive way of culturing constructs under dynamic conditions. Several groups showed increased matrix production using confined and unconfined systems. Development of automatic culture systems and noninvasive monitoring of matrix production will take place during the next few years in order to improve the cost affectivity of tissue-engineered products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi
We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop (FRLDM) with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than five million different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ϵ) andmore » the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ϵ,γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about one MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. In addition these studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.« less
Nuclear Structure in China 2010
NASA Astrophysics Data System (ADS)
Bai, Hong-Bo; Meng, Jie; Zhao, En-Guang; Zhou, Shan-Gui
2011-08-01
Personal view on nuclear physics research / Jie Meng -- High-spin level structures in [symbol]Zr / X. P. Cao ... [et al.] -- Constraining the symmetry energy from the neutron skin thickness of tin isotopes / Lie-Wen Chen ... [et al.] -- Wobbling rotation in atomic nuclei / Y. S. Chen and Zao-Chun Gao -- The mixing of scalar mesons and the possible nonstrange dibaryons / L. R. Dai ... [et al.] -- Net baryon productions and gluon saturation in the SPS, RHIC and LHC energy regions / Sheng-Qin Feng -- Production of heavy isotopes with collisions between two actinide nuclides / Z. Q. Feng ... [et al.] -- The projected configuration interaction method / Zao-Chun Gao and Yong-Shou Chen -- Applications of Nilsson mean-field plus extended pairing model to rare-earth nuclei / Xin Guan ... [et al.] -- Complex scaling method and the resonant states / Jian-You Guo ... [et al.] -- Probing the equation of state by deep sub-barrier fusion reactions / Hong-Jun Hao and Jun-Long Tian -- Doublet structure study in A[symbol]105 mass region / C. Y. He ... [et al.] -- Rotational bands in transfermium nuclei / X. T. He -- Shape coexistence and shape evolution [symbol]Yb / H. Hua ... [et al.] -- Multistep shell model method in the complex energy plane / R. J. Liotta -- The evolution of protoneutron stars with kaon condensate / Ang Li -- High spin structures in the [symbol]Lu nucleus / Li Cong-Bo ... [et al.] -- Nuclear stopping and equation of state / QingFeng Li and Ying Yuan -- Covariant description of the low-lying states in neutron-deficient Kr isotopes / Z. X. Li ... [et al.] -- Isospin corrections for superallowed [symbol] transitions / HaoZhao Liang ... [et al.] -- The positive-parity band structures in [symbol]Ag / C. Liu ... [et al.] -- New band structures in odd-odd [symbol]I and [symbol]I / Liu GongYe ... [et al.] -- The sd-pair shell model and interacting boson model / Yan-An Luo ... [et al.] -- Cross-section distributions of fragments in the calcium isotopes projectile fragmentation at the intermediate energy / C. W. Ma ... [et al.].Systematic study of spin assignment and dynamic moment of inertia of high-j intruder band in [symbol]In / K. Y. Ma ... [et al.] -- Signals of diproton emission from the three-body breakup channel of [symbol]Al and [symbol]Mg / Ma Yu-Gang ... [et al.] -- Uncertainties of Th/Eu and Th/Hf chronometers from nucleus masses / Z. M. Niu ... [et al.] -- The chiral doublet bands with [symbol] configuration in A[symbol]100 mass region / B. Qi ... [et al.] -- [symbol] formation probabilities in nuclei and pairing collectivity / Chong Qi -- A theoretical prospective on triggered gamma emission from [symbol]Hf[symbol] isomer / ShuiFa Shen ... [et al.] -- Study of nuclear giant resonances using a Fermi-liquid method / Bao-Xi Sun -- Rotational bands in doubly odd [symbol]Sb / D. P. Sun ... [et al.] -- The study of the neutron N=90 nuclei / W. X. Teng ... [et al.] -- Dynamical modes and mechanisms in ternary reaction of [symbol]Au+[symbol]Au / Jun-Long Tian ... [et al.] -- Dynamical study of X(3872) as a D[symbol] molecular state / B. Wang ... [et al.] -- Super-heavy stability island with a semi-empirical nuclear mass formula / N. Wang ... [et al.] -- Pseudospin partner bands in [symbol]Sb / S. Y. Wang ... [et al.] -- Study of elastic resonance scattering at CIAE / Y. B. Wang ... [et al.] -- Systematic study of survival probability of excited superheavy nuclei / C. J. Xia ... [et al.] -- Angular momentum projection of the Nilsson mean-field plus nearest-orbit pairing interaction model / Ming-Xia Xie ... [et al.] -- Possible shape coexistence for [symbol]Sm in a reflection-asymmetric relativistic mean-field approach / W. Zhang ... [et al.] -- Nuclear pairing reduction due to rotation and blocking / Zhen-Hua Zhang -- Nucleon pair approximation of the shell model: a review and perspective / Y. M. Zhao ... [et al.] -- Band structures in doubly odd [symbol]I / Y. Zheng ... [et al.] -- Lifetimes of high spin states in [symbol]Ag / Y. Zheng ... [et al.] -- Effect of tensor interaction on the shell structure of superheavy nuclei / Xian-Rong Zhou ... [et al.].
Photodisintegration cross section of the reaction 4He(γ,n)3He at the giant dipole resonance peak
NASA Astrophysics Data System (ADS)
Tornow, W.; Kelley, J. H.; Raut, R.; Rusev, G.; Tonchev, A. P.; Ahmed, M. W.; Crowell, A. S.; Stave, S. C.
2012-06-01
The photodisintegration cross section of 4He into a neutron and helion was measured at incident photon energies of 27.0, 27.5, and 28.0 MeV. A high-pressure 4He-Xe gas scintillator served as target and detector while a pure Xe gas scintillator was used for background measurements. A NaI detector in combination with the standard HIγS scintillator paddle system was employed for absolute photon-flux determination. Our data are in good agreement with the theoretical prediction of the Trento group and the recent data of Nilsson [Phys. Rev. CPRVCAN0556-281310.1103/PhysRevC.75.014007 75, 014007 (2007)] but deviate considerably from the high-precision data of Shima [Phys. Rev. CPRVCAN0556-281310.1103/PhysRevC.72.044004 72, 044004 (2005)].
Single-particle and collective motion in unbound deformed 39Mg
NASA Astrophysics Data System (ADS)
Fossez, K.; Rotureau, J.; Michel, N.; Liu, Quan; Nazarewicz, W.
2016-11-01
Background: Deformed neutron-rich magnesium isotopes constitute a fascinating territory where the interplay between collective rotation and single-particle motion is strongly affected by the neutron continuum. The unbound f p -shell nucleus 39Mg is an ideal candidate to study this interplay. Purpose: In this work, we predict the properties of low-lying resonant states of 39Mg, using a suite of realistic theoretical approaches rooted in the open quantum system framework. Method: To describe the spectrum and decay modes of 39Mg we use the conventional shell model, Gamow shell model, resonating group method, density matrix renormalization group method, and the nonadiabatic particle-plus-rotor model formulated in the Berggren basis. Results: The unbound ground state of 39Mg is predicted to be either a Jπ=7/2 - state or a 3/2 - state. A narrow Jπ=7/2 - ground-state candidate exhibits a resonant structure reminiscent of that of its one-neutron halo neighbor 37Mg, which is dominated by the f7 /2 partial wave at short distances and a p3 /2 component at large distances. A Jπ=3/2 - ground-state candidate is favored by the large deformation of the system. It can be associated with the 1/2 -[321 ] Nilsson orbital dominated by the ℓ =1 wave; hence its predicted width is large. The excited Jπ=1/2 - and 5 /2- states are expected to be broad resonances, while the Jπ=9/2 - and 11/2 - members of the ground-state rotational band are predicted to have very small neutron decay widths. Conclusion: We demonstrate that the subtle interplay between deformation, shell structure, and continuum coupling can result in a variety of excitations in an unbound nucleus just outside the neutron drip line.
In-beam spectroscopy of medium- and high-spin states in Ce 133
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayangeakaa, A. D.; Garg, U.; Petrache, C. M.
2016-05-01
Medium and high-spin states in Ce-133 were investigated using the Cd-116(Ne-22, 5n) reaction and the Gammasphere array. The level scheme was extended up to an excitation energy of similar to 22.8 MeV and spin 93/2 (h) over bar. Eleven bands of quadrupole transitions and two new dipole bands are identified. The connections to low-lying states of the previously known, high-spin triaxial bands were firmly established, thus fixing the excitation energy and, in many cases, the spin parity of the levels. Based on comparisons with cranked Nilsson-Strutinsky calculations and tilted axis cranking covariant density functional theory, it is shown that allmore » observed bands are characterized by pronounced triaxiality. Competing multiquasiparticle configurations are found to contribute to a rich variety of collective phenomena in this nucleus.« less
NASA Astrophysics Data System (ADS)
Patel, Z.; Walker, P. M.; Podolyák, Zs.; Regan, P. H.; Berry, T. A.; Söderström, P.-A.; Watanabe, H.; Ideguchi, E.; Simpson, G. S.; Nishimura, S.; Wu, Q.; Xu, F. R.; Browne, F.; Doornenbal, P.; Lorusso, G.; Rice, S.; Sinclair, L.; Sumikama, T.; Wu, J.; Xu, Z. Y.; Aoi, N.; Baba, H.; Bello Garrote, F. L.; Benzoni, G.; Daido, R.; Dombrádi, Zs.; Fang, Y.; Fukuda, N.; Gey, G.; Go, S.; Gottardo, A.; Inabe, N.; Isobe, T.; Kameda, D.; Kobayashi, K.; Kobayashi, M.; Komatsubara, T.; Kojouharov, I.; Kubo, T.; Kurz, N.; Kuti, I.; Li, Z.; Matsushita, M.; Michimasa, S.; Moon, C.-B.; Nishibata, H.; Nishizuka, I.; Odahara, A.; Şahin, E.; Sakurai, H.; Schaffner, H.; Suzuki, H.; Takeda, H.; Tanaka, M.; Taprogge, J.; Vajta, Zs.; Yagi, A.; Yokoyama, R.
2017-09-01
Excited states have been studied in 159Sm, 161Sm, 162Sm (Z =62 ), 163Eu (Z =63 ), and 164Gd (Z =64 ), populated by isomeric decay following 238U projectile fission at RIBF, RIKEN. The isomer half-lives range from 50 ns to 2.6 μ s . In comparison with other published data, revised interpretations are proposed for 159Sm and 163Eu. The first data for excited states in 161Sm are presented, where a 2.6-μ s isomer is assigned a three-quasiparticle, Kπ=17 /2- structure. The interpretation is supported by multi-quasiparticle Nilsson-BCS calculations, including the blocking of pairing correlations. A consistent set of reduced E 1 hindrance factors is obtained. Limited evidence is also reported for isomeric decay in 163Sm, 164Eu, and 165Eu.
Simple Interpretation of Proton-Neutron Interactions in Rare Earth Nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oktem, Y.; Cakirli, R. B.; Wright Nuclear Structure Laboratory, Yale University, New Haven, CT 06520
2007-04-23
Empirical values of the average interactions of the last two protons and last two neutrons, {delta}Vpn, which can be obtained from double differences of binding energies, provide significant information about nuclear structure. Studies of {delta}Vpn showed striking behavior across major shell gaps and the relation of proton-neutron (p-n) interaction strengths to the increasing collectivity and onset of deformation in nuclei. Here we focus on the strong regularity at the {delta}Vpn values in A{approx}150-180 mass region. Experimentally, for each nucleus, the valence p-n interaction strengths increase systematically against the neutron number and it decreases for the observed last neutron number. Thesemore » experimental results give almost nearly perfect parallel trajectories. A microscopic interpretation with a zero range {delta}-interaction in a Nilsson basis gives reasonable agreement for Er-W but more significant discrepancies appear for Gd and Dy.« less
Fission barriers at the end of the chart of the nuclides
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; ...
2015-02-12
We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop (FRLDM) with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than five million different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ϵ) andmore » the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ϵ,γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about one MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. In addition these studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.« less
Evolution of the ATLAS PanDA Production and Distributed Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maeno, T.; De, K.; Wenaus, T.
2012-12-13
Evolution of the ATLAS PanDA Production and Distributed Analysis System T Maeno1,5, K De2, T Wenaus1, P Nilsson2, R Walker3, A Stradling2, V Fine1, M Potekhin1, S Panitkin1 and G Compostella4 Published under licence by IOP Publishing Ltd Journal of Physics: Conference Series, Volume 396, Part 3 Article PDF References Citations Metrics 101 Total downloads Cited by 8 articles Turn on MathJax Share this article Article information Abstract The PanDA (Production and Distributed Analysis) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at LHC data processing scale. PanDAmore » has performed well with high reliability and robustness during the two years of LHC data-taking, while being actively evolved to meet the rapidly changing requirements for analysis use cases. We will present an overview of system evolution including automatic rebrokerage and reattempt for analysis jobs, adaptation for the CernVM File System, support for the multi-cloud model through which Tier-2 sites act as members of multiple clouds, pledged resource management and preferential brokerage, and monitoring improvements. We will also describe results from the analysis of two years of PanDA usage statistics, current issues, and plans for the future.« less
Configuration-constrained cranking Hartree-Fock pairing calculations for sidebands of nuclei
NASA Astrophysics Data System (ADS)
Liang, W. Y.; Jiao, C. F.; Wu, Q.; Fu, X. M.; Xu, F. R.
2015-12-01
Background: Nuclear collective rotations have been successfully described by the cranking Hartree-Fock-Bogoliubov (HFB) model. However, for rotational sidebands which are built on intrinsic excited configurations, it may not be easy to find converged cranking HFB solutions. The nonconservation of the particle number in the BCS pairing is another shortcoming. To improve the pairing treatment, a particle-number-conserving (PNC) pairing method was suggested. But the existing PNC calculations were performed within a phenomenological one-body potential (e.g., Nilsson or Woods-Saxon) in which one has to deal the double-counting problem. Purpose: The present work aims at an improved description of nuclear rotations, particularly for the rotations of excited configurations, i.e., sidebands. Methods: We developed a configuration-constrained cranking Skyrme Hartree-Fock (SHF) calculation with the pairing correlation treated by the PNC method. The PNC pairing takes the philosophy of the shell model which diagonalizes the Hamiltonian in a truncated model space. The cranked deformed SHF basis provides a small but efficient model space for the PNC diagonalization. Results: We have applied the present method to the calculations of collective rotations of hafnium isotopes for both ground-state bands and sidebands, reproducing well experimental observations. The first up-bendings observed in the yrast bands of the hafnium isotopes are reproduced, and the second up-bendings are predicted. Calculations for rotational bands built on broken-pair excited configurations agree well with experimental data. The band-mixing between two Kπ=6+ bands observed in 176Hf and the K purity of the 178Hf rotational state built on the famous 31 yr Kπ=16+ isomer are discussed. Conclusions: The developed configuration-constrained cranking calculation has been proved to be a powerful tool to describe both the yrast bands and sidebands of deformed nuclei. The analyses of rotational moments of inertia help to understand the structures of nuclei, including rotational alignments, configurations, and competitions between collective and single-particle excitations.
Triplet correlation functions in liquid water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in; Singh, Murari
Triplet correlations have been shown to play a crucial role in the transformation of simple liquids to anomalous tetrahedral fluids [M. Singh, D. Dhabal, A. H. Nguyen, V. Molinero, and C. Chakravarty, Phys. Rev. Lett. 112, 147801 (2014)]. Here we examine triplet correlation functions for water, arguably the most important tetrahedral liquid, under ambient conditions, using configurational ensembles derived from molecular dynamics (MD) simulations and reverse Monte Carlo (RMC) datasets fitted to experimental scattering data. Four different RMC data sets with widely varying hydrogen-bond topologies fitted to neutron and x-ray scattering data are considered [K. T. Wikfeldt, M. Leetmaa, M.more » P. Ljungberg, A. Nilsson, and L. G. M. Pettersson, J. Phys. Chem. B 113, 6246 (2009)]. Molecular dynamics simulations are performed for two rigid-body effective pair potentials (SPC/E and TIP4P/2005) and the monatomic water (mW) model. Triplet correlation functions are compared with other structural measures for tetrahedrality, such as the O–O–O angular distribution function and the local tetrahedral order distributions. In contrast to the pair correlation functions, which are identical for all the RMC ensembles, the O–O–O triplet correlation function can discriminate between ensembles with different degrees of tetrahedral network formation with the maximally symmetric, tetrahedral SYM dataset displaying distinct signatures of tetrahedrality similar to those obtained from atomistic simulations of the SPC/E model. Triplet correlations from the RMC datasets conform closely to the Kirkwood superposition approximation, while those from MD simulations show deviations within the first two neighbour shells. The possibilities for experimental estimation of triplet correlations of water and other tetrahedral liquids are discussed.« less
Monte Carlo Simulations for VLBI2010
NASA Astrophysics Data System (ADS)
Wresnik, J.; Böhm, J.; Schuh, H.
2007-07-01
Monte Carlo simulations are carried out at the Institute of Geodesy and Geophysics (IGG), Vienna, and at Goddard Space Flight Center (GSFC), Greenbelt (USA), with the goal to design a new geodetic Very Long Baseline Interferometry (VLBI) system. Influences of the schedule, the network geometry and the main stochastic processes on the geodetic results are investigated. Therefore schedules are prepared with the software package SKED (Vandenberg 1999), and different strategies are applied to produce temporally very dense schedules which are compared in terms of baseline length repeatabilities. For the simulation of VLBI observations a Monte Carlo Simulator was set up which creates artificial observations by randomly simulating wet zenith delay and clock values as well as additive white noise representing the antenna errors. For the simulation at IGG the VLBI analysis software OCCAM (Titov et al. 2004) was adapted. Random walk processes with power spectrum densities of 0.7 and 0.1 psec2/sec are used for the simulation of wet zenith delays. The clocks are simulated with Allan Standard Deviations of 1*10^-14 @ 50 min and 2*10^-15 @ 15 min and three levels of white noise, 4 psec, 8 psec and, 16 psec, are added to the artificial observations. The variations of the power spectrum densities of the clocks and wet zenith delays, and the application of different white noise levels show clearly that the wet delay is the critical factor for the improvement of the geodetic VLBI system. At GSFC the software CalcSolve is used for the VLBI analysis, therefore a comparison between the software packages OCCAM and CalcSolve was done with simulated data. For further simulations the wet zenith delay was modeled by a turbulence model. This data was provided by Nilsson T. and was added to the simulation work. Different schedules have been run.
NASA Astrophysics Data System (ADS)
Moon, B.; Moon, C.-B.; Odahara, A.; Lozeva, R.; Söderström, P.-A.; Browne, F.; Yuan, C.; Yagi, A.; Hong, B.; Jung, H. S.; Lee, P.; Lee, C. S.; Nishimura, S.; Doornenbal, P.; Lorusso, G.; Sumikama, T.; Watanabe, H.; Kojouharov, I.; Isobe, T.; Baba, H.; Sakurai, H.; Daido, R.; Fang, Y.; Nishibata, H.; Patel, Z.; Rice, S.; Sinclair, L.; Wu, J.; Xu, Z. Y.; Yokoyama, R.; Kubo, T.; Inabe, N.; Suzuki, H.; Fukuda, N.; Kameda, D.; Takeda, H.; Ahn, D. S.; Shimizu, Y.; Murai, D.; Bello Garrote, F. L.; Daugas, J. M.; Didierjean, F.; Ideguchi, E.; Ishigaki, T.; Morimoto, S.; Niikura, M.; Nishizuka, I.; Komatsubara, T.; Kwon, Y. K.; Tshoo, K.
2017-07-01
We report for the first time the β -decay scheme of 140Te (Z =52 ) to 140I (Z =53 ), with a specific focus on the Gamow-Teller strength along N =87 isotones. These results were obtained in an experiment performed at the Radioactive Ion Beam Factory (RIBF), RIKEN, where the parent nuclide, 140Te, was produced through the in-flight fission of a 238U beam at 345 MeV per nucleon impinging on a 9Be target. Based on data from the high-efficiency γ -ray spectrometer, EUROBALL-RIKEN Cluster Array (EURICA), we constructed a decay scheme of 140I. The half-life of 140Te has been determined to be 350(5) ms. A level at 926 keV has been assigned as a (1+) state based on the logf t value of 4.89(6). This (1+) state, commonly observed in odd-odd nuclei, can be interpreted in terms of the π h11 /2ν h9 /2 configuration formed by the Gamow-Teller transition between a neutron in the h9 /2 orbital and a proton in the h11 /2 orbital. We observe a sharp contrast to this type of β -decay branching to the lower-lying 1+ states between 140I and 136I, where we see a large reduction as the number of neutrons increases. This is in contrast to the prediction by large-scale shell model calculations. To investigate this type of the suppression, results of the Nilsson model calculations will be discussed. Along the isotones with N =87 , we discuss a characteristic feature of the Gamow-Teller distributions at 1+ states with respect to the isospin difference.
New species and host plants of Anastrepha (Diptera: Tephritidae) primarily from Peru and Bolivia.
Norrbom, Allen L; Rodriguez, Erick J; Steck, Gary J; Sutton, Bruce A; Nolazco, Norma
2015-11-16
Twenty-eight new species of Anastrepha are described and illustrated: A. acca (Bolivia, Peru), A. adami (Peru), A. amplidentata (Bolivia, Peru), A. annonae (Peru), A. breviapex (Peru), A. caballeroi (Peru), A. camba (Bolivia, Peru), A. cicra (Bolivia, Peru), A. disjuncta (Peru), A. durantae (Peru), A. echaratiensis (Peru), A. eminens (Peru), A. ericki (Peru), A. gonzalezi (Bolivia, Peru), A. guevarai (Peru), A. gusi (Peru), A. kimi (Colombia, Peru), A. korytkowskii (Bolivia, Peru), A. latilanceola (Bolivia, Peru), A. melanoptera (Peru), A. mollyae (Bolivia, Peru), A. perezi (Peru), A. psidivora (Peru), A. robynae (Peru), A. rondoniensis (Brazil, Peru), A. tunariensis (Bolivia, Peru), A. villosa (Bolivia), and A. zacharyi (Peru). The following host plant records are reported: A. amplidentata from Spondias mombin L. (Anacardiaceae); A. caballeroi from Quararibea malacocalyx A. Robyns & S. Nilsson (Malvaceae); A. annonae from Annona mucosa Jacq. and Annona sp. (Annonaceae); A. durantae from Duranta peruviana Moldenke (Verbenaceae); and A. psidivora from Psidium guajava L. (Myrtaceae).
New isomer and decay half-life of {sup 115}Ru
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurpeta, J.; Plochocki, A.; Rissanen, J.
2010-12-15
Exotic, neutron-rich nuclei of mass A=115 produced in proton-induced fission of {sup 238}U were extracted using the IGISOL mass separator. The beam of isobars was transferred to the JYFLTRAP Penning trap system for further separation to the isotopic level. Monoisotopic samples of {sup 115}Ru nuclei were used for {gamma}and {beta} coincidence spectroscopy. In {sup 115}Ru we have observed excited levels, including an isomer with a half-life of 76(6) ms and (7/2{sup -}) spin and parity. The first excited 61.7-keV level in {sup 115}Ru with spins and parity (3/2{sup +}) may correspond to an oblate 3/2{sup +}[431] Nilsson orbital. A half-lifemore » of 318(19) ms for the {beta}{sup -} decay of the (1/2{sup +}) ground state in {sup 115}Ru has been firmly established in two independent measurements, a value which is significantly shorter than that previously reported.« less
Levels in 227Ac populated in the 230Th( p, α) reaction
NASA Astrophysics Data System (ADS)
Burke, D. G.; Garrett, P. E.; Qu, Tao
2003-09-01
The 230,232Th(p, α) 227,229Ac reactions were studied using 20 MeV protons and a magnetic spectrograph to analyze the reaction products. Relative populations of levels in 229Ac correlated well with previously published (t, α) results for the same final levels, showing that the similarity of the two reactions observed empirically in the deformed rare earth region extends to actinides. The most strongly populated level in 227Ac is at 639 keV, and is assigned as the 1/2 +[4 0 0] bandhead. The 435 keV level, previously adopted as the 1/2 +[6 6 0] bandhead, also has a significant intensity that is attributed to Δ N=2 mixing between these two K=1/2 proton orbitals. The Δ N=2 matrix element estimated from these data is ˜80 keV, similar to values observed for the same two Nilsson states as neutron orbitals in the dysprosium isotopes.
Plasma 1-carbon metabolites and academic achievement in 15-yr-old adolescents
Nilsson, Torbjörn K.; Hurtig-Wennlöf, Anita; Sjöström, Michael; Herrmann, Wolfgang; Obeid, Rima; Owen, Jennifer R.; Zeisel, Steven
2015-01-01
Academic achievement in adolescents is correlated with 1-carbon metabolism (1-CM), as folate intake is positively related and total plasma homocysteine (tHcy) negatively related to academic success. Because another 1-CM nutrient, choline is essential for fetal neurocognitive development, we hypothesized that choline and betaine could also be positively related to academic achievement in adolescents. In a sample of 15-yr-old children (n = 324), we measured plasma concentrations of homocysteine, choline, and betaine and genotyped them for 2 polymorphisms with effects on 1-CM, methylenetetrahydrofolate reductase (MTHFR) 677C>T, rs1801133, and phosphatidylethanolamine N-methyltransferase (PEMT), rs12325817 (G>C). The sum of school grades in 17 major subjects was used as an outcome measure for academic achievement. Lifestyle and family socioeconomic status (SES) data were obtained from questionnaires. Plasma choline was significantly and positively associated with academic achievement independent of SES factors (paternal education and income, maternal education and income, smoking, school) and of folate intake (P = 0.009, R2 = 0.285). With the addition of the PEMT rs12325817 polymorphism, the association value was only marginally changed. Plasma betaine concentration, tHcy, and the MTHFR 677C>T polymorphism did not affect academic achievement in any tested model involving choline. Dietary intake of choline is marginal in many adolescents and may be a public health concern.—Nilsson, T. K., Hurtig-Wennlöf, A., Sjöström, M., Herrmann, W., Obeid, R., Owen, J. R., Zeisel, S. Plasma 1-carbon metabolites and academic achievement in 15-yr-old adolescents. PMID:26728177
Rönnlund, Michael; Nilsson, Lars-Göran
2009-09-01
The study examined the extent to which time-related gains in cognitive performance, so-called Flynn effects, generalize across sub-factors of episodic memory (recall and recognition) and semantic memory (knowledge and fluency). We conducted time-sequential analyses of data drawn from the Betula prospective cohort study, involving four age-matched samples (35-80 years; N=2996) tested on the same battery of memory tasks on either of four occasions (1989, 1995, 1999, and 2004). The results demonstrate substantial time-related improvements on recall and recognition as well as on fluency and knowledge, with a trend of larger gains on semantic as compared with episodic memory [Rönnlund, M., & Nilsson, L. -G. (2008). The magnitude, generality, and determinants of Flynn effects on forms of declarative memory: Time-sequential analyses of data from a Swedish cohort study. Intelligence], but highly similar gains across the sub-factors. Finally, the association with markers of environmental change was similar, with evidence that historical increases in quantity of schooling was a main driving force behind the gains, both on the episodic and semantic sub-factors. The results obtained are discussed in terms of brain regions involved.
Lennernäs, B; Edgren, M; Nilsson, S
1999-01-01
The purpose of this study was to evaluate the precision of a sensor and to ascertain the maximum distance between the sensor and the magnet, in a magnetic positioning system for external beam radiotherapy using a trained artificial intelligence neural network for position determination. Magnetic positioning for radiotherapy, previously described by Lennernäs and Nilsson, is a functional technique, but it is time consuming. The sensors are large and the distance between the sensor and the magnetic implant is limited to short distances. This paper presents a new technique for positioning, using an artificial intelligence neural network, which was trained to position the magnetic implant with at least 0.5 mm resolution in X and Y dimensions. The possibility of using the system for determination in the Z dimension, that is the distance between the magnet and the sensor, was also investigated. After training, this system positioned the magnet with a mean error of maximum 0.15 mm in all dimensions and up to 13 mm from the sensor. Of 400 test positions, 8 determinations had an error larger than 0.5 mm, maximum 0.55 mm. A position was determined in approximately 0.01 s.
Single-particle and collective excitations in Ni 62
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albers, M.; Zhu, S.; Ayangeakaa, A. D.
In this study, level sequences of rotational character have been observed in several nuclei in the A = 60 mass region. The importance of the deformation-driving πf 7/2 and νg 9/2 orbitals on the onset of nuclear deformation is stressed. A measurement was performed in order to identify collective rotational structures in the relatively neutron-rich 62Ni isotope. Here, the 26Mg( 48Ca,2α4nγ) 62Ni complex reaction at beam energies between 275 and 320 MeV was utilized. Reaction products were identified in mass (A) and charge (Z) with the fragment mass analyzer (FMA) and γ rays were detected with the Gammasphere array. Asmore » a result, two collective bands, built upon states of single-particle character, were identified and sizable deformation was assigned to both sequences based on the measured transitional quadrupole moments, herewith quantifying the deformation at high spin. In conclusion, based on cranked Nilsson-Strutinsky calculations and comparisons with deformed bands in the A = 60 mass region, the two rotational bands are understood as being associated with configurations involving multiple f 7/2 protons and g 9/2 neutrons, driving the nucleus to sizable prolate deformation.« less
Single-particle and collective excitations in Ni 62
Albers, M.; Zhu, S.; Ayangeakaa, A. D.; ...
2016-09-01
In this study, level sequences of rotational character have been observed in several nuclei in the A = 60 mass region. The importance of the deformation-driving πf 7/2 and νg 9/2 orbitals on the onset of nuclear deformation is stressed. A measurement was performed in order to identify collective rotational structures in the relatively neutron-rich 62Ni isotope. Here, the 26Mg( 48Ca,2α4nγ) 62Ni complex reaction at beam energies between 275 and 320 MeV was utilized. Reaction products were identified in mass (A) and charge (Z) with the fragment mass analyzer (FMA) and γ rays were detected with the Gammasphere array. Asmore » a result, two collective bands, built upon states of single-particle character, were identified and sizable deformation was assigned to both sequences based on the measured transitional quadrupole moments, herewith quantifying the deformation at high spin. In conclusion, based on cranked Nilsson-Strutinsky calculations and comparisons with deformed bands in the A = 60 mass region, the two rotational bands are understood as being associated with configurations involving multiple f 7/2 protons and g 9/2 neutrons, driving the nucleus to sizable prolate deformation.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashima, K.; Wilkowski, G.M.
1988-03-01
The third in a series of international Leak-Before-Break (LBB) Seminars supported in part by the US Nuclear Regulatory Commission was held at TEPCO Hall in the Tokyo Electric Power Company's (TEPCO) Electric Power Museum on May 14 and 15, 1987. The seminar updated the international policies and supporting research on LBB. Attendees included representatives from regulatory agencies, electric utility representatives, fabricators of nuclear power plants, research organizations, and university professors. Regulatory policy was the subject of presentations by Mr. G. Arlotto (US NRC, USA), Dr. H. Schultz (GRS, W. Germany), Dr. P. Milella (ENEA-DISP, Italy), Dr. C. Faidy, P. Jamet,more » and S. Bhandari (EDF/Septen, CEA/CEN, and Framatome, France), and Mr. T. Fukuzawa (MITI, Japan). Dr. F. Nilsson presented revised nondestructive inspection requirements relative to LBB in Sweden. In addition, several papers on the supporting research programs discussed regulatory policy. Questions following the presentations of the papers focused on the impact of various LBB policies or the impact of research findings. Supporting research programs were reviewed on the first and second day by several participants from the US, Japan, Germany, Canada, Italy, Sweden, England, and France.« less
Shearer, Joseph J.; Wold, Eric A.; Umbaugh, Charles S.; Lichti, Cheryl F.; Nilsson, Carol L.; Figueiredo, Marxa L.
2015-01-01
Background: The tumor microenvironment plays an important role in the progression of cancer by mediating stromal–epithelial paracrine signaling, which can aberrantly modulate cellular proliferation and tumorigenesis. Exposure to environmental toxicants, such as inorganic arsenic (iAs), has also been implicated in the progression of prostate cancer. Objective: The role of iAs exposure in stromal signaling in the tumor microenvironment has been largely unexplored. Our objective was to elucidate molecular mechanisms of iAs-induced changes to stromal signaling by an enriched prostate tumor microenvironment cell population, adipose-derived mesenchymal stem/stromal cells (ASCs). Results: ASC-conditioned media (CM) collected after 1 week of iAs exposure increased prostate cancer cell viability, whereas CM from ASCs that received no iAs exposure decreased cell viability. Cytokine array analysis suggested changes to cytokine signaling associated with iAs exposure. Subsequent proteomic analysis suggested a concentration-dependent alteration to the HMOX1/THBS1/TGFβ signaling pathway by iAs. These results were validated by quantitative reverse transcriptase–polymerase chain reaction (RT-PCR) and Western blotting, confirming a concentration-dependent increase in HMOX1 and a decrease in THBS1 expression in ASC following iAs exposure. Subsequently, we used a TGFβ pathway reporter construct to confirm a decrease in stromal TGFβ signaling in ASC following iAs exposure. Conclusions: Our results suggest a concentration-dependent alteration of stromal signaling: specifically, attenuation of stromal-mediated TGFβ signaling following exposure to iAs. Our results indicate iAs may enhance prostate cancer cell viability through a previously unreported stromal-based mechanism. These findings indicate that the stroma may mediate the effects of iAs in tumor progression, which may have future therapeutic implications. Citation: Shearer JJ, Wold EA, Umbaugh CS, Lichti CF, Nilsson CL, Figueiredo ML. 2016. Inorganic arsenic–related changes in the stromal tumor microenvironment in a prostate cancer cell–conditioned media model. Environ Health Perspect 124:1009–1015; http://dx.doi.org/10.1289/ehp.1510090 PMID:26588813
The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian
2010-01-01
Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems which have become cheaper and smaller in recent years. The most plausible way to check the seat occupancy by occupants is the detection of presence and location of heads, or more precisely, faces. This paper compares the detection performances of the three most commonly used and widely available face detection algorithms: Viola- Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body height as well as different objects such as bags and rearward/forward facing child restraint systems.
Wallace, I S; Donald, K; Munro, L A; Murray, W; Pert, C C; Stagg, H; Hall, M; Bain, N
2015-06-01
Viral haemorrhagic septicaemia virus (VHSV) was isolated from five species of wrasse (Labridae) used as biological controls for parasitic sea lice predominantly, Lepeophtheirus salmonis (Krøyer, 1837), on marine Atlantic salmon, Salmo salar L., farms in Shetland. As part of the epidemiological investigation, 1400 wild marine fish were caught and screened in pools of 10 for VHSV using virus isolation. Eleven pools (8%) were confirmed VHSV positive from: grey gurnard, Eutrigla gurnardus L.; Atlantic herring, Clupea harengus L.; Norway pout, Trisopterus esmarkii (Nilsson); plaice, Pleuronectes platessa L.; sprat, Sprattus sprattus L. and whiting, Merlangius merlangus L. The isolation of VHSV from grey gurnard is the first documented report in this species. Nucleic acid sequencing of the partial nucleocapsid (N) and glycoprotein (G) genes was carried out for viral characterization. Sequence analysis confirmed that all wild isolates were genotype III the same as the wrasse and there was a close genetic similarity between the isolates from wild fish and wrasse on the farms. Infection from these local wild marine fish is the most likely source of VHSV isolated from wrasse on the fish farms. © 2014 Crown Copyright. Journal of Fish Diseases © 2014 John Wiley & Sons Ltd.
Nuclear orientation of antimony and bromine isotopes
NASA Astrophysics Data System (ADS)
Barham, Christopher G.
The technique of Low Temperature Nuclear Orientation has been used to study neutron deficient antimony and bromine isotopes. The antimony and bromine isotopes were produced using Daresbury Laboratory's Nuclear Sctructure Facility by the reactions [28]Si([93]Nb) and [28]Si([54]Fe) respectively, both at 150 MeV. Further anisotropy measurements on [72.74m,75]Br at lower temperature have been used to extend previous data. The magnetic moment of [72]Br has been limited to be within the range 0.54mu[N]
Critical temperature for shape transition in hot nuclei within covariant density functional theory
NASA Astrophysics Data System (ADS)
Zhang, W.; Niu, Y. F.
2018-05-01
Prompted by the simple proportional relation between critical temperature for pairing transition and pairing gap at zero temperature, we investigate the relation between critical temperature for shape transition and ground-state deformation by taking even-even Cm-304286 isotopes as examples. The finite-temperature axially deformed covariant density functional theory with BCS pairing correlation is used. Since the Cm isotopes are the newly proposed nuclei with octupole correlations, we studied in detail the free energy surface, the Nilsson single-particle (s.p.) levels, and the components of s.p. levels near the Fermi level in 292Cm. Through this study, the formation of octupole equilibrium is understood by the contribution coming from the octupole driving pairs with Ω [N ,nz,ml] and Ω [N +1 ,nz±3 ,ml] for single-particle levels near the Fermi surfaces as it provides a good manifestation of the octupole correlation. Furthermore, the systematics of deformations, pairing gaps, and the specific heat as functions of temperature for even-even Cm-304286 isotopes are discussed. Similar to the relation between the critical pairing transition temperature and the pairing gap at zero temperature Tc=0.6 Δ (0 ) , a proportional relation between the critical shape transition temperature and the deformation at zero temperature Tc=6.6 β (0 ) is found for both octupole shape transition and quadrupole shape transition for the isotopes considered.
Controls on the Climates of Tidally Locked Terrestrial Planets
NASA Astrophysics Data System (ADS)
Yang, J.; Cowan, N. B.; Abbot, D. S.
2013-12-01
Earth-size planets in the habitable zone of M-dwarf stars may be very common. Due to strong tidal forces, these planets in circulate orbits are expected to be tidally locked, with one hemisphere experiencing perpetual day and the other permanent night. Previous studies on the climates of tidally locked planets were primarily based on complex 3D general circulation models (GCMs). The central question to be answered in this work is: what is the minimum necessary physics needed to understand the climates simulated by GCMs? A two-column model, primarily based on the weak temperature gradient (WTG) approximation (Sobel et al. 2001) and the fixed anvil temperature (FAT) hypothesis (Hartmann and Larson 2002) for the tropical climate of Earth, is developed for understanding the climates of tidally locked planets. This highly idealized model well reproduces fundamental features of the climates obtained in complicated GCMs (Yang et al. 2013), including planetary albedo, longwave cloud forcing, outgoing longwave radiation (OLR), and atmospheric energy transport. This suggests that the WTG approximation and the FAT hypothesis may be good approximations for tidally locked habitable planets, which provides strong constraints on the large-scale circulations, diabatic processes, and cloud behaviour on these planets. Both the simple model and the GCMs predict that (i) convection and planetary albedo on the dayside increase as stellar flux is increased; (ii) longwave cloud radiative forcing increases as stellar flux is increased, due to the cloud top temperature remains nearly constant as the climate changes (FAT hypothesis); (iii) for planets at the inner regions of the habitable zone, the dayside--nightside OLR contrast becomes very weak or even reverses, due to the strong longwave absorption by water vapor and clouds on the dayside; (iv) the dayside--to--nightside atmospheric energy transport (AET) increases as stellar flux is increased, and decreases as oceanic energy transport (OET) is included, although the compensation between AET and OET is incomplete. To summarize, we are able to construct a realistic low-order model for the climate of tidally locked terrestrial planets, including the cloud behavior, using only the two constraints. This bodes well for the interpretation of complex GCMs and future observations of such planets using, for example, the James Webb Space Telescope. Cited papers: [1]. Sobel, A. H., J. Nilsson and L. M. Polvani: The weak temperature gradient approximation and balanced tropical moisture waves, J. Atmos. Sci., 58, 3650-65, 2001. [2]. Hartmann, D. L. and K. Larson, An important constraint on tropical cloud-climate feedback, Geophys. Res. Lett., 29, 1951-54, 2002. [3]. Yang, J., N. B. Cowan and D. S. Abbot: Stabilizing cloud feedback dramatically expands the habitable zone of tidally locked planets, ApJ. Lett., 771, L45, 2013.
Neuropeptides and nitric oxide synthase in the gill and the air-breathing organs of fishes.
Zaccone, Giacomo; Mauceri, Angela; Fasulo, Salvatore
2006-05-01
Anatomical and histochemical studies have demonstrated that the bulk of autonomic neurotransmission in fish gill is attributed to cholinergic and adrenergic mechanisms (Nilsson. 1984. In: Hoar WS, Randall DJ, editors. Fish physiology, Vol. XA. Orlando: Academic Press. p 185-227; Donald. 1998. In: Evans DH, editor. The physiology of fishes, 2nd edition. Boca Raton: CRC Press. p 407-439). In many tissues, blockade of adrenergic and cholinergic transmission results in residual responses to nerve stimulation, which are termed NonAdrenergic, NonCholinergic (NANC). The discovery of nitric oxide (NO) has provided a basis for explaining many examples of NANC transmissions with accumulated physiological and pharmacological data indicating its function as a primary NANC transmitter. Little is known about the NANC neurotransmission, and studies on neuropeptides and NOS (Nitric Oxide Synthase) are very fragmentary in the gill and the air-breathing organs of fishes. Knowledge of the distribution of nerves and effects of perfusing agonists may help to understand the mechanisms of perfusion regulation in the gill (Olson. 2002. J Exp Zool 293:214-231). Air breathing as a mechanism for acquiring oxygen has evolved independently in several groups of fishes, necessitating modifications of the organs responsible for the exchange of gases. Aquatic hypoxia in freshwaters has been probably the more important selective force in the evolution of air breathing in vertebrates. Fishes respire with gills that are complex structures with many different effectors and potential control systems. Autonomic innervation of the gill has received considerable attention. An excellent review on branchial innervation includes Sundin and Nilsson's (2002. J Exp Zool 293:232-248) with an emphasis on the anatomy and basic functioning of afferent and efferent fibers of the branchial nerves. The chapters by Evans (2002. J Exp Zool 293:336-347) and Olson (2002) provide new challenges about a variety of neurocrine, endocrine, paracrine and autocrine signals that modulate gill perfusion and ionic transport. The development of the immunohistochemical techniques has led to a new phase of experimentation and to information mainly related to gills rather than air-breathing organs of fishes. During the last few years, identification of new molecules as autonomic neurotransmitters, monoamines and NO, and of their multiple roles as cotransmitters, has reshaped our knowledge of the mechanisms of autonomic regulation of various functions in the organs of teleosts (Donald, '98).NO acts as neurotransmitter and is widely distributed in the nerves and the neuroepithelial cells of the gill, the nerves of visceral muscles of the lung of polypterids, the vascular endothelial cells in the air sac of Heteropneustes fossilis and the respiratory epithelium in the swimbladder of the catfish Pangasius hypophthalmus. In addition, 5-HT, enkephalins and some neuropeptides, such as VIP and PACAP, seem to be NANC transmitter candidates in the fish gill and polypterid lung. The origin and function of NANC nerves in the lung of air-breathing fishes await investigation. Several mechanisms have developed in the Vertebrates to control the flow of blood to respiratory organs. These mechanisms include a local production of vasoactive substances, a release of endocrine hormones into the circulation and neuronal mechanisms. Air breathers may be expected to have different control mechanisms compared with fully aquatic fishes. Therefore, we need to know the distribution and function of autonomic nerves in the air-breathing organs of the fishes.
Skyrme random-phase-approximation description of lowest Kπ=2γ+ states in axially deformed nuclei
NASA Astrophysics Data System (ADS)
Nesterenko, V. O.; Kartavenko, V. G.; Kleinig, W.; Kvasil, J.; Repko, A.; Jolos, R. V.; Reinhard, P.-G.
2016-03-01
The lowest quadrupole γ -vibrational Kπ=2+ states in axially deformed rare-earth (Nd, Sm, Gd, Dy, Er, Yb, Hf, W) and actinide (U) nuclei are systematically investigated within the separable random-phase-approximation (SRPA) based on the Skyrme functional. The energies Eγ and reduced transition probabilities B (E 2 ) of 2γ+ states are calculated with the Skyrme forces SV-bas and SkM*. The energies of two-quasiparticle configurations forming the SRPA basis are corrected by using the pairing blocking effect. This results in a systematic downshift of Eγ by 0.3-0.5 MeV and thus in a better agreement with the experiment, especially in Sm, Gd, Dy, Hf, and W regions. For other isotopic chains, a noticeable overestimation of Eγ and too weak collectivity of 2γ+ states still persist. It is shown that domains of nuclei with low and high 2γ+ collectivity are related to the structure of the lowest two-quasiparticle states and conservation of the Nilsson selection rules. The description of 2γ+ states with SV-bas and SkM* is similar in light rare-earth nuclei but deviates in heavier nuclei. However SV-bas much better reproduces the quadrupole deformation and energy of the isoscalar giant quadrupole resonance. The accuracy of SRPA is justified by comparison with exact RPA. The calculations suggest that a further development of the self-consistent calculation schemes is needed for a systematic satisfactory description of the 2γ+ states.
NASA Astrophysics Data System (ADS)
Afanasjev, A. V.; Abusara, H.
2018-02-01
The nodal structure of the density distributions of the single-particle states occupied in rod-shaped, hyper- and megadeformed structures of nonrotating and rotating N ˜Z nuclei has been investigated in detail. The single-particle states with the Nilsson quantum numbers of the [N N 0 ]1 /2 (with N from 0 to 5) and [N ,N -1 ,1 ]Ω (with N from 1 to 3 and Ω =1 /2 , 3/2) types are considered. These states are building blocks of extremely deformed shapes in the nuclei with mass numbers A ≤50 . Because of (near) axial symmetry and large elongation of such structures, the wave functions of the single-particle states occupied are dominated by a single basis state in cylindrical basis. This basis state defines the nodal structure of the single-particle density distribution. The nodal structure of the single-particle density distributions allows us to understand in a relatively simple way the necessary conditions for α clusterization and the suppression of the α clusterization with the increase of mass number. It also explains in a natural way the coexistence of ellipsoidal mean-field-type structures and nuclear molecules at similar excitation energies and the features of particle-hole excitations connecting these two types of the structures. Our analysis of the nodal structure of the single-particle density distributions does not support the existence of quantum liquid phase for the deformations and nuclei under study.
NASA Astrophysics Data System (ADS)
Hazreek, Z. A. M.; Rosli, S.; Fauziah, A.; Wijeyesekera, D. C.; Ashraf, M. I. M.; Faizal, T. B. M.; Kamarudin, A. F.; Rais, Y.; Dan, M. F. Md; Azhar, A. T. S.; Hafiz, Z. M.
2018-04-01
The efficiency of civil engineering structure require comprehensive geotechnical data obtained from site investigation. In the past, conventional site investigation was heavily related to drilling techniques thus suffer from several limitations such as time consuming, expensive and limited data collection. Consequently, this study presents determination of soil moisture content using laboratory experimental and field electrical resistivity values (ERV). Field and laboratory electrical resistivity (ER) test were performed using ABEM SAS4000 and Nilsson400 soil resistance meter. Soil sample used for resistivity test was tested for characterization test specifically on particle size distribution and moisture content test according to BS1377 (1990). Field ER data was processed using RES2DINV software while laboratory ER data was analyzed using SPSS and Excel software. Correlation of ERV and moisture content shows some medium relationship due to its r = 0.506. Moreover, coefficient of determination, R2 analyzed has demonstrate that the statistical correlation obtain was very good due to its R2 value of 0.9382. In order to determine soil moisture content based on statistical correlation (w = 110.68ρ-0.347), correction factor, C was established through laboratory and field ERV given as 19.27. Finally, this study has shown that soil basic geotechnical properties with particular reference to water content was applicably determined using integration of laboratory and field ERV data analysis thus able to compliment conventional approach due to its economic, fast and wider data coverage.
The Onsala Twin Telescope Project
NASA Astrophysics Data System (ADS)
Haas, R.
2013-08-01
This paper described the Onsala Twin Telescope project. The project aims at the construction of two new radio telescopes at the Onsala Space Observatory, following the VLBI2010 concept. The project starts in 2013 and is expected to be finalized within 4 years. Z% O. Rydbeck. Chalmers Tekniska Högskola, Göteborg, ISBN 91-7032-621-5, 407-823, 1991. B. Petrachenko, A. Niell, D. Behrend, B. Corey, J. Böhm, P. Charlot, A. Collioud, J. Gipson, R. Haas, Th. Hobiger, Y. Koyama, D. MacMillan, Z. Malkin, T. Nilsson, A. Pany, G. Tuccari, A. Whitney, and J. Wresnik. Design Aspects of the VLBI2010 System. NASA/TM-2009-214180, 58 pp., 2009. R. Haas, G. Elgered, J. Löfgren, T. Ning, and H.-G. Scherneck. Onsala Space Observatory - IVS Network Station. In K. D. Baver and D. Behrend, editors, International VLBI Service for Geodesy and Astrometry 2011 Annual Report, NASA/TP-2012-217505, 88-91, 2012. H.-G. Scherneck, G. Elgered, J. M. Johansson, and B. O. Rönnäng. Phys. Chem. Earth, Vol. 23, No. 7-8, 811-823, 1998. A. R. Whitney. Ph.D. thesis, Dept. of Electrical engineering, MIT Cambridge, MA., 1974. B. A. Harper, J. D. Kepert, and J. D. Ginger. Guidelines for converting between various wind averaging periods in tropical cyclone conditions. WMO/TD-No. 1555, 64 pp., 2010 (available at \\url{http://www.wmo.int/pages/prog/www/tcp/documents/WMO_TD_1555_en.pdf})
In-Beam Studies of High-Spin States in Mercury -183 and MERCURY-181
NASA Astrophysics Data System (ADS)
Shi, Detang
The high-spin states of ^{183 }Hg were studied by using the reaction ^{155}Gd(^{32}S, 4n)^{183}Hg at a beam energy of 160 MeV with the tandem-linac accelerator system and the multi-element gamma-ray detection array at Florida State University. Two new bands, consisting of stretched E2 transitions and connected by M1 inter-band transitions, were identified in ^{183}Hg. Several new levels were added to the previously known bands at higher spin. The spins and parities to the levels in ^{183}Hg were determined from the analysis of their DCO ratios and B(M1)/B(E2) ratios. While the two pairs of previously known bands in ^ {183}Hg were proposed to 7/2^ -[514] and 9/2^+ [624], the two new bands are assigned as the 1/2^-[521] ground state configuration based upon the systematics of Nilsson orbitals in this mass region. The 354-keV transition previously was considered to be an E2 transition and assigned as the only transition from a band which is built on an oblate deformed i_{13/2} isomeric state. However, our DCO ratio analysis indicates that the 354-keV gamma-ray is an M1 transition. This changes the decay pattern of the 9/2^+[624 ] prolate structure in ^ {183}Hg, so it is seen to feed only into the i_{13/2} isomer band head. Our knowledge of the mercury nuclei far from stability was then extended through an in-beam study of the reaction ^{144}Sm(^{40 }Ar, 3n)^{181}Hg by using the Fragment Mass Analyzer (FMA) and the ten-Compton-suppressed -germanium-detector system at Argonne National Laboratory. Band structures to high-spin states are established for the first time in ^{181}Hg in the present experiment. The observed level structure of ^{181}Hg is midway between those in ^{185}Hg and in ^{183}Hg. The experimental results are analyzed in the framework of the cranking shell model (CSM). Alternative theoretical explanations are also presented and discussed. Systematics of neighboring mercury isotopes and N = 103 isotones is analyzed.
Decelerations of Parachute Opening Shock in Skydivers.
Gladh, Kristofer; Lo Martire, Riccardo; Äng, Björn O; Lindholm, Peter; Nilsson, Jenny; Westman, Anton
2017-02-01
High prevalence of neck pain among skydivers is related to parachute opening shock (POS) exposure, but few investigations of POS deceleration have been made. Existing data incorporate equipment movements, limiting its representability of skydiver deceleration. This study aims to describe POS decelerations and compare human- with equipment-attached data. Wearing two triaxial accelerometers placed on the skydiver (neck-sensor) and equipment (rig-sensor), 20 participants made 2 skydives each. Due to technical issues, data from 35 skydives made by 19 participants were collected. Missing data were replaced using data substitution techniques. Acceleration axes were defined as posterior to anterior (+ax), lateral right (+ay), and caudal to cranial (+az). Deceleration magnitude [amax (G)] and jerks (G · s-1) during POS were analyzed. Two distinct phases related to skydiver positioning and acceleration direction were observed: 1) the x-phase (characterized by -ax, rotating the skydiver); and 2) the z-phase (characterized by +az, skydiver vertically oriented). Compared to the rig-sensor, the neck-sensor yielded lower amax (3.16 G vs. 6.96 G) and jerk (56.3 G · s-1 vs. 149.0 G · s-1) during the x-phase, and lower jerk (27.7 G · s-1 vs. 54.5 G · s-1) during the z-phase. The identified phases during POS should be considered in future neck pain preventive strategies. Accelerometer data differed, suggesting human-placed accelerometry to be more valid for measuring human acceleration.Gladh K, Lo Martire R, Äng BO, Lindholm P, Nilsson J, Westman A. Decelerations of parachute opening shock in skydivers. Aerosp Med Hum Perform. 2017; 88(2):121-127.
OVERTURNING THE CASE FOR GRAVITATIONAL POWERING IN THE PROTOTYPICAL COOLING LYα NEBULA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Moire K. M.; Fynbo, Johan P. U.; Momcheva, Ivelina
The Nilsson et al. Lyα nebula has often been cited as the most plausible example of an Lyα nebula powered by gravitational cooling. In this paper, we bring together new data from the Hubble Space Telescope and the Herschel Space Observatory as well as comparisons to recent theoretical simulations in order to revisit the questions of the local environment and most likely power source for the Lyα nebula. In contrast to previous results, we find that this Lyα nebula is associated with six nearby galaxies and an obscured AGN that is offset by ∼4″ ≈ 30 kpc from the Lyαmore » peak. The local region is overdense relative to the field, by a factor of ∼10, and at low surface brightness levels the Lyα emission appears to encircle the position of the obscured AGN, highly suggestive of a physical association. At the same time, we confirm that there is no compact continuum source located within ∼2–3″ ≈ 15–23 kpc of the Lyα peak. Since the latest cold accretion simulations predict that the brightest Lyα emission will be coincident with a central growing galaxy, we conclude that this is actually a strong argument against, rather than for, the idea that the nebula is gravitationally powered. While we may be seeing gas within cosmic filaments, this gas is primarily being lit up, not by gravitational energy, but due to illumination from a nearby buried AGN.« less
NASA Astrophysics Data System (ADS)
Russell, Fiona; Chiverrell, Richard; Boyle, John
2016-04-01
Monitoring programmes have shown increases in concentrations of dissolved organic matter (DOM) in the surface waters of northern and central Europe (Monteith et al. 2007), and negative impacts of the browning of river waters have been reported for fish populations (Jonsson et al. 2012; Ranaker et al. 2012) and for ecosystem services such as water treatment (Tuvendal and Elmqvist 2011). Still the exact causes of the recent browning remain uncertain, the main contenders being climate change (Evans et al. 2005) and reduced ionic strength in surface water resulting from declines in anthropogenic sulphur and sea salt deposition (Monteith et al. 2007). There is a need to better understand the pattern, drivers and trajectory of these increases in DOC and POC in both recent and longer-term (Holocene) contexts to improve the understanding of carbon cycling within lakes and their catchments. In Britain there are some ideal sites for testing whether these trends are preserved and developing methods for reconstructing organic fluxes from lake sedimentary archives. There is a suite of lakes distributed across the country, the UK Acid Waters Monitoring Network (UKAWMN) sites, which have been monitored monthly for dissolved organic carbon and other aqueous species since 1988. These 12 lakes have well studied recent and in some case whole Holocene sediment records. Here four of those lakes (Grannoch, Chon, Scoat Tarn and Cwm Mynach) are revisited, with sampling focused on the sediment-water interface and very recent sediments (approx.150 years). At Scoat Tarn (approx. 1000 years) and Llyn Mynach (11.5k years) longer records have been obtained to assess equivalent patterns through the Holocene. Analyses of the gravity cores have focused on measuring and characterising the organic content for comparison with recorded surface water DOC measurements (UKAWMN). Data from pyrolysis measurements (TGA/DSC) in an N atmosphere show that the mass loss between 330-415°C correlates well with observed trends in DOC of surface waters. Analysis of these cores and various calibration materials (e.g. peat) suggests plant tissue undergoes pyrolysis at lower temperatures, and though humic substances can be generated in the lake this thermal phase may be a proxy record for catchment derived DOC. NIR and FTIR spectrometry data further characterise this organic phase, identify spectral structures that also correlate with monitored DOC. Together the pyrolysis, NIR, FTIR and XRF geochemistry (e.g. Fe/Mn, Si/Al ratios) data show also information on lake productivity, biogenic silica and mass accumulation rates. To explore the longer timescale equivalent proxy records have been trialled at Llyn Cwm Mynach and show possible phases of elevated DOC fluxes from catchment soils during the Holocene. References Evans C.D., Monteith D.T. and Cooper D.M. 2005. Long-term increases in surface water dissolved organic carbon: Observations, possible causes and environmental impacts. Environ. Pollut. 137: 55-71. Jonsson M., Ranaker L., Nilsson P.A. and Bronmark C. 2012. Prey-type-dependent foraging of young-of-the-year fish in turbid and humic environments. Ecol. Freshw. Fish 21: 461-468. Monteith D.T., Stoddard J.L., Evans C.D., de Wit H.A., Forsius M., Hogasen T., Wilander A., Skjelkvale B.L., Jeffries D.S., Vuorenmaa J., Keller B., Kopacek J. and Vesely J. 2007. Dissolved organic carbon trends resulting from changes in atmospheric deposition chemistry. Nature 450: 537-U539. Ranaker L., Jonsson M., Nilsson P.A. and Bronmark C. 2012. Effects of brown and turbid water on piscivore-prey fish interactions along a visibility gradient. Freshwater Biol. 57: 1761-1768. Tuvendal M. and Elmqvist T. 2011. Ecosystem Services Linking Social and Ecological Systems: River Brownification and the Response of Downstream Stakeholders. Ecol. Soc. 16
Traffic-Related Air Pollution and Dementia Incidence in Northern Sweden: A Longitudinal Study
Oudin, Anna; Forsberg, Bertil; Adolfsson, Annelie Nordin; Lind, Nina; Modig, Lars; Nordin, Maria; Nordin, Steven; Adolfsson, Rolf; Nilsson, Lars-Göran
2015-01-01
Background Exposure to ambient air pollution is suspected to cause cognitive effects, but a prospective cohort is needed to study exposure to air pollution at the home address and the incidence of dementia. Objectives We aimed to assess the association between long-term exposure to traffic-related air pollution and dementia incidence in a major city in northern Sweden. Methods Data on dementia incidence over a 15-year period were obtained from the longitudinal Betula study. Traffic air pollution exposure was assessed using a land-use regression model with a spatial resolution of 50 m × 50 m. Annual mean nitrogen oxide levels at the residential address of the participants at baseline (the start of follow-up) were used as markers for long-term exposure to air pollution. Results Out of 1,806 participants at baseline, 191 were diagnosed with Alzheimer’s disease during follow-up, and 111 were diagnosed with vascular dementia. Participants in the group with the highest exposure were more likely than those in the group with the lowest exposure to be diagnosed with dementia (Alzheimer’s disease or vascular dementia), with a hazard ratio (HR) of 1.43 (95% CI: 0.998, 2.05 for the highest vs. the lowest quartile). The estimates were similar for Alzheimer’s disease (HR 1.38) and vascular dementia (HR 1.47). The HR for dementia associated with the third quartile versus the lowest quartile was 1.48 (95% CI: 1.03, 2.11). A subanalysis that excluded a younger sample that had been retested after only 5 years of follow-up suggested stronger associations with exposure than were present in the full cohort (HR = 1.71; 95% CI: 1.08, 2.73 for the highest vs. the lowest quartile). Conclusions If the associations we observed are causal, then air pollution from traffic might be an important risk factor for vascular dementia and Alzheimer’s disease. Citation Oudin A, Forsberg B, Nordin Adolfsson A, Lind N, Modig L, Nordin M, Nordin S, Adolfsson R, Nilsson LG. 2016. Traffic-related air pollution and dementia incidence in northern Sweden: a longitudinal study. Environ Health Perspect 124:306–312; http://dx.doi.org/10.1289/ehp.1408322 PMID:26305859
Flying with the winds: differential migration strategies in relation to winds in moth and songbirds.
Åkesson, Susanne
2016-01-01
The gamma Y moth selects to migrate in stronger winds compared to songbirds, enabling fast transport to distant breeding sites, but a lower precision in orientation as the moth allows itself to be drifted by the winds. Photo: Ian Woiwod. In Focus: Chapman, J.R., Nilsson, C., Lim, K.S., Bäckman, J., Reynolds, D.R. & Alerstam, T. (2015) Adaptive strategies in nocturnally migrating insects and songbirds: contrasting responses to winds. Journal of Animal Ecology, In press Insects and songbirds regularly migrate long distances across continents and seas. During these nocturnal migrations, they are exposed to a fluid medium, the air, in which they transport themselves by flight at similar speeds as the winds may carry them. It is crucial for an animal to select the most favourable flight conditions relative to winds to minimize the distance flown on a given amount of fuel and to avoid hazardous situations. Chapman et al. (2015a) showed contrasting strategies in how moths initiate migration predominantly under tailwind conditions, allowing themselves to drift to a larger extent and gain ground speed as compared to nocturnal songbird migrants. The songbirds use more variable flight strategies in relation to winds, where they sometimes allow themselves to drift, and at other occasions compensate for wind drift. This study shows how insects and birds have differentially adapted to migration in relation to winds, which is strongly dependent on their own flight capability, with higher flexibility enabling fine-tuned responses to keep a time programme and reach a goal in songbirds compared to in insects. © 2015 The Author. Journal of Animal Ecology © 2015 British Ecological Society.
Final repository for Denmark's low- and intermediate level radioactive waste
NASA Astrophysics Data System (ADS)
Nilsson, B.; Gravesen, P.; Petersen, S. S.; Binderup, M.
2012-12-01
Bertel Nilsson*, Peter Gravesen, Stig A. Schack Petersen, Merete Binderup Geological Survey of Denmark and Greenland (GEUS), Øster Voldgade 10, 1350 Copenhagen, Denmark, * email address bn@geus.dk The Danish Parliament decided in 2003 that the temporal disposal of the low- and intermediate level radioactive waste at the nuclear facilities at Risø should find another location for a final repository. The Danish radioactive waste must be stored on Danish land territory (exclusive Greenland) and must hold the entire existing radioactive waste, consisting of the waste from the decommissioning of the nuclear facilities at Risø, and the radioactive waste produced in Denmark from hospitals, universities and industry. The radioactive waste is estimated to a total amount of up to 10,000 m3. The Geological Survey of Denmark and Greenland, GEUS, is responsible for the geological studies of suitable areas for the repository. The task has been to locate and recognize non-fractured Quaternary and Tertiary clays or Precambrian bedrocks with low permeability which can isolate the radioactive waste from the surroundings the coming more than 300 years. Twenty two potential areas have been located and sequential reduced to the most favorable two to three locations taking into consideration geology, hydrogeology, nature protection and climate change conditions. Further detailed environmental and geology investigations will be undertaken at the two to three potential localities in 2013 to 2015. This study together with a study of safe transport of the radioactive waste and an investigation of appropriate repository concepts in relation to geology and safety analyses will constitute the basis upon which the final decision by the Danish Parliament on repository concept and repository location. The final repository is planned to be established and in operation at the earliest 2020.
Hey Teacher, Don't Leave Them Kids Alone: Action Is Better for Memory than Reading.
Hainselin, Mathieu; Picard, Laurence; Manolli, Patrick; Vankerkore-Candas, Sophie; Bourdin, Béatrice
2017-01-01
There is no consensus on how the enactment effect (EE), although it is robust, enhances memory. Researchers are currently investigating the cognitive processes underlying this effect, mostly during adulthood; the link between EE and crucial function identified in adulthood such as episodic memory and binding process remains elusive. Therefore, this study aims to verify the existence of EE in 6-10 years old and assess cognitive functions potentially linked to this effect in order to shed light on the mechanisms underlying the EE during childhood. Thirty-five children (15 second graders and 20 fifth graders) were included in this study. They encoded 24 action phrases from a protocol adapted from Hainselin et al. (2014). Encoding occurred under four conditions: Verbal Task, Listening Task, Experimenter-Performed Task, and Subject-Performed Task. Memory performance was assessed for free and cued recall, as well as source memory abilities. ANOVAS were conducted to explore age-related effects on the different scores according to encoding conditions. Correlations between EE scores (Subject-Performed Task/Listening Task) and binding memory scores (short-term binding and episodic memory) were run. Both groups benefited from EE. However, in both groups, performance did not significantly differ between Subject-Performed Task and Experimenter-Performed Task. A positive correlation was found between EE and episodic memory score for second graders and a moderate negative correlation was found between EE and binding scores for fifth graders. Our results confirm the existence of EE in 6 and 10 year olds, but they do not support the multimodal theory (Engelkamp, 2001) or the "glue" theory (Kormi-Nouri and Nilsson, 2001). This suggests instead that episodic memory might not underlie EE during early childhood.
Mikołajczyk-Bator, Katarzyna; Pawlak, Sylwia
2016-01-01
Increased consumption of fruits and vegetables significantly reduces the risk of cardio-vascular disease. This beneficial effect on the human organism is ascribed to the antioxidant compounds these foods contain. Unfortunately, many products, particularly vegetables, need to be subjected to thermal processing before consumption. The aim of this study was to determine the effect of such thermal treatment on the antioxidant capacity and pigment contents in separated fractions of violet pigments (betacyanins) and yellow pigments (betaxanthins and betacyanins). Fractions of violet and yellow pigments were obtained by separation of betalain pigments from fresh roots of 3 red beet cultivars using column chromatography and solid phase extraction (SPE). The betalain pigment content was determined in all samples before and after thermal treatment (90°C/30 min) by spectrophotometry, according to Nilsson's method [1970] and antioxidant capacity was assessed based on ABTS. Betalain pigments in the separated fractions were identified using HPLC-MS. After thermal treatment of betacyanin fractions a slight, but statistically significant degradation of pigments was observed, while the antioxidant capacity of these fractions did not change markedly. Losses of betacyanin content amounted to 13-15% depending on the cultivar, while losses of antioxidant capacity were approx. 7%. HPLC/MS analyses showed that before heating, betanin was the dominant pigment in the betacyanin fraction, while after heating it was additionally 15-decarboxy-betanin. Isolated fractions of yellow pigments in red beets are three times less heat-resistant than betacyanin fractions. At losses of yellow pigment contents in the course of thermal treatment reaching 47%, antioxidant capacity did not change markedly (a decrease by approx. 5%). In the yellow pigment fractions neobetanin was the dominant peak in the HPLC chromatogram, while vulgaxanthin was found in a much smaller area, whereas after heating additionally 2-decarboxy-2,3-dehydro-neobetanin was detected. Both groups of betalain pigments (betacyanins and betaxanthins) exhibit antioxidant capacity before and after heating. Violet beatacyjanins are 3 times more stable when heated than yellow betaxanthins.
Cold Ion Escape from the Martian Ionosphere
NASA Astrophysics Data System (ADS)
Fränz, Markus; Dubinin, Eduard; Andrews, David; Nilsson, Hans; Fedorov, Andrei
2014-05-01
It has always been challenging to observe the flux of ions with energies of less than 10eV escaping from the planetary ionospheres. We here report on new measurements of the ionospheric ion flows at Mars by the ASPERA-3 experiment on board Mars Express. The ion sensor IMA of this experiment has in principle a low-energy cut-off at 10eV but in negative spacecraft charging cold ions are lifted into the range of measurement but the field of view is restricted to about 4x360 deg. In a recent paper Nilsson et al. (Earth Planets Space, 64, 135, 2012) tried to use the method of long-time averaged distribution functions to overcome these constraints. In this paper we first use the same method to show that we get results consistent with this when using ASPERA-3 observations only. But then we can show that these results are inconsistent with observations of the local plasma density by the MARSIS radar instrument on board Mars Express. We demonstrate that the method of averaged distribution function can deliver the mean flow speed of the plasma but the low-energy cut-off does usually not allow to reconstruct the density. We then combine measurements of the cold ion flow speed with the plasma density observations of MARSIS to derive the cold ion flux. In an analysis of the combined nightside datasets we show that the main escape channel is along the shadow boundary on the tailside of Mars. At a distance of about 0.5 Martian radii the flux settles at a constant value which indicates that about half of the transterminator ionospheric flow escapes from the planet. Possible mechanism to generate this flux can be the ionospheric pressure gradient between dayside and nightside or momentum transfer from the solar wind via the induced magnetic field since the flow velocity is in the Alfvénic regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nilsson, Mikael
This 3-year project was a collaboration between University of California Irvine (UC Irvine), Pacific Northwest National Laboratory (PNNL), Idaho National Laboratory (INL), Argonne National Laboratory (ANL) and with an international collaborator at ForschungZentrum Jülich (FZJ). The project was led from UC Irvine under the direction of Profs. Mikael Nilsson and Hung Nguyen. The leads at PNNL, INL, ANL and FZJ were Dr. Liem Dang, Dr. Peter Zalupski, Dr. Nathaniel Hoyt and Dr. Giuseppe Modolo, respectively. Involved in this project at UC Irvine were three full time PhD graduate students, Tro Babikian, Ted Yoo, and Quynh Vo, and one MS student,more » Alba Font Bosch. The overall objective of this project was to study how the kinetics and thermodynamics of metal ion extraction can be described by molecular dynamic (MD) simulations and how the simulations can be validated by experimental data. Furthermore, the project includes the applied separation by testing the extraction systems in a single stage annular centrifugal contactor and coupling the experimental data with computational fluid dynamic (CFD) simulations. Specific objectives of the proposed research were: Study and establish a rigorous connection between MD simulations based on polarizable force fields and extraction thermodynamic and kinetic data. Compare and validate CFD simulations of extraction processes for An/Ln separation using different sizes (and types) of annular centrifugal contactors. Provide a theoretical/simulation and experimental base for scale-up of batch-wise extraction to continuous contactors. We approached objective 1 and 2 in parallel. For objective 1 we started by studying a well established extraction system with a relatively simple extraction mechanism, namely tributyl phosphate. What we found was that well optimized simulations can inform experiments and new information on TBP behavior was presented in this project, as well be discussed below. The second objective proved a larger challenge and most of the efforts were devoted to experimental studies.« less
From Fertilization to Birth: Representing Development in High School Biology Textbooks
NASA Astrophysics Data System (ADS)
Wellner, Karen L.
Biology textbooks are everybody's business. In accepting the view that texts are created with specific social goals in mind, I examined 127 twentieth-century high school biology textbooks for representations of animal development. Paragraphs and visual representations were coded and placed in one of four scientific literacy categories: descriptive, investigative, nature of science, and human embryos, technology, and society (HETS). I then interpreted how embryos and fetuses have been socially constructed for students. I also examined the use of Haeckel's embryo drawings to support recapitulation and evolutionary theory. Textbooks revealed that publication of Haeckel's drawings was influenced by evolutionists and anti-evolutionists in the 1930s, 1960s, and the 1990s. Haeckel's embryos continue to persist in textbooks because they "safely" illustrate similarities between embryos and are rarely discussed in enough detail to understand comparative embryology's role in the support of evolution. Certain events coincided with changes in how embryos were presented: (a) the growth of the American Medical Association (AMA) and an increase in birth rates (1950s); (b) the Biological Sciences Curriculum Study (BSCS) and public acceptance of birth control methods (1960s); (c) Roe vs. Wade (1973); (d) in vitro fertilization and Lennart Nilsson's photographs (1970s); (e) prenatal technology and fetocentrism (1980s); and (f) genetic engineering and Science-Technology-Society (STS) curriculum (1980s and 1990s). By the end of the twentieth century, changing conceptions, research practices, and technologies all combined to transform the nature of biological development. Human embryos went from a highly descriptive, static, and private object to that of sometimes contentious public figure. I contend that an ignored source for helping move embryos into the public realm is schoolbooks. Throughout the 1900s, authors and publishers accomplished this by placing biology textbook embryos and fetuses in several different contexts--biological, technological, experimental, moral, social, and legal.
NASA Astrophysics Data System (ADS)
Andersson, Jafet; Arheimer, Berit
2017-04-01
This poster will give three examples of popular water-management methods, which we discovered had very little effect in practice because they were applied on irrelevant scales. They all use small scale solutions to large scale problems, and did not provide expected results due to neglecting the magnitude of components in the large-scale water budget. 1) Flood prevention: ponds are considered to be able to buffer water discharge in catchments and was suggested as a measure to reduce the 20-years return floods in an exposed areas in Sweden. However, when experimenting with several ponds allocation and size in a computational model, we found out that ponds had to cover 5-10% of the catchment to convert the 20-yr flood into an average flood. Most effective was to allocate one single water body at the catchment outlet, but this would correspond to 95 km2 which is by far too big to be called a pond. 2) Water Harvesting: At small-scale it is designed to increase water availability and agricultural productivity in smallholder agriculture. On field scale, we show that water harvesting decreases runoff by 55% on average in 62 investigated field-scale trials of drainage area ≤ 1ha in sub-Saharan Africa (Andersson et al., 2011). When upscaling, to river basin scale in South Africa (8-1.8×106 km2), using a scenario approach and the SWAT hydrological model we found that all smallholder fields would not significantly alter downstream river discharge (<0.3% change on average with some effect on low flows). It shows some potential to increase crop yields but only in some water-scarce areas and conditioned on sufficient fertilizers being available (Andersson et al., 2013). 3) Eutrophication control: Constructed wetlands are supposed to remove nutrients from surface water and therefore 1,574 wetlands were constructed in southern Sweden during the years 1996-2006 as a measure to reduce coastal eutrophication. From our detailed calculations, the gross removal was estimated at 140 tonnes Nitrogen per year and 12 tonnes Phosphorus per year in these wetlands. However, this only reduced the load to the sea by 0.2% for nitrogen and 0.5% for phosphorus (Arheimer and Pers, 2016). The wetland area was minor compared to the total area and load (41 km2 vs. 164,000 km2). For the eventual effect in the coast, additional consideration must be taken to the coastal nutrient balance as inflow from the sea may effluent the effect, even in protected archipelagos and semi-enclosed bays (Arheimer et al, 2015). References: Andersson JCM, Zehnder AJB, Wehrli B, et al. (2013). Improving crop yield and water productivity …. Environmental Science & Technology, 47(9), pp. 4341-4348. http://dx.doi.org/10.1021/es304585p Andersson JCM, Zehnder AJB, Rockström J, Yang H (2011). Potential impacts of water harvesting…. Agricultural Water Management, 98(7), pp. 1113-1124, http://dx.doi.org/10.1016/j.agwat.2011.02.004 Arheimer, B., Nilsson, J. and Lindström, G. 2015. Experimenting with Coupled Hydro-Ecological Models ….. Water 7(7):3906-3924. doi:10.3390/w7073906 Arheimer, B. and Pers B.C. 2016. Lessons learned? …. Ecological Engineering (in press). doi:10.1016/j.ecoleng.2016.01.088
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zucca, J J; Walter, W R; Rodgers, A J
2008-11-19
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring andmore » seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags contemporary research. Current monitoring practice is to use relatively simple Earth models that generally afford analytical prediction of seismic observables (see Examples of Current Monitoring Practice below). Empirical relationships or corrections to predictions are often used to account for unmodeled phenomena, such as the generation of S-waves from explosions or the effect of 3-dimensional Earth structure on wave propagation. This approach produces fast and accurate predictions in areas where empirical observations are available. However, accuracy may diminish away from empirical data. Further, much of the physics is wrapped into an empirical relationship or correction, which limits the ability to fully understand the physical processes underlying the seismic observation. Every generation of seismology researchers works toward quantitative results, with leaders who are active at or near the forefront of what has been computationally possible. While recognizing that only a 3-dimensional model can capture the full physics of seismic wave generation and propagation in the Earth, computational seismology has, until recently, been limited to simplifying model parameterizations (e.g. 1D Earth models) that lead to efficient algorithms. What is different today is the fact that the largest and fastest machines are at last capable of evaluating the effects of generalized 3D Earth structure, at levels of detail that improve significantly over past efforts, with potentially wide application. Advances in numerical methods to compute travel times and complete seismograms for 3D models are enabling new ways to interpret available data. This includes algorithms such as the Fast Marching Method (Rawlison and Sambridge, 2004) for travel time calculations and full waveform methods such as the spectral element method (SEM; Komatitsch et al., 2002, Tromp et al., 2005), higher order Galerkin methods (Kaser and Dumbser, 2006; Dumbser and Kaser, 2006) and advances in more traditional Cartesian finite difference methods (e.g. Pitarka, 1999; Nilsson et al., 2007). The ability to compute seismic observables using a 3D model is only half of the challenge; models must be developed that accurately represent true Earth structure. Indeed, advances in seismic imaging have followed improvements in 3D computing capability (e.g. Tromp et al., 2005; Rawlinson and Urvoy, 2006). Advances in seismic imaging methods have been fueled in part by theoretical developments and the introduction of novel approaches for combining different seismological observables, both of which can increase the sensitivity of observations to Earth structure. Examples of such developments are finite-frequency sensitivity kernels for body-wave tomography (e.g. Marquering et al., 1998; Montelli et al., 2004) and joint inversion of receiver functions and surface wave group velocities (e.g. Julia et al., 2000).« less
Estimated cumulative sediment trapping in future hydropower reservoirs in Africa
NASA Astrophysics Data System (ADS)
Lucía, Ana; Berlekamp, Jürgen; Zarfl, Christiane
2017-04-01
Despite a rapid economic development in Sub-Saharan Africa, almost 70% of the human population in this area remain disconnected from electricity access (International Energy Agency 2011). Mitigating climate change and a search for renewable, "climate neutral" electricity resources are additional reasons why Africa will be one key centre for future hydropower dam building, with only 8% of the technically feasible hydropower potential actually exploited. About 300 major hydropower dams with a total capacity of 140 GW are currently under construction (11.4%) or planned (88.6%) (Zarfl et al. 2015). Despite the benefits of hydropower dams, fragmentation of the rivers changes the natural flow, temperature and sediment regime. This has consequences for a high number of people that directly depend on the primary sector linked to rivers and floodplains. But sediment trapping in the reservoir also affects dam operation and decreases its life span. Thus, the objective of this work is to quantify the dimension of sediment trapping by future hydropower dams in African river basins. Soil erosion is described with the universal soil loss equation (Wischmeier & Smith 1978) and combined with the connectivity index (Cavalli et al. 2013) to estimate the amount of eroded soil that reaches the fluvial network and finally ends up in the existing (Lehner et al. 2011) and future reservoirs (Zarfl et al. 2015) per year. Different scenarios assuming parameter values from the literature are developed to include model uncertainty. Estimations for existing dams will be compared with literature data to evaluate the applied estimation method and scenario assumptions. Based on estimations for the reservoir volume of the future dams we calculated the potential time-laps of the future reservoirs due to soil erosion and depending on their planned location. This approach could support sustainable decision making for the location of future hydropower dams. References Cavalli, M., Trevisani, S., Comiti, F., & Marchi, L. (2013). Geomorphometric assessment of spatial sediment connectivity in small Alpine catchments. Geomorphology, 188, 31-41. Lehner, B., Liermann, C. R., Revenga, C., Vörösmarty, C., Fekete, B., Crouzet, P., Döll, P., Endejan, M., Frenken, K., Magome, J., Nilsson, C., Robertson, J.C., Rödel, R., Sindorf , N., & Wisser, D. (2011). High-resolution mapping of the world's reservoirs and dams for sustainable river-flow management. Frontiers in Ecology and the Environment, 9(9), 494-502. Wischmeier, W. H. and D. D. Smith. (1978). Predicting rainfall erosion losses: guide to conservation planning. USDA, Agriculture Handbook 537. U.S. Government Printing Office, Washington, DC. Zarfl, C., Lumsdon, A. E., Berlekamp, J., Tydecks, L., & Tockner, K. (2015). A global boom in hydropower dam construction. Aquatic Sciences, 77(1), 161-170.
The effect of childhood bilingualism on episodic and semantic memory tasks.
Kormi-Nouri, Reza; Shojaei, Razie-Sadat; Moniri, Sadegheh; Gholami, Ali-Reza; Moradi, Ali-Reza; Akbari-Zardkhaneh, Saeed; Nilsson, Lars-Göran
2008-04-01
Kormi-Nouri, Moniri and Nilsson (2003) demonstrated that Swedish-Persian bilingual children recalled at a higher level than Swedish monolingual children, when they were tested using Swedish materials. The present study was designed to examine the bilingual advantage of children who use different languages in their everyday life but have the same cultural background and live in their communities in the same way as monolingual children. In four experiments, 488 monolingual and bilingual children were compared with regard to episodic and semantic memory tasks. In experiments 1 and 2 there were 144 boys and 144 girls in three school groups (aged 9-10 years, 13-14 years and 16-17 years) and in three language groups (Persian monolingual, Turkish-Persian bilingual, and Kurdish-Persian bilingual). In experiments 3 and 4, there were 200 male students in two school groups (aged 9-10 years and 16-17 years) and in two language groups (Persian monolingual and Turkish-Persian bilingual). In the episodic memory task, children learned sentences (experiments 1-3) and words (Experiment 4). Letter and category fluency tests were used as measures of semantic memory. To change cognitive demands in memory tasks, in Experiment 1, the integration of nouns and verbs within sentences was manipulated by the level of association between verb and noun in each sentence. At retrieval, a recognition test was used. In experiments 2 and 3, the organization between sentences was manipulated at encoding in Experiment 2 and at both encoding and retrieval in Experiment 3 through the use of categories among the objects. At retrieval, free recall or cued recall tests were employed. In Experiment 4, the bilingual children were tested with regard to both their first and their second language. In all four experiments, a positive effect of bilingualism was found on episodic and semantic memory tasks; the effect was more pronounced for older than younger children. The bilingual advantage was not affected by changing cognitive demands or by using first/second language in memory tasks. The present findings support the cross-language interactivity hypothesis of bilingual advantage.
Cold Ion Escape from the Martian Ionosphere - 2005-2014
NASA Astrophysics Data System (ADS)
Fränz, Markus; Dubinin, Eduard; Andrews, David; Nilsson, Hans; Fedorov, Andrei
2015-04-01
It has always been challenging to observe the flux of ions with energies of less than 10eV escaping from the planetary ionospheres. We here report on new measurements of the ionospheric ion flows at Mars by the ASPERA-3 experiment on board Mars Express. The ion sensor IMA of this experiment has in principle a low-energy cut-off at 10eV but in negative spacecraft charging cold ions are lifted into the range of measurement but the field of view is restricted to about 4x360 deg. In a recent paper Nilsson et al. (Earth Planets Space, 64, 135, 2012) tried to use the method of long-time averaged distribution functions to overcome these constraints. In this paper we first use the same method to show that we get results consistent with this when using ASPERA-3 observations only. But then we can show that these results are inconsistent with observations of the local plasma density by the MARSIS radar instrument on board Mars Express. We demonstrate that the method of averaged distribution function can deliver the mean flow speed of the plasma but the low-energy cut-off does usually not allow to reconstruct the density. We then combine measurements of the cold ion flow speed with the plasma density observations of MARSIS to derive the cold ion flux. In an analysis of the combined nightside datasets we show that the main escape channel is along the shadow boundary on the tailside of Mars. At a distance of about 0.5 RM the flux settles at a constant value which indicates that about half of the transterminator ionospheric flow escapes from the planet. To derive the mean escape flux we include all combined observations of ASPERA-3 and MARSIS from 2005 to 2014. Possible mechanism to generate this flux can be the ionospheric pressure gradient between dayside and nightside or momentum transfer from the solar wind via the induced magnetic field since the flow velocity is in the Alfvénic regime.
Mismatch between the eye and the optic lobe in the giant squid.
Liu, Yung-Chieh; Liu, Tsung-Han; Yu, Chun-Chieh; Su, Chia-Hao; Chiao, Chuan-Chin
2017-07-01
Giant squids ( Architeuthis ) are a legendary species among the cephalopods. They live in the deep sea and are well known for their enormous body and giant eyes. It has been suggested that their giant eyes are not adapted for the detection of either mates or prey at distance, but rather are best suited for monitoring very large predators, such as sperm whales, at distances exceeding 120 m and at a depth below 600 m (Nilsson et al. 2012 Curr. Biol. 22 , 683-688. (doi:10.1016/j.cub.2012.02.031)). However, it is not clear how the brain of giant squids processes visual information. In this study, the optic lobe of a giant squid ( Architeuthis dux , male, mantle length 89 cm), which was caught by local fishermen off the northeastern coast of Taiwan, was scanned using high-resolution magnetic resonance imaging in order to examine its internal structure. It was evident that the volume ratio of the optic lobe to the eye in the giant squid is much smaller than that in the oval squid ( Sepioteuthis lessoniana ) and the cuttlefish ( Sepia pharaonis ). Furthermore, the cell density in the cortex of the optic lobe is significantly higher in the giant squid than in oval squids and cuttlefish, with the relative thickness of the cortex being much larger in Architeuthis optic lobe than in cuttlefish. This indicates that the relative size of the medulla of the optic lobe in the giant squid is disproportionally smaller compared with these two cephalopod species. This morphological study of the giant squid brain, though limited only to the optic lobe, provides the first evidence to support that the optic lobe cortex, the visual information processing area in cephalopods, is well developed in the giant squid. In comparison, the optic lobe medulla, the visuomotor integration centre in cephalopods, is much less developed in the giant squid than other species. This finding suggests that, despite the giant eye and a full-fledged cortex within the optic lobe, the brain of giant squids has not evolved proportionally in terms of performing complex tasks compared with shallow-water cephalopod species.
Hsu, Wei-Chun J.; Scala, Federico; Nenov, Miroslav N.; Wildburger, Norelle C.; Elferink, Hannah; Singh, Aditya K.; Chesson, Charles B.; Buzhdygan, Tetyana; Sohail, Maveen; Shavkunov, Alexander S.; Panova, Neli I.; Nilsson, Carol L.; Rudra, Jai S.; Lichti, Cheryl F.; Laezza, Fernanda
2016-01-01
Recent data shows that fibroblast growth factor 14 (FGF14) binds to and controls the function of the voltage-gated sodium (Nav) channel with phenotypic outcomes on neuronal excitability. Mutations in the FGF14 gene in humans have been associated with brain disorders that are partially recapitulated in Fgf14−/− mice. Thus, signaling pathways that modulate the FGF14:Nav channel interaction may be important therapeutic targets. Bioluminescence-based screening of small molecule modulators of the FGF14:Nav1.6 complex identified 4,5,6,7-tetrabromobenzotriazole (TBB), a potent casein kinase 2 (CK2) inhibitor, as a strong suppressor of FGF14:Nav1.6 interaction. Inhibition of CK2 through TBB reduces the interaction of FGF14 with Nav1.6 and Nav1.2 channels. Mass spectrometry confirmed direct phosphorylation of FGF14 by CK2 at S228 and S230, and mutation to alanine at these sites modified FGF14 modulation of Nav1.6-mediated currents. In 1 d in vitro hippocampal neurons, TBB induced a reduction in FGF14 expression, a decrease in transient Na+ current amplitude, and a hyperpolarizing shift in the voltage dependence of Nav channel steady-state inactivation. In mature neurons, TBB reduces the axodendritic polarity of FGF14. In cornu ammonis area 1 hippocampal slices from wild-type mice, TBB impairs neuronal excitability by increasing action potential threshold and lowering firing frequency. Importantly, these changes in excitability are recapitulated in Fgf14−/− mice, and deletion of Fgf14 occludes TBB-dependent phenotypes observed in wild-type mice. These results suggest that a CK2-FGF14 axis may regulate Nav channels and neuronal excitability.—Hsu, W.-C. J., Scala, F., Nenov, M. N., Wildburger, N. C., Elferink, H., Singh, A. K., Chesson, C. B., Buzhdygan, T., Sohail, M., Shavkunov, A. S., Panova, N. I., Nilsson, C. L., Rudra, J. S., Lichti, C. F., Laezza, F. CK2 activity is required for the interaction of FGF14 with voltage-gated sodium channels and neuronal excitability. PMID:26917740
Planskoy, B; Tapper, P D; Bedford, A M; Davis, F M
1996-11-01
Part II of this paper gives the results of applying the TBI methods described in part I, to in vivo patient planning and dosimetry. Patients are planned on nine CT based body slices, five of which pass through the lungs. Planned doses are verified with ten silicon diodes applied bi-laterally to five body sites, at each treatment. LiF TLDs are applied to seven other body sites at the first treatment only. For 84 patients and at least 1016 measurements per body site with the diodes, the mean measured total doses agreed with planned doses within at most 2% except at lung levels, where the mean measured dose was 3% too low. Standard deviations of the measurements about the mean were between 2.4 and 3.1%. For the LiF TLDs, the mean measured doses for all seven body sites were with in +/- 5% of planned doses. A separate assessment of measured entrance and transmitted doses showed that the former agreed well with planned doses, but that the latter tended to be low, especially over the lungs, and that they had a wider dispersion. Possible reasons for this are discussed. These results show measurement uncertainties similar to those for non-TBI treatments of Nilsson et al, Leunens et al and Essers et al. An analysis of the treatment plans showed a mean dose inhomogeneity in the body (75 patients, nine slices) of 19 +/- 6.0% (1 s.d.) and in the lungs (40 patients, five slices) of 9.2 +/- 2.85% (1 s.d.). The conclusions are that, overall, the methods are reasonably satisfactory but that, with an extra effort, even closer agreement between measured and planned doses and a further limited reduction in the body dose inhomogeneity could be obtained. However, if it were thought desirable to make a substantial reduction in the dose inhomogeneity in the body and lungs, this could only be achieved with the available equipment by changing from lateral to anterior-posterior irradiation and any potential advantages of this change would have to be balanced against a likely deterioration in patient comfort and an increase in treatment set-up times.
NASA Astrophysics Data System (ADS)
Shi, Yue
2017-03-01
Background: Recent years have seen considerable effort in associating the shell evolution (SE) for a chain of isotones or isotopes with the underlying nuclear interactions. In particular, it has been fairly well established that the tensor part of the Skyrme interaction is indispensable for understanding certain SE above Z ,N =50 shell closures, as a function of nucleon numbers. Purpose: The purpose of the present work is twofold: (1) to study the effect of deformation due to blocking on the SE above Z ,N =50 shell closures and (2) to examine the optimal parametrizations in the tensor part which gives a proper description of the SE above Z ,N =50 shell closures. Methods: I use the Skyrme-Hartree-Fock-Bogoliubov (SHFB) method to compute the even-even vacua of the Z =50 isotopes and N =50 isotones. For Sb and odd-A Sn isotopes, I perform calculations with a blocking procedure which accounts for the polarization effects, including deformations. Results: The blocking SHFB calculations show that the light odd-A Sb isotopes, with only one valence proton occupying down-sloping Ω =11 /2- and Ω =7 /2+ Nilsson orbits, assume finite oblate deformations. This reduces the energy differences between 11 /2- and 7 /2+ states by about 500 keV for 51Sb56 -66 , bringing the energy-difference curve closer to the experimental one. With une2t1 energy density functional (EDF), which differs from unedf2 parametrization by tensor terms, a better description of the slope of Δ e (π 1 h11 /2-π 1 g7 /2) as a function of neutron number has been obtained. However, the trend of Δ e (π 1 g7 /2-π 2 d5 /2) curve is worse using une2t1 EDF. Δ e (ν 3 s1 /2-ν 2 d5 /2) and Δ e (ν 1 g7 /2-ν 2 d5 /2) curve for N =50 isotones using une2t1 seems to be consistent with experimental data. The neutron SE of Δ e (ν 1 h11 /2-ν 1 g7 /2) and Δ e (ν 1 g7 /2-ν 2 d5 /2) for Sn isotopes are shown to be sensive to αT tensor parameter. Conclusions: Within the Skyrme self-consistent mean-field model, the deformation degree of freedom has to be taken into account for Sb isotopes, N =51 isotones, and odd-A Sn isotopes when discussing variation of quantities like shell gap etc. The tensor terms are important for describing the strong variation of Δ E (Ωπ=11 /2--7 /2+) in Sb isotopes. The SE of 1 /2+ and 7 /2+ states in N =51 isotones may show signature for the existence of tensor interaction. The experimental excitation energies of 11 /2- and 7 /2+ states in odd-A Sn isotopes close to 132Sn give prospects for constraining the αT parameter.
NASA Astrophysics Data System (ADS)
Serk, Henrik; Nilsson, Mats; Schleucher, Jurgen
2017-04-01
Peatlands store >25% of the global soil C pool, corresponding to 1/3 of the contemporary CO2-C in the atmosphere. The majority of the accumulated peat is made up by remains of Sphagnum peat mosses. Thus, understanding how various Sphagnum functional groups respond, and have responded, to increasing atmospheric CO2 and temperature constitutes a major challenge for our understanding of the role of peatlands under a changing climate. We have recently demonstrated (Ehlers et al., 2015, PNAS) that the abundance ratio of two deuterium isotopomers (molecules carrying D at specific intramolecular positions, here D6R/S) of photosynthetic glucose reflects the ratio of oxygenation to carboxylation metabolic fluxes at Rubisco. The photosynthetic glucose is prepared from various plant carbohydrates including cellulose. This finding has been established in CO2 manipulation experiments and observed in carbohydrate derived glucose isolated from herbarium samples of all investigated C-3 species. The isotopomer ratio is connected to specific enzymatic processes thus allowing for mechanistic implicit interpretations. Here we demonstrate a clear increase in net photosynthesis of Sphagnum fuscum in response to the increase of 100 ppm CO2 during the last century as deduced from analysis on S. fuscum remains from peat cores. The D6R/S ratio declines from bottom to top in peat cores, indicating CO2-driven reduction of photorespiration in contemporary moss biomass. In contrast to the hummock-forming S. fuscum, hollow-growing species, e.g. S. majus did not show this response or gave significantly weaker response, suggesting important ecological consequences of rising CO2 on peatland ecosystem services. We hypothesize that photosynthesis in hollow-growing species under water saturation is fully or partly disconnected from the atmospheric CO2 partial pressure and thus showing weaker or no response to increased atmospheric CO2. To further test the field observations we grow both hummock and hollow Sphagnum species in controlled green-house experiments under varying combinations of water table, CO2 and temperature. Preliminary results confirm our interpretations of data from field peat cores. Ehlers, I., Augusti, A., Betson, T.R., Nilsson, M.B., Marshall, J.D. and J. Schleucher (2015) Detecting long-term metabolic shifts using isotopomers: CO2-driven suppression of photorespiration in C3 plants over the 20th century, Proceedings National Academy of Sciences (PNAS), doi: 10.1073/pnas.1504493112
NASA Astrophysics Data System (ADS)
Planskoy, B.; Tapper, P. D.; Bedford, A. M.; Davis, F. M.
1996-11-01
Part II of this paper gives the results of applying the TBI methods described in part I, to in vivo patient planning and dosimetry. Patients are planned on nine CT based body slices, five of which pass through the lungs. Planned doses are verified with ten silicon diodes applied bi-laterally to five body sites, at each treatment. LiF TLDs are applied to seven other body sites at the first treatment only. For 84 patients and at least 1016 measurements per body site with the diodes, the mean measured total doses agreed with planned doses within at most 2% except at lung levels, where the mean measured dose was 3% too low. Standard deviations of the measurements about the mean were between 2.4 and 3.1%. For the LiF TLDs, the mean measured doses for all seven body sites were within
of planned doses. A separate assessment of measured entrance and transmitted doses showed that the former agreed well with planned doses, but that the latter tended to be low, especially over the lungs, and that they had a wider dispersion. Possible reasons for this are discussed. These results show measurement uncertainties similar to those for non-TBI treatments of Nilsson et al, Leunens et al and Essers et al. An analysis of the treatment plans showed a mean dose inhomogeneity in the body (75 patients, nine slices) of
(1 s.d.) and in the lungs (40 patients, five slices) of
(1 s.d.). The conclusions are that, overall, the methods are reasonably satisfactory but that, with an extra effort, even closer agreement between measured and planned doses and a further limited reduction in the body dose inhomogeneity could be obtained. However, if it were thought desirable to make a substantial reduction in the dose inhomogeneity in the body and lungs, this could only be achieved with the available equipment by changing from lateral to anterior - posterior irradiation and any potential advantages of this change would have to be balanced against a likely deterioration in patient comfort and an increase in treatment set-up times.
Experiences of Sexuality Six Years After Stroke: A Qualitative Study.
Nilsson, Marie I; Fugl-Meyer, Kerstin; von Koch, Lena; Ytterberg, Charlotte
2017-06-01
Little is known about the long-term consequences of stroke on sexuality, and studies on how individuals with stroke communicate with health care professionals about information and/or interventions on sexuality are even sparser. To explore experiences of sexuality 6 years after stroke, including communication with health care professionals concerning sexuality. This qualitative study was based on data collected by semistructured interviews with 12 informants 43 to 81 years old 6 years after stroke. Interviews were recorded and transcribed verbatim and thematic analysis was performed. The analysis resulted in the following three themes. Not exclusively negative experiences in sexuality after stroke: Most informants experienced some change in their sexual life from before their stroke. Decreased sexual interest and function were ascribed to decreased sensibility, post-stroke pain, or fatigue. Some informants reported positive changes in sexuality, which were attributed to feelings of increased intimacy. Individual differences and variability on how to handle sexuality after stroke: Different strategies were used to manage unwanted negative changes such as actively trying to adapt by planning time with the partner and decreasing pressure or stress. Open communication about sexuality with one's partner also was described as important. Strikingly, most informants with negative experiences of sexual life attributed these to age or a stage in life and not to the stroke or health issues. Furthermore, they compared themselves with others without stroke but with changes in sexuality, thus achieving a sense of normality. Communication and counseling concerning sexuality-many unmet needs: Experiences of communication with health care professionals varied. Very few informants had received any information or discussed sexuality with health care professionals during the 6 years since the stroke, although such needs were identified by most informants. When encountering individuals with previous stroke, there is a need for vigilance concerning individual experiences of stroke on sexuality to avoid under- or overestimating the impact and to raise the subject, which currently might be seldom. Individuals with long-term diverse consequences of stroke and with different sociodemographic backgrounds were interviewed. Because most individuals in the present study had retained functioning, this could decrease transferability to populations with more severe sequelae after stroke. The individuals in the present study had different experiences of sexuality after stroke. The results point to the importance of acknowledging sexual rehabilitation as part of holistic person-centered stroke rehabilitation. Nilsson MI, Fugl-Meyer K, von Koch L, Ytterberg C. Experiences of Sexuality Six Years After Stroke: A Qualitative Study. J Sex Med 2017;14:797-803. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Meyer, Stefan; Tulej, Marek; Wurz, Peter
2017-04-01
The exploration of habitable environments around the gas giants in the Solar System is of major interest in upcoming planetary missions. Exactly this theme is addressed by the Jupiter Icy Moons Explorer (JUICE) mission of ESA, which will characterise Ganymede, Europa and Callisto as planetary objects and potential habitats [1], [2]. We developed a prototype of the Neutral gas and Ion Mass spectrometer (NIM) of the Particle Environment Package (PEP) for the JUICE mission intended for composition measurements of neutral gas and thermal plasma [3]. NIM/PEP will be used to measure the chemical composition of the exospheres of the icy Jovian moons. Besides direct ion measurement, the NIM instrument is able to measure the inflowing neutral gas in two different modes: in neutral mode the gas enters directly the ion source (open source) and in thermal mode, the gas gets thermally accommodated to wall temperature by several collisions inside an equilibrium sphere before entering the ion source (closed source). We started the development of NIM with detailed ion-optical simulations and optimisations using SIMION software. Based on the ion-optical design we developed a prototype of NIM with several iterations. We tested the prototype NIM under realistic mission conditions and thereby successfully verified its required functionality. We will present the development process from ion-optical simulation up to NIM prototype test results and the concluded flight like design. Furthermore, we will provide an insight into the working principle of NIM and its performance, based on measurement data. References: 1) ESA, "JUICE assessment study report (Yellow Book)", ESA/SRE(2011)18, 2012. 2) O. Grasset, M.K. Dougherty, A. Coustenis, E.J. Bunce, C. Erd, D. Titov, M. Blanc, A. Coates, P. Drossart, L.N. Fletcher, H. Hussmann, R. Jaumann, N. Krupp, J.-P. Lebreton, O. Prieto-Ballesteros, P. Tortora, F. Tosi, T. Van Hoolst, "JUpiter Icy moons Explorer (JUICE): An ESA mission to orbit Ganymede and to characterise the Jupiter system", Planet. Space Sci., 2013, 78, pp. 1 - 21. 3) S. Barabash, P. Wurz, P. Brandt, M. Wieser, M. Holmström, Y. Futaana, G. Stenberg, H. Nilsson, A. Eriksson, M. Tulej, A. Vorburger, N. Thomas, C. Paranicas, D.G. Mitchell, G. Ho, B.H. Mauk, D. Haggerty, J.H. Westlake, M. Fränz, N. Krupp, E. Roussos, E. Kallio, W. Schmidt, K. Szego, S. Szalai, K. Khurana, Xianzhe Jia, C. Paty, R.F. Wimmer-Schweingruber, B. Heber, K. Asamura, M. Grande, H. Lammer, T. Zhang, S. McKenna-Lawlor, S.M. Krimigis, T. Sarris, and D. Grodent, "Particle Environment Package (PEP)," proceedings of the European Planetary Science Congress, 8 (2013), EPSC2013-709.
Lunde, Pernille; Nilsson, Birgitta Blakstad; Bergland, Astrid; Kværner, Kari Jorunn; Bye, Asta
2018-05-04
Noncommunicable diseases (NCDs) account for 70% of all deaths in a year globally. The four main NCDs are cardiovascular diseases, cancers, chronic pulmonary diseases, and diabetes mellitus. Fifty percent of persons with NCD do not adhere to prescribed treatment; in fact, adherence to lifestyle interventions is especially considered as a major challenge. Smartphone apps permit structured monitoring of health parameters, as well as the opportunity to receive feedback. The aim of this study was to review and assess the effectiveness of app-based interventions, lasting at least 3 months, to promote lifestyle changes in patients with NCDs. In February 2017, a literature search in five databases (EMBASE, MEDLINE, CINAHL, Academic Research Premier, and Cochrane Reviews and Trials) was conducted. Inclusion criteria was quantitative study designs including randomized and nonrandomized controlled trials that included patients aged 18 years and older diagnosed with any of the four main NCDs. Lifestyle outcomes were physical activity, physical fitness, modification of dietary habits, and quality of life. All included studies were assessed for risk of bias using the Cochrane Collaboration`s risk of bias tool. Meta-analyses were conducted for one of the outcomes (glycated hemoglobin, HbA 1c ) by using the estimate of effect of mean post treatment with SD or CI. Heterogeneity was tested using the I 2 test. All studies included in the meta-analyses were graded. Of the 1588 records examined, 9 met the predefined criteria. Seven studies included diabetes patients only, one study included heart patients only, and another study included both diabetes and heart patients. Statistical significant effect was shown in HbA 1c in 5 of 8 studies, as well in body weight in one of 5 studies and in waist circumference in one of 3 studies evaluating these outcomes. Seven of the included studies were included in the meta-analyses and demonstrated significantly overall effect on HbA 1c on a short term (3-6 months; P=.02) with low heterogeneity (I 2 =41%). In the long term (10-12 months), the overall effect on HbA 1c was statistical significant (P=.009) and without heterogeneity (I 2 =0%). The quality of evidence according to Grading of Recommendations Assessment, Development and Evaluation was low for short term and moderate for long term. Our review demonstrated limited research of the use of smartphone apps for NCDs other than diabetes with a follow-up of at least 3 months. For diabetes, the use of apps seems to improve lifestyle factors, especially to decrease HbA 1c . More research with long-term follow-up should be performed to assess the effect of smartphone apps for NCDs other than diabetes. ©Pernille Lunde, Birgitta Blakstad Nilsson, Astrid Bergland, Kari Jorunn Kværner, Asta Bye. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.05.2018.
PREFACE: Particles and Fields: Classical and Quantum
NASA Astrophysics Data System (ADS)
Asorey, M.; Clemente-Gallardo, J.; Marmo, G.
2007-07-01
This volume contains some of the contributions to the Conference Particles and Fields: Classical and Quantum, which was held at Jaca (Spain) in September 2006 to honour George Sudarshan on his 75th birthday. Former and current students, associates and friends came to Jaca to share a few wonderful days with George and his family and to present some contributions of their present work as influenced by George's impressive achievements. This book summarizes those scientific contributions which are presented as a modest homage to the master, collaborator and friend. At the social ceremonies various speakers were able to recall instances of his life-long activity in India, the United States and Europe, adding colourful remarks on the friendly and intense atmosphere which surrounded those collaborations, some of which continued for several decades. This meeting would not have been possible without the financial support of several institutions. We are deeply indebted to Universidad de Zaragoza, Ministerio de Educación y Ciencia de España (CICYT), Departamento de Ciencia, Tecnología y Universidad del Gobierno de Aragón, Universitá di Napoli 'Federico II' and Istituto Nazionale di Fisica Nucleare. Finally, we would like to thank the participants, and particularly George's family, for their contribution to the wonderful atmosphere achieved during the Conference. We would like also to acknowledge the authors of the papers collected in the present volume, the members of the Scientific Committee for their guidance and support and the referees for their generous work. M Asorey, J Clemente-Gallardo and G Marmo The Local Organizing Committee
George Sudarshan
| A. Ashtekhar (Pennsylvania State University, USA) |
| L. J. Boya (Universidad de Zaragoza, Spain) |
| I. Cirac (Max Planck Institute, Garching, Germany) |
| G. F. Dell Antonio (Universitá di Roma La Sapienza, Italy) |
| A. Galindo (Universidad Complutense de Madrid, Spain) |
| S. L. Glashow (Boston University, USA) |
| A. M. Gleeson (University of Texas, Austin, USA) |
| C. R. Hagen (Rochester University, NY, USA) |
| J. Klauder (University of Florida, Gainesville, USA) |
| A. Kossakowski (University of Torun, Poland) |
| V.I. Manko (Lebedev Physical Institute, Moscow, Russia) |
| G. Marmo (Universitá Federico II di Napoli e INFN Sezione di Napoli, Italy) |
| N. Mukunda (Indian Institute of Science, Bangalore, India) |
| J. V. Narlikar (Inter-University Centre for Astronomy and Astrophysics, Pune, India) |
| J. Nilsson (University of Goteborg, Sweden) |
| S. Okubo (Rochester University, NY, USA) |
| T. Regge (Politecnico di Torino, Italy) |
| W. Schleich (University of Ulm, Germany) |
| M. Scully (Texas A& M University, USA) |
| S. Weinberg (University of Texas, Austin, USA) |
| M. Asorey (Universidad de Zaragoza, Spain) |
| L. J. Boya (Universidad de Zaragoza, Spain). Co-Chair |
| J. F. Cariñena (Universidad de Zaragoza, Spain) |
| J. Clemente-Gallardo (Universidad de Zaragoza, Spain) |
| F. Falceto (Universidad de Zaragoza, Spain) |
| G. Marmo (Universitá Federico II di Napoli e INFN Sezione di Napoli, Italy) Co-Chair |
| G. Morandi (Universitá di Bologna, Italy) |
| ACHARYA, Raghunath: Arizona State University, USA |
| AGUADO, Miguel M.: Max-Planck-Institut für Quantenoptik, Garching, Germany |
| ASOREY, Manuel: Universidad de Zaragoza, Spain |
| BERETTA, Gian Paolo: Università di Brescia, Italy |
| BHAMATHI, Gopalakrishnan: University of Texas at Austin, USA |
| BOYA, Luis Joaquín: Universidad de Zaragoza, Spain |
| CARIÑENA, José F.: Universidad de Zaragoza, Spain |
| CELEGHINI, Enrico: Università di Firenze & INFN, Italy |
| CHRUSCINSKI, Dariusz: Nicolaus Copernicus University, Torun, Poland |
| CIRILO-LOMBARDO, Diego: Bogoliubov Laboratory of Theoretical Physics (JINR-Dubna), Russia |
| CLEMENTE-GALLARDO, Jesus: BIFI-Universidad de Zaragoza, Spain |
| DE LUCAS, Javier: Universidad de Zaragoza, Spain |
| FALCETO, Fernando: Universidad de Zaragoza, Spain |
| GINOCCHIO, Joseph: Los Alamos National Laboratory, USA |
| GORINI, Vittorio: Universitá' dell' Insubria, Como, Italy |
| INDURAIN, Javier: Universidad de Zaragoza, Spain |
| KLAUDER, John: University of Florida, USA |
| KOSSAKOWSKI, Andrzej: Nicolaus Copernicus University, Torun, Poland |
| MARMO, Giuseppe: Università di Napoli Federico II, Italy |
| MORANDI, Giuseppe: Universitá di Bologna-Italy |
| MUKUNDA, Narasimhaiengar: Indian Institute of Science, Bangalore, India |
| MUÑOZ-CASTAÑEDA, Jose M.: University of Zaragoza, Spain |
| NAIR, RANJIT: Centre for Philosophy & Foundations of Science, New Delhi, India |
| NILSSON, Jan S: University of Gothenburg, Sweden |
| OKUBO, Susumu: University of Rochester, USA |
| PASCAZIO, Saverio: Universitá di Bari, Italy |
| RIVERA HERNÁNDEZ, Rayito: Université Pierre et Marie Curie, Paris, France |
| RODRIGUEZ, Cesar: University of Texas - Austin, USA |
| SCOLARICI, Giuseppe: Universitá del Salento, Lecce, Italy |
| SEGUI, Antonio: Universidad de Zaragoza, Spain |
| SHAPIRO, Ilya: Universidade Federal de Juiz de Fora, Brasil |
| SIMONI, Alberto: Università di Napoli Federico II, Italy |
| SOLOMON, Allan: Open University/ University of Paris VI, UK/France |
| SUDARSHAN, Ashok: |
| SUDARSHAN, George: University of Texas at Austin, USA |
| TULCZYJEW, Wlodzimierz: Universitá di Camerino, Italy |
| UCHIYAMA, Chikako: University of Yamanashi, Japan |
| VENTRIGLIA, Franco: Università di Napoli Federico II, Italy |
| VILASI, Gaetano: Universitá di Salerno, Italy |
| ZACCARIA, Francesco: Universitá di Napoli Federico II, Italy |
PREFACE: Fourth International Workshop on Inelastic Ion-Surface Collisions
NASA Astrophysics Data System (ADS)
Sigmund, Peter
1983-01-01
The Fourth International Workshop on Inelastic Ion-Surface Collisions was held at Hindsgavl Manor near Middelfart, Denmark from 21 to 24 September 1982, following previous workshops held in Murray Hill, New Jersey (1976), Hamilton, Ontario (1978) and Feldkirchen-Westerham, Bavaria (1980). Like in the previous meetings, the underlying idea was to gather a moderately small group of researchers to discuss fundamental physical and chemical problems in a number of areas that are related, but are normally represented at separate conferences focusing on different aspects. The area of inelastic ion-surface collisions has a wide diversity of applications ranging from surface analysis by particle impact through microelectronic and controlled thermonuclear fusion devices to biomolecule identification and solar wind effects in planetary space. There are strong links to surface science and atomic collision physics and their respective applications. The present series of workshops is an attempt to focus on fundamental problems common to all these areas and thus to provide a forum for fruitful interaction. At Middelfart, we were lucky to have an exceptional number of well-presented and stimulating summary talks covering a rather broad range of fundamental processes with the emphasis shifting back and forth between collisional and surface aspects. Moreover, there was a wealth of short contributions on current research, of which many were submitted to the present proceedings. Thanks to the speakers, an active audience, and considerate session chairmen, we had extensive and lively but friendly discussions in an always stimulating atmosphere. This volume contains 11 of 13 invited papers and 15 of the 30 contributions presented orally at the workshop. It should, like the proceedings of the previous workshops, give a balanced survey of the current status of the field, with a slight bias toward recent developments like those in the theory of charge states of sputtered atoms, and others. All papers have undergone a normal, and occasionally extensive, refereeing procedure. In the midst of the editing process, I received the news that one of the invited speakers, Morton Traum of Bell Laboratories, had died at age 41 on 1 December, 1982 in Stoughton, Wisconsin. Mort had delivered a superb talk on Desorption and Sputtering by Electronic Processes and had been one of the most active participants and perhaps the most broadly oriented one of the workshop. His intense curiosity and serene charm, combined with a solid background in all parts of surface science, contributed stimulating ideas to most of the topics discussed. In preparing the workshop, I got much useful advice and constructive criticism from the members of the international committee. The assistance of the members of the local committee, Nils Andersen, Flemming Besenbacher, Jens Nørskov and Jens Onsgaard, as well as Erling Hartmann, Tove Nyberg and my wife Pia was instrumental at various stages before, during and after the workshop. Generous funding was received from the Office of Naval Research, the Danish Natural Science Research Council, NORDITA, the Nordic Accelerator Committee, the Research Foundation of Odense University and the Danish Provincial Bank. It is a pleasure to acknowledge the professional service of the Hindsgavl Conference Center and the smooth cooperation with Dr N R Nilsson, executive editor of Physica Scripta.
Secondary organic aerosols - formation and ageing studies in the SAPHIR chamber
NASA Astrophysics Data System (ADS)
Spindler, Christian; Müller, Lars; Trimborn, Achim; Mentel, Thomas; Hoffmann, Thorsten
2010-05-01
Secondary organic aerosol (SOA) formation from oxidation products of biogenic volatile organic compounds (BVOC) constitutes an important coupling between vegetation, atmospheric chemistry, and climate change. Such secondary organic aerosol components play an important role in particle formation in Boreal regions ((Laaksonen et al., 2008)), where biogenic secondary organic aerosols contribute to an overall negative radiative forcing, thus a negative feed back between vegetation and climate warming (Spracklen et al., 2008). Within the EUCAARI project we investigated SOA formation from mixtures of monoterpenes (and sesquiterpenes) as emitted typically from Boreal tree species in Southern Finland. The experiments were performed in the large photochemical reactor SAPHIR in Juelich at natural light and oxidant levels. Oxidation of the BVOC mixtures and SOA formation was induced by OH radicals and O3. The SOA was formed on the first day and then aged for another day. The resulting SOA was characterized by HR-ToF-AMS, APCI-MS, and filter samples with subsequent H-NMR, GC-MS and HPLC-MS analysis. The chemical evolution of the SOA is characterized by a fast increase of the O/C ratio during the formation process on the first day, stable O/C ratio during night, and a distinctive increase of O/C ratio at the second day. The increase of the O/C ratio on the second day is highly correlated to the OH dose and is accompanied by condensational growth of the particles. We will present simultaneous factor analysis of AMS times series (PMF, Ulbrich et al., 2009 ) and direct measurements of individual chemical species. We found that four factors were needed to represent the time evolution of the SOA composition (in the mass spectra) if oxidation by OH plays a mayor role. Corresponding to these factors we observed individual, representative molecules with very similar time behaviour. The correlation between tracers and AMS factors is astonishingly good as the molecular tracers represented only a very small mass fraction of the factors. There is indication that some factors grow at the cost of the other suggesting a set of successive generations of oxidation products. This conversion could proceed either by direct condensed phase processes or by an evaporation-oxidation-recondensation mechanism. On the other hand it seems that the factors evolve in parallel, representing products of multiple oxidation which appear on different time scales in the particulate phase. These findings will be discussed with respect to their importance for ageing processes of atmospheric organic aerosols. References Laaksonen, A., Kulmala, M., O'Dowd, C. D., Joutsensaari, J., Vaattovaara, P., Mikkonen, S., Lehtinen, K. E. J., Sogacheva, L., Dal Maso, M., Aalto, P., Petaja, T., Sogachev, A., Yoon, Y. J., Lihavainen, H., Nilsson, D., Facchini, M. C., Cavalli, F., Fuzzi, S., Hoffmann, T., Arnold, F., Hanke, M., Sellegri, K., Umann, B., Junkermann, W., Coe, H., Allan, J. D., Alfarra, M. R., Worsnop, D. R., Riekkola, M. L., Hyotylainen, T., and Viisanen, Y.: The role of VOC oxidation products in continental new particle formation, Atmospheric Chemistry and Physics, 8, 2657-2665, 2008 Spracklen, D. V., Bonn, B., and Carslaw, K. S.: Boreal forests, aerosols and the impacts on clouds and climate, Philosophical Transactions of the Royal Society a-Mathematical Physical and Engineering Sciences, 366, 4613-4626, 10.1098/rsta.2008.0201, 2008 Ulbrich, I. M., Canagaratna, M. R., Zhang, Q., Worsnop, D. R., and Jimenez, J. L.: Interpretation of organic components from Positive Matrix Factorization of aerosol mass spectrometric data, Atmospheric Chemistry and Physics, 9, 2891-2918, 2009
What Risk? (edited by Roger Bate)
NASA Astrophysics Data System (ADS)
Behrman, E. J.
1999-07-01
Roger Bate, Ed. Butterworth-Heinemann: Oxford, UK. 329 pp. Cloth (1997): ISBN 0-7506-3810-9. 56.95. Paper (1999): ISBN 0 7506 4228 9. 29.95. A train carrying radioactive waste had begun its trip in New York and was close to its destination in California. As it stopped, the engineer called to a bystander, "Congratulations." "What for?" said the man. "You get to die. We calculated that each person along the route would receive one-millionth of the lethal dose of radioactivity. No one has died yet and you are the millionth person." "But I have received only one-millionth of the lethal dose." "That doesn't matter, it's a question of statistics." (This story is paraphrased from Rockwell's piece in The Scientist, March 16, 1998, p 7.) What Risk? contains 15 chapters (by 19 authors) arranged in five categories: methodology, science, science policy, commentaries, and perception. It deals in different ways, broadly speaking, with the problems raised by this anecdote. It would make a splendid textbook for high-school students or college undergraduates for a course dealing with pitfalls in extrapolation, unexpected variables, the proper use of statistics, political correctness and absolute safety, evaluation of the scientific literature, and the interplay of science and politics. Each article has an extensive reference list. Among the specific risks discussed are asbestos, benzene, environmental (secondhand) tobacco smoke, dioxin, ionizing radiation, and carcinogens. Some general principles emerge. (i) Since all organisms have repair mechanisms against environmental damage, there are thresholds for all damaging agents. Therefore, extrapolation from high dose rates to very low levels does not make sense. (ii) Doses and dose rates should not be confused. (iii) There are very large species differences in response to damaging agents. (iv) Unrecognized variables lurk everywhere. (v) The costs of enforcing demonstrably false standards are huge. Here are some illustrations. Nilsson's article on environmental tobacco smoke (ETS) concludes that the dangers are about one order of magnitude less than those currently used for regulatory purposes. The errors arise from misclassification of smoking status, inappropriate controls, confounding factors having to do with lifestyle, and, possibly, heredity. Looked at another way, a child's intake of benzo[a]pyrene during 10 hours from ETS is estimated to be about 250 times less than the amount ingested from eating one grilled sausage. Munby and Weetman's article on benzene and leukemia concludes that the risk of leukemia from nonindustrial exposure is probably zero. The slope of the hypothetically linear dose-effect curve currently in use is too large, the effect at low doses is overestimated, and the linear extrapolation to zero is not justified. The current standard for air quality is about six orders of magnitude below human toxicity levels. Ames and Gold, in the chapter Pollution, Pesticides and Cancer Misconceptions, give a fine summary of the difficulties with animal cancer tests. "Rodent carcinogens are not rare. Half of all chemicals tested in standard high dose animal cancer tests, whether occurring naturally or produced synthetically, are 'carcinogens'. There are high dose effects in these rodent cancer tests that are not relevant to low dose human exposures... Though 99.9 percent of the chemicals humans ingest are natural, the focus of regulatory policy is on synthetic chemicals." For example, more than 1000 chemicals have been identified in coffee: 27 have been tested and 19 are rodent carcinogens at the high levels at which these tests are carried out. Dioxin has been called the most toxic chemical known to man. Máller shows that this is not true by any measure. Part of the confusion is based on the fact that guinea pigs are killed by doses thousands of times less than those which affect humans. The chief symptom of dioxin exposure in humans is acne. The chapter that most surprised me was that by Jaworowski on ionizing radiation. First, the extrapolation of data on the survivors of the Hiroshima and Nagasaki bombings involves dose rates on the order of 5000 mSv/year. For these dose rates, the effects are well established. The average natural dose rate (from the unperturbed environment) is about 2.4 mSv/year. Average additional levels resulting from the Chernobyl accident in Central Europe were about 0.01 mSv/year. So, are there measurable effects at these low dose rates? The linear extrapolation model says yes. But there is no evidence to support this model. Indeed, the author refers to a large body of literature (more than 1000 publications) which is said to show that not only are these low dose rates not harmful, but they are actually beneficial. Examples: people in houses with higher than average radon levels show a lower mortality from lung cancer. The number of birth defects in Hungary in the two years following Chernobyl was smaller than in the years preceding it. At low dose rates, the incidence of neoplasms in irradiated mice is lower than in nonirradiated controls. There are other examples. This literature should be critically examined. Then there is the question of cost. Funds are limited. Are we spending our money wisely? Ames and Gold give some numbers that suggest not. The average toxin control program costs 60 times more per life-year saved than an injury prevention program and 150 times more than a health care program. Chemical educators could do much for humanity by encouraging study of the material in this book.
FOREWORD: Workshop on "Very Hot Astrophysical Plasmas"
NASA Astrophysics Data System (ADS)
Koch-Miramond, Lydie; Montemerie, Thierry
1984-01-01
A Workshop on "Very Hot Astrophysical Plasmas" was held in Nice, France, on 8-10 November 1982. Dedicated mostly to theoretical, observational, and experimental aspects of X-ray astronomy and related atomic physics, it was the first of its kind to be held in France. The Workshop was "European" in the sense that one of its goals (apart from pure science) was to gather the European astronomical community in view of the forthcoming presentation of the "X-80" project for final selection to be the next scientific satellite of the European Space Agency. We now know that the Infrared Space Observatory has been chosen instead, but the recent successful launch of EXOSAT still keeps X-ray astronomy alive, and should be able to transfer, at least for a time, the leadership in this field from the U.S. to Europe, keeping in mind the competitive level of our Japanese colleagues. (With respect to the selection of ISO, one should also keep in mind that observations in the infrared often bring material relevant to the study of X-ray sources!) On a longer time scale, the Workshop also put emphasis on several interesting projects for the late eighties-early nineties, showing the vitality of the field in Europe. Some proposals have already taken a good start, like XMM, the X-ray Multi-Mirror project, selected by ESA last December for an assessment study in 1983. The present proceedings contain most of the papers that were presented at the Workshop. Only the invited papers were presented orally, contributed papers being presented in the form of posters but summarized orally by rapporteurs. To make up this volume, the written versions of these papers were either cross-reviewed by the Invited Speakers, or refereed by the Rapporteurs (for contributed papers) and edited by us, when necessary. Note, however, that the conclusions of the Workshop, which were kindly presented by Richard McCray, have already appeared in the "News and Views" section of Nature (301, 372, 1983). Altogether, the present proceedings aim at giving an up-to-date overview of X-ray astronomy, and may be taken also as a kind of "status report" on European projects in the field. As such, it should hopefully be useful to the astronomical community at large. But it is certainly worthwhile to recall that the Workshop (hence, this volume) would not have been possible without the help of many people, especially on location, in the city of Nice. The organizers received a competent and dedicated help from the Observatoire de Nice (interesting absorption effects could be seen while ascending the Mont-Gros in the fog — and also during the lunch under the Grande Coupole!), from the "Mutuelle Générale de l'Education Nationale", which provided a convenient and modern building to hold the Workshop, and from the City of Nice, which arranged a magnificent — if rainy — cocktail party at the Villa Massha. Thanks are also due to all our sponsors for financial help. We want to thank more particularly Pr Raymond Michard, Director of the Observatoire de Nice, and several other people there: Françoise Bely-Dubau, Danièle Benotto, Renata Feldmann, Paul Faucher. In Saclay and during the Workshop, we all appreciated the efficient collaboration of Claudine Belin and Raymonde Boschiero, while after the Workshop, Nils Robert Nilsson was of great help as Manuscript Editor for these proceedings. In spite of the poor weather, already alluded to — and which turned out to be the worst over all France for decades — and thanks to the cooperation of all, we do think it was really...— a Nice Workshop.
EDITORIAL: Nano-enhanced! Nano-enhanced!
NASA Astrophysics Data System (ADS)
Demming, Anna
2010-08-01
In the early 19th century, a series of engineering and scientific breakthroughs by Nicolas Léonard Sadi Carnot, James Watt and many others led to the foundations of thermodynamics and a new pedigree of mechanical designs that reset the standards of engineering efficiency. The result was the industrial revolution. In optical- and electronics- based nanotechnology research, a similarly subtle bargain is being made; we cannot alter the fact that systems have a finite response to external excitations, but what we can do is enhance that response. The promising attributes of ZnO have long been recognised; its large band gap and high exciton binding energy lend it to a number of applications from laser diodes, LEDs, optical waveguides and switches, and acousto-optic applications to sun cream. When this material is grown into nanowires and nanorods, the material gains a whole new dimension, as quantum confinement effects come into play. Discovery of the enhanced radiative recombination, which has potential for exploitation in many optical and opto-electronic applications, drove intensive research into investigating these structures and into finding methods to synthesise them with optimised properties. This research revealed further subtleties in the properties of these materials. One example is the work by researchers in the US reporting synthesis procedures that produced a yield—defined as the weight ratio of ZnO nanowires to the original graphite flakes—of 200%, and which also demonstrated, through photoluminescence analysis of nanowires grown on graphite flakes and substrates, that graphite induces oxygen vacancies during annealing, which enhances the deep-level to near-band-edge emission ratio [1]. Other one-dimensional materials that provide field emission enhancements include carbon nanotubes, and work has been performed to find ways of optimising the emission efficiency from these structures, such as through control of the emitter density [2]. One of the advantages of ZnO nanowires for field emission devices has been greater control over the electronic properties. Alternative morphologies of ZnO nanostructures have also been explored for field emission enhancements, such as urchin structures, which provide field enhancement factors of 1239, but with the additional benefit of greater stability [3]. Theoretical investigations to understand the mechanisms behind these field enhancements have also grown increasingly more sophisticated, through both analytical techniques and finite theorems. Results from a comparison of these two approaches in the form of Mie theory and the finite element method, using a dipole oscillator as the excitation source, were reported recently by researchers from Duke University, USA [4]. The work found excellent agreement in terms of amplitude, plasmon resonance peak position and full width at half-maximum. These field enhancements lend themselves to a range of technological applications, such as the demonstrated potential of plasmonic interactions in DNA sensing arrays [5]. As well as plasmon resonances, Bragg diffraction in nanoparticles also has the potential to provide enhanced system responses. Researchers in Taiwan have shown enhancements in the acceptance angle as well as the photoresponsivity of n-ZnO/p-si photodiodes with the use of a monolayer of silica nanoparticles [6]. In this issue, researchers in Italy and Japan report work on enhancing the cathodoluminescence from SiC-based systems. They investigate the role of a shell of amorphous silica in core/shell 3C-SiC/SiO2 nanowires and observe a shell-induced enhancement of the SiC near-band-edge emission, which is attributed to carrier diffusion from the shell to the core, promoted by the alignment of the SiO2 and SiC bands in a type I quantum well [7]. Their research is another demonstration of how nanostructures provide enhancements to system responses through a wide range of mechanisms, a breadth of creativity that is mirrored in the approaches to investigating and exploiting these structures. References [1] Banerjee D, Lao J Y, Wang D Z, Huang J Y Steeves D, Kimball B and Ren Z F 2004 Nanotechnology 15 4040-9 [2] Nilsson L, Groening O, Emmenegger C, Kuettel O, Schaller E, Schlapbach L, Kind H, Bonard J-M and Kern K 2000Appl. Phys. Lett. 76 2071-3 [3] Jiang H, Hu J, Gu F and Li C 2009 Nanotechnology 20 055706 [4] Khoury C G, Norton S J and Vo-Dinh T 2010 Nanotechnology 21 315203 [5] Le Moal E, Lévéque-Fort S, Potier M-C and Fort E 2009 Nanotechnology 20 225502 [6] Chen C-P, Lin P-H, Chen L-Y, Ke M-Y, Cheng Y-W and Huang J-J 2009 Nanotechnology 20 245204 [7] Fabbri F, Rossi F, Attolini G, Salviati G, Iannotta S, Aversa L, Verucchi R, Nardi M, Fukata N, Dierre B and Sekiguchi T 2010 Nanotechnology 21 345702
Safety issues in cultural heritage management and critical infrastructures management
NASA Astrophysics Data System (ADS)
Soldovieri, Francesco; Masini, Nicola; Alvarez de Buergo, Monica; Dumoulin, Jean
2013-12-01
This special issue is the fourth of its kind in Journal of Geophysics and Engineering , containing studies and applications of geophysical methodologies and sensing technologies for the knowledge, conservation and security of products of human activity ranging from civil infrastructures to built and cultural heritage. The first discussed the application of novel instrumentation, surface and airborne remote sensing techniques, as well as data processing oriented to both detection and characterization of archaeological buried remains and conservation of cultural heritage (Eppelbaum et al 2010). The second stressed the importance of an integrated and multiscale approach for the study and conservation of architectural, archaeological and artistic heritage, from SAR to GPR to imaging based diagnostic techniques (Masini and Soldovieri 2011). The third enlarged the field of analysis to civil engineering structures and infrastructures, providing an overview of the effectiveness and the limitations of single diagnostic techniques, which can be overcome through the integration of different methods and technologies and/or the use of robust and novel data processing techniques (Masini et al 2012). As a whole, the special issue put in evidence the factors that affect the choice of diagnostic strategy, such as the material, the spatial characteristics of the objects or sites, the value of the objects to be investigated (cultural or not), the aim of the investigation (knowledge, conservation, restoration) and the issues to be addressed (monitoring, decay assessment). In order to complete the overview of the application fields of sensing technologies this issue has been dedicated to monitoring of cultural heritage and critical infrastructures to address safety and security issues. Particular attention has been paid to the data processing methods of different sensing techniques, from infrared thermography through GPR to SAR. Cascini et al (2013) present the effectiveness of a remote sensing technique known as synthetic aperture radar at medium (ERS-ENVISAT) and high (COSMO-SkyMed) resolution for the detection and monitoring of facilities (i.e. buildings/infrastructures) in subsiding areas. In this paper, the results are presented with reference to a densely urbanized flat area in southern Italy, so as to show how the appropriate use of DInSAR data at different scales can help in the detection and monitoring of damageable facilities. Battaglini et al (2013) deal with a methodology for accurately estimating the behaviour of a landfill system in terms of biogas release to the atmosphere. In addition, the link between the flux measurements of biogas release and thermal anomalies detected by infrared radiometry is also discussed. The main benefit of the approach presented is a significant increase to the energy recovered from the landfill site by means of an optimal collection of biogas, which implies a reduction of the total anthropogenic methane originated from the disposal of waste. Dumoulin et al (2013) present an interesting technological solution for the thermal monitoring of a bridge deck. The system integrates an uncooled infrared camera with other sensors (i.e. a weather station and a GPS) and the detection of the inner structure of the deck is achieved by pulse phase thermography (PPT) and principal component thermography (PCT) approaches, so that a first characterization of the inner structure of the deck is proposed. Pappalardo et al (2013) show the advanced versions of the BSC-XRF (beam stability controlled—x-ray fluorescence) and PIXE-alpha (particle induced x-ray emission, using low energy alpha particles) portable spectrometers, developed at the Landis laboratory of the LNS-INFN and IBAM-CNR in Catania, Italy. Several analysis results are reviewed for data from various Sicilian sites and recent data, about the Via Capuana settlement in Licodia Eubea, are also presented and discussed for the first time. Drdácký and Slížková (2013) present two methods as peeling tests, also known as the 'Scotch tape' method, and surface water uptake measurements, using a digitized micro-tube for assessing material characteristics and consolidation effects on historic stone and mortar. Both methods are reviewed by pointing out both the advantages and the drawbacks. Solimene et al (2013) present a novel data processing technique based on the inverse electromagnetic scattering for small and weak target detection and localization. They start from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed to detect and localize the weak scatterers. The role of an adequate scattering model is emphasized to drastically improve detection performance in realistic scenarios. Kadioglu et al (2013) deal with the exploitation of ground penetrating radar, enhanced by advanced data processing based on microwave tomography, for the detection and the assessment of structural damage affecting foundation healthiness, of significant relevance for safety management in cultural heritage. An interesting case of the effectiveness of the joint procedure is shown by processing measurements collected during a survey at the Great Mosque of Ilyas Bey, one of the most important cultural heritage features from ancient Miletos-Iona in Soke Aydin, Turkey. Finally, Nordebo et al (2013) provide an interesting analysis of the optimal accuracy and resolution in electrical impedance tomography (EIT), based on the Cramer-Rao lower bound. This study is very important in the set up and analysis of the regularization strategies for the linearized problem at hand. References Battaglini R, Raco B and Scozzari A 2013 Effective monitoring of landfills: flux measurements and thermography enhance efficiency and reduce environmental impact J. Geophys. Eng. 10 064002 Cascini L, Peduto D, Reale D, Arena L, Ferlisi S, Verde S and Fornaro G 2013 Detection and monitoring of facilities exposed to subsidence phenomena via past and current generation SAR sensors J. Geophys. Eng. 10 064001 Drdácký M and Slížková Z 2013 Enhanced affordable methods for assessing material characteristics and consolidation effects on stone and mortar J. Geophys. Eng. 10 064005 Dumoulin J, Crinière A and Averty R 2013 The detection and thermal characterization of the inner structure of the 'Musmeci' bridge deck by infrared thermography monitoring J. Geophys. Eng. 10 064003 Eppelbaum L, Masini N and Soldovieri F 2010 Near surface geophysics for the study and the management of historical resources J. Geophys. Eng. 7 E01 Kadioglu S, Kadioglu Y K, Catapano I and Soldovieri F 2013 Ground penetrating radar and microwave tomography for the safety management of a cultural heritage site: Miletos Ilyas Bey Mosque (Turkey) J. Geophys. Eng. 10 064007 Masini N and Soldovieri F 2011 Integrated non-invasive sensing techniques and geophysical methods for the study and conservation of architectural, archaeological and artistic heritage J. Geophys. Eng. 8 E01 Masini N, Soldovieri F, Alvarez de Buergo M and Dumoulin J 2012 Cultural heritage and civil engineering J. Geophys. Eng. 9 E01 Nordebo S, Gustafsson M, Nilsson B, Sjöden T and Soldovieri F 2013 Fisher information analysis in electrical impedance tomography J. Geophys. Eng. 10 064008 Pappalardo L, Romano F P, Bracchitta D, Massimino A, Palio O and Rizzo F 2013 Obsidian provenance determination using the beam stability controlled BSC-XRF and the PIXE-alpha portable spectrometers of the LANDIS laboratory: the case of the Via Capuana settlement in Licodia Eubea (Sicily) J. Geophys. Eng. 10 064004 Solimene R, Leone G and Dell'Aversano A 2013 MUSIC algorithms for rebar detection J. Geophys. Eng. 10 064006
NASA Astrophysics Data System (ADS)
Dobaczewski, J.; Olbratowski, P.
2005-05-01
We describe the new version (v2.08k) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock or Skyrme-Hartree-Fock-Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. Similarly as in the previous version (v2.08i), all symmetries can be broken, which allows for calculations with angular frequency and angular momentum tilted with respect to the mass distribution. In the new version, three minor errors have been corrected. New Version Program SummaryTitle of program: HFODD; version: 2.08k Catalogue number: ADVA Catalogue number of previous version: ADTO (Comput. Phys. Comm. 158 (2004) 158) Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVA Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Does the new version supersede the previous one: yes Computers on which this or another recent version has been tested: SG Power Challenge L, Pentium-II, Pentium-III, AMD-Athlon Operating systems under which the program has been tested: UNIX, LINUX, Windows-2000 Programming language used: Fortran Memory required to execute with typical data: 10M words No. of bits in a word: 64 No. of lines in distributed program, including test data, etc.: 52 631 No. of bytes in distributed program, including test data, etc.: 266 885 Distribution format:tar.gz Nature of physical problem: The nuclear mean-field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean-field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic ( n-particle n-hole) configurations, deformations, excitation energies, or angular momenta. Similar Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Solution method: The program uses the Cartesian harmonic-oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean field Hamiltonians or Routhians which depend non-linearly on the local neutron and proton densities. Suitable constrains are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in [J. Dobaczewski, J. Dudek, Comput. Phys. Comm. 102 (1997) 166]. Summary of revisions: 1. Incorrect value of the " t" force parameter for SLY5 has been corrected. 2. Opening of an empty file "FILREC" for IWRIRE=-1 has been removed. 3. Call to subroutine "OLSTOR" has been moved before that to "SPZERO". In this way, correct data transferred to "FLISIG", "FLISIM", "FLISIQ" or "FLISIZ" allow for a correct determination of the candidate states for diabatic blocking. These corrections pertain to the user interface of the code and do not affect results performed for forces other than SLY5. Restrictions on the complexity of the problem: The main restriction is the CPU time required for calculations of heavy deformed nuclei and for a given precision required. Pairing correlations are only included for even-even nuclei and conserved simplex symmetry. Unusual features: The user must have access to the NAGLIB subroutine F02AXE or to the LAPACK subroutines ZHPEV or ZHPEVX, which diagonalize complex Hermitian matrices, or provide another subroutine which can perform such a task. The LAPACK subroutines ZHPEV and ZHPEVX can be obtained from the Netlib Repository at University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/cgi-bin/netlibfiles.pl?filename=/lapack/complex16/zhpev.f and http://netlib2.cs.utk.edu/cgi-bin/netlibfiles.pl?filename=/lapack/complex16/zhpevx.f, respectively. The code is written in single-precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Typical running time: One Hartree-Fock iteration for the superdeformed, rotating, parity conserving state of 15266Dy 86 takes about six seconds on the AMD-Athlon 1600+ processor. Starting from the Woods-Saxon wave functions, about fifty iterations are required to obtain the energy converged within the precision of about 0.1 keV. In the case when every value of the angular velocity is converged separately, the complete superdeformed band with precisely determined dynamical moments J can be obtained within forty minutes of CPU on the AMD-Athlon 1600+ processor. This time can be often reduced by a factor of three when a self-consistent solution for a given rotational frequency is used as a starting point for a neighboring rotational frequency. Additional comments: The actual output files obtained during user's test runs may differ from those provided in the distribution file. The differences may occur because various compilers may produce different results in the following aspects: The initial Nilsson spectrum (the starting point of each run) is Kramers degenerate, and thus the diagonalization routine may return the degenerate states in arbitrary order and in arbitrary mixture. For an odd number of particles, one of these states becomes occupied, and the other one is left empty. Therefore, starting points of such runs can widely vary from compiler to compiler, and these differences cannot be controlled. For axial shapes, two quadrupole moments (with respect to two different axes) become very small and their values reflect only a numerical noise. However, depending on which of these two moments is smaller, the intrinsic-frame Euler axes will differ, most often by 180 degrees. Hence, signs of some moments and angular momenta may vary from compiler to compiler, and these differences cannot be controlled. These differences are insignificant. The final energies do not depend on them, although the intermediate results can.
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiatio Environment; 2) Effects of Space Environments on Systems; 3) Space Radiatio Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developmemts: Solar Proton Model; 18) Comparison of ESP, JPL91, KIng/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of model of near infrared spectra qualitative analysis were studied. Method of separate modeling can significantly improve the stability and adaptability of model; but its ability of improving adaptability of model is limited. Method of joint modeling can not only improve the adaptability of the model, but also the stability of model, at the same time, compared to separate modeling, the method can shorten the modeling time, reduce the modeling workload; extend the term of validity of model, and improve the modeling efficiency. The experiment of model adaptability shows that, the correct recognition rate of separate modeling method is relatively low, which can not meet the requirements of application, and joint modeling method can reach the correct recognition rate of 90%, and significantly enhances the recognition effect. The experiment of model stability shows that, the identification results of model by joint modeling are better than the model by separate modeling, and has good application value.
1992-12-01
suspect :mat, -n2 extent predict:.on cas jas ccsiziveiv crrei:=e amonc e v:arious models, :he fandom *.;aik, learn ha r ur e, i;<ea- variable and Bemis...Functions, Production Rate Adjustment Model, Learning Curve Model. Random Walk Model. Bemis Model. Evaluating Model Bias, Cost Prediction Bias. Cost...of four cost progress models--a random walk model, the tradiuonai learning curve model, a production rate model Ifixed-variable model). and a model
Experience with turbulence interaction and turbulence-chemistry models at Fluent Inc.
NASA Technical Reports Server (NTRS)
Choudhury, D.; Kim, S. E.; Tselepidakis, D. P.; Missaghi, M.
1995-01-01
This viewgraph presentation discusses (1) turbulence modeling: challenges in turbulence modeling, desirable attributes of turbulence models, turbulence models in FLUENT, and examples using FLUENT; and (2) combustion modeling: turbulence-chemistry interaction and FLUENT equilibrium model. As of now, three turbulence models are provided: the conventional k-epsilon model, the renormalization group model, and the Reynolds-stress model. The renormalization group k-epsilon model has broadened the range of applicability of two-equation turbulence models. The Reynolds-stress model has proved useful for strongly anisotropic flows such as those encountered in cyclones, swirlers, and combustors. Issues remain, such as near-wall closure, with all classes of models.
ERIC Educational Resources Information Center
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
SUMMA and Model Mimicry: Understanding Differences Among Land Models
NASA Astrophysics Data System (ADS)
Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.
2016-12-01
Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.
Seven Modeling Perspectives on Teaching and Learning: Some Interrelations and Cognitive Effects
ERIC Educational Resources Information Center
Easley, J. A., Jr.
1977-01-01
The categories of models associated with the seven perspectives are designated as combinatorial models, sampling models, cybernetic models, game models, critical thinking models, ordinary language analysis models, and dynamic structural models. (DAG)
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper mainly studies the multi-person parallel modeling method based on the integrated model persistence storage. The integrated model refers to a set of MDDT modeling graphics system, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. And multi-person parallel modeling refers to the need for multi-person collaboration, the role of separation, and even real-time remote synchronization modeling.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particularly understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
ERIC Educational Resources Information Center
Thelen, Mark H.; And Others
1977-01-01
Assesses the influence of model consequences on perceived model affect and, conversely, assesses the influence of model affect on perceived model consequences. Also appraises the influence of model consequences and model affect on perceived model attractiveness, perceived model competence, and perceived task attractiveness. (Author/RK)
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Serviced-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the serviced-oriented loose-coupling approach requires (1) a set of models as web services, (2) the model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing by uncovering models' metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interface using BMI as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. By using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
NASA Astrophysics Data System (ADS)
Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.
2014-12-01
Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment, if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve forcing inputs, parameters, and state variables needed by the model and to provide as state variables and outputs to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface could be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These different models vary in complexity with software structure. Also, we will describe how these complexities were overcome through using this approach and results of model benchmarks within LIS.
Literature review of models on tire-pavement interaction noise
NASA Astrophysics Data System (ADS)
Li, Tan; Burdisso, Ricardo; Sandu, Corina
2018-04-01
Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. Finally, the modeling trend and future direction in this area are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniquesmore » affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.« less
Expert models and modeling processes associated with a computer-modeling tool
NASA Astrophysics Data System (ADS)
Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.
2006-07-01
Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using think aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built in the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.
Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis
2017-02-01
Working Paper Illustrating a Model- Game -Model Paradigm for Using Human Wargames in Analysis Paul K. Davis RAND National Security Research...paper proposes and illustrates an analysis-centric paradigm (model- game -model or what might be better called model-exercise-model in some cases) for...to involve stakehold- ers in model development from the outset. The model- game -model paradigm was illustrated in an application to crisis planning
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the top most levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of database solution. For this reason, value oriented business process modeling which utilizes ER diagram to express the modeling entities and relationships between them are used. However, ER diagrams form the logical level of database schema. To extend possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER-diagrams, and derives conceptual model utilizing ORM modeling approach. Conceptual model extends possibilities for value modeling to other business modeling approaches.
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Drager, Andreas; ...
2015-10-17
In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scalemore » metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.« less
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions the language includes also formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.
ERIC Educational Resources Information Center
Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce
2011-01-01
This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either the historical observations, e.g. the persistence model, or the instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of 4 models often used in literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply the least-square regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model to propose a 'general persistence model'. By comparing its performance against the 4 aforementioned models, it is found that the accuracy of the general persistence model outperforms the other 4 models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
A Primer for Model Selection: The Decisive Role of Model Complexity
NASA Astrophysics Data System (ADS)
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
The Use of Modeling-Based Text to Improve Students' Modeling Competencies
ERIC Educational Resources Information Center
Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan
2015-01-01
This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this linkmore » makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is a an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.« less
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-08-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict themore » reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.« less
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
Airborne Wireless Communication Modeling and Analysis with MATLAB
2014-03-27
research develops a physical layer model that combines antenna modeling using computational electromagnetics and the two-ray propagation model to...predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey...7 2.7. Propagation Modeling : Statistical Models ............................................................8 2.8. Antenna Modeling
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
ERIC Educational Resources Information Center
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.
Jenness, Samuel M; Goodreau, Steven M; Morris, Martina
2018-04-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel , designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel , designed to facilitate the exploration of novel research questions for advanced modelers.
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks
Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina
2018-01-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
NASA Technical Reports Server (NTRS)
Kral, Linda D.; Ladd, John A.; Mani, Mori
1995-01-01
The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log-layer and free shear layers. Also, the Sparlart-Allmaras model is not grid dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon models are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are in further benchmarking the Menter blended k-w/k-epsilon and algorithmic improvements to reduce CPU time of the two-equation model.
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
BioModels: expanding horizons to include more modelling approaches and formats
Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi
2018-01-01
Abstract BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614
NASA Astrophysics Data System (ADS)
Justi, Rosária S.; Gilbert, John K.
2002-04-01
In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.
Aspinall, Richard
2004-08-01
This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time which presents challenges for validation and testing of empirical models. The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed based on hypothesised relationships based on consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.
NASA Astrophysics Data System (ADS)
Aktan, Mustafa B.
The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.
Validation of Groundwater Models: Meaningful or Meaningless?
NASA Astrophysics Data System (ADS)
Konikow, L. F.
2003-12-01
Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
Using the Model Coupling Toolkit to couple earth system models
Warner, J.C.; Perlin, N.; Skyllingstad, E.D.
2008-01-01
Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. ?? 2008 Elsevier Ltd. All rights reserved.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration purpose, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from independent model, which is obtained from fitting regression models separately to each claim category, and dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using Frank copula is the best model since the risk premiums estimated under this model are closely approximate to the actual claims experience relative to the other copula models.
2006-03-01
models, the thesis applies a biological model, the Lotka - Volterra predator- prey model, to a highly suggestive case study, that of the Irish Republican...Model, Irish Republican Army, Sinn Féin, Lotka - Volterra Predator Prey Model, Recruitment, British Army 16. PRICE CODE 17. SECURITY CLASSIFICATION OF...weaknesses of sociological and biological models, the thesis applies a biological model, the Lotka - Volterra predator-prey model, to a highly suggestive
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
Examination of various turbulence models for application in liquid rocket thrust chambers
NASA Technical Reports Server (NTRS)
Hung, R. J.
1991-01-01
There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, Reynolds stress/flux model, zero equation model, one equation model, two equation k-epsilon model, multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The natures of turbulence are randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including physical strength, weakness, limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop, specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the recommendation of the Reynolds stress model.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared wery well with the experimental data, and performed better than the Thomas model near the walls.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua
2012-04-01
To analyze the reliability of the dento-maxillary models created by cone-beam CT and rapid prototyping (RP). Plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT and 3-D models were formed after the calculation and reconstruction of software. Then, computerized composite models (RP models) were produced by rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with SPSS17.0 software package. For crown widths, dental arch lengths and crowding, there were significant differences(P<0.05) among the 3 models, but the dental arch widths were on the contrary. Measurements on 3-D models were significantly smaller than those on other two models(P<0.05). Compared with 3-D models, RP models had more numbers which were not significantly different from those on plaster models(P>0.05). The regression coefficient among three models were significantly different(P<0.01), ranging from 0.8 to 0.9. But between RP and plaster models was bigger than that between 3-D and plaster models. There is high consistency within 3 models, while some differences were accepted in clinic. Therefore, it is possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2013-12-01
Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.
1997-01-01
In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA) a new model construction environment. As the user constructs a model using SIGMA the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a. demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Bayesian Data-Model Fit Assessment for Structural Equation Modeling
ERIC Educational Resources Information Center
Levy, Roy
2011-01-01
Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…
Evolution of computational models in BioModels Database and the Physiome Model Repository.
Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar
2018-04-12
A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is very important for a distributed hydrological model in the application. There are some challenges including the model's spatial resolution effect, model performance and accuracy and so on. To cope with the challenge of the model's spatial resolution effect, different model resolution including 1000m*1000m, 600m*600m, 500m*500m, 400m*400m, 200m*200m were used to build the distributed hydrological model—Liuxihe model respectively. The purpose is to find which one is the best resolution for Liuxihe model in Large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data digital elevation model (DEM), soil type and land use type are downloaded from the website freely. The model parameters are optimized by using an improved Particle Swarm Optimization(PSO) algorithm; And parameter optimization could reduce the parameter uncertainty that exists for physically deriving model parameters. The different model resolution (200m*200m—1000m*1000m ) are proposed for modeling the Liujiang River basin flood with the Liuxihe model in this study. The best model's spatial resolution effect for flood simulation and forecasting is 200m*200m.And with the model's spatial resolution reduction, the model performance and accuracy also become worse and worse. When the model resolution is 1000m*1000m, the flood simulation and forecasting result is the worst, also the river channel divided based on this resolution is differs from the actual one. To keep the model with an acceptable performance, minimum model spatial resolution is needed. The suggested threshold model spatial resolution for modeling the Liujiang River basin flood is a 500m*500m grid cell, but the model spatial resolution with a 200m*200m grid cell is recommended in this study to keep the model at a best performance.
Computational Models for Calcium-Mediated Astrocyte Functions.
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
Computational Models for Calcium-Mediated Astrocyte Functions
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes. PMID:29670517
Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.
2009-01-01
This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the ground for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for this type of models. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations. ?? 2008 Elsevier Ltd. All rights reserved.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation
. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model
. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the reference, are meant to supplement the presentation given at this conference.
Energy modeling. Volume 2: Inventory and details of state energy models
NASA Astrophysics Data System (ADS)
Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.
1981-05-01
An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes such as: supply or demand of energy or of certain types of energy; emergency management of energy; and energy economics. Ten models are described. The purpose, use, and history of the model is discussed, and information is given on the outputs, inputs, and mathematical structure of the model. The models include five models dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
[A review on research of land surface water and heat fluxes].
Sun, Rui; Liu, Changming
2003-03-01
Many field experiments were done, and soil-vegetation-atmosphere transfer(SVAT) models were stablished to estimate land surface heat fluxes. In this paper, the processes of experimental research on land surface water and heat fluxes are reviewed, and three kinds of SVAT model(single layer model, two layer model and multi-layer model) are analyzed. Remote sensing data are widely used to estimate land surface heat fluxes. Based on remote sensing and energy balance equation, different models such as simplified model, single layer model, extra resistance model, crop water stress index model and two source resistance model are developed to estimate land surface heat fluxes and evapotranspiration. These models are also analyzed in this paper.
Examination of simplified travel demand model. [Internal volume forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972 is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes based on the model calibrated for 1970more » produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF mode, such models will not always provide a reliable basis for transportation planning and investment decisions.« less
MPTinR: analysis of multinomial processing tree models in R.
Singmann, Henrik; Kellen, David
2013-06-01
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Understanding and Predicting Urban Propagation Losses
2009-09-01
6. Extended Hata Model ..........................22 7. Modified Hata Model ..........................22 8. Walfisch – Ikegami Model...39 4. COST (Extended) Hata Model ...................40 5. Modified Hata Model ..........................41 6. Walfisch- Ikegami Model...47 1. Scenario One – Walfisch- Ikegami Model ........51 2. Scenario Two – Modified Hata Model ...........52 3. Scenario Three – Urban Hata
A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
A framework for sharing and integrating remote sensing and GIS models based on Web service.
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.
Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e,more » MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.« less
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Comparison of dark energy models after Planck 2015
NASA Astrophysics Data System (ADS)
Xu, Yue-Yao; Zhang, Xin
2016-11-01
We make a comparison for ten typical, popular dark energy models according to their capabilities of fitting the current observational data. The observational data we use in this work include the JLA sample of type Ia supernovae observation, the Planck 2015 distance priors of cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevalliear-Polarski-Linder model can still fit the current observations well, but from an economically feasible perspective, they are not so good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
Parametric regression model for survival data: Weibull regression model as an example
2016-01-01
Weibull regression model is one of the most popular forms of parametric regression model that it provides estimate of baseline hazard function, as well as coefficients for covariates. Because of technical difficulties, Weibull regression model is seldom used in medical literature as compared to the semi-parametric proportional hazard model. To make clinical investigators familiar with Weibull regression model, this article introduces some basic knowledge on Weibull regression model and then illustrates how to fit the model with R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinical relevant statistics such as hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variable. The eha package provides an alternative method to model Weibull regression model. The check.dist() function helps to assess goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualization of Weibull regression model after model development is interesting that it provides another way to report your findings. PMID:28149846
Inner Magnetosphere Modeling at the CCMC: Ring Current, Radiation Belt and Magnetic Field Mapping
NASA Astrophysics Data System (ADS)
Rastaetter, L.; Mendoza, A. M.; Chulaki, A.; Kuznetsova, M. M.; Zheng, Y.
2013-12-01
Modeling of the inner magnetosphere has entered center stage with the launch of the Van Allen Probes (RBSP) in 2012. The Community Coordinated Modeling Center (CCMC) has drastically improved its offerings of inner magnetosphere models that cover energetic particles in the Earth's ring current and radiation belts. Models added to the CCMC include the stand-alone Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model by M.C. Fok, the Rice Convection Model (RCM) by R. Wolf and S. Sazykin and numerous versions of the Tsyganenko magnetic field model (T89, T96, T01quiet, TS05). These models join the LANL* model by Y. Yu hat was offered for instant run earlier in the year. In addition to these stand-alone models, the Comprehensive Ring Current Model (CRCM) by M.C. Fok and N. Buzulukova joined as a component of the Space Weather Modeling Framework (SWMF) in the magnetosphere model run-on-request category. We present modeling results of the ring current and radiation belt models and demonstrate tracking of satellites such as RBSP. Calculations using the magnetic field models include mappings to the magnetic equator or to minimum-B positions and the determination of foot points in the ionosphere.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing
2017-06-18
The dynamic variations of evapotranspiration (ET) and weather data during summer maize growing season in 2013-2015 were monitored with eddy covariance system, and the applicability of two operational models (FAO-PM model and KP-PM model) based on the Penman-Monteith model were analyzed. Firstly, the key parameters in the two models were calibrated with the measured data in 2013 and 2014; secondly, the daily ET in 2015 calculated by the FAO-PM model and KP-PM model was compared to the observed ET, respectively. Finally, the coefficients in the KP-PM model were further revised with the coefficients calculated according to the different growth stages, and the performance of the revised KP-PM model was also evaluated. These statistical parameters indicated that the calculated daily ET for 2015 by the FAO-PM model was closer to the observed ET than that by the KP-PM model. The daily ET calculated from the revised KP-PM model for daily ET was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so the calibration was necessary before using the models to predict the ET. The above results could provide some guidelines on predicting ET with the two models.
Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation
NASA Astrophysics Data System (ADS)
Ichwanul Hakim, Teuku Mohd; Arifianto, Ony
2018-04-01
Turbulence is a movement of air on small scale in the atmosphere that caused by instabilities of pressure and temperature distribution. Turbulence model is integrated into flight mechanical model as an atmospheric disturbance. Common turbulence model used in flight mechanical model are Dryden and Von Karman model. In this minor research, only Dryden continuous turbulence model were made. Dryden continuous turbulence model has been implemented, it refers to the military specification MIL-HDBK-1797. The model was implemented into Matlab Simulink. The model will be integrated with flight mechanical model to observe response of the aircraft when it is flight through turbulence field. The turbulence model is characterized by multiplying the filter which are generated from power spectral density with band-limited Gaussian white noise input. In order to ensure that the model provide a good result, model verification has been done by comparing the implemented model with the similar model that is provided in aerospace blockset. The result shows that there are some difference for 2 linear velocities (vg and wg), and 3 angular rate (pg, qg and rg). The difference is instantly caused by different determination of turbulence scale length which is used in aerospace blockset. With the adjustment of turbulence length in the implemented model, both model result the similar output.
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.
Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R
2016-07-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS ® ); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
NASA Technical Reports Server (NTRS)
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.;
2016-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users.The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...
2016-08-22
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theurich, Gerhard; DeLuca, C.; Campbell, T.
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah
2018-07-01
In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the Area Under the Receiver Operating Characteristic (AUROC), which showed the highest value, belonged to boosted regression trees (0.975) and the lowest value was recorded for generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some models, nevertheless, variability among the prediction of individual models was considerable. Therefore, to reduce uncertainty, creating more generalizable, more stable, and less sensitive models, ensemble forecasting approaches and in particular the EMmedian is recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work is to explore several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) Rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) Calculate model probability using GLUE method, (3) Evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) Evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and fuzzy technique for order performance, which is to identify the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to other methods, as they consider not only the fit between observed and simulated data and the number of parameter, but also uncertainty in model parameters. Considering these factors can prevent from occurring over-complexity and over-parameterization, when selecting the appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization technique is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model for the aim of reducing computation burden, is the key of corresponding researches. However, previous researches are generally based on a stand-alone surrogate model, and rarely make efforts to improve the approximation accuracy of the surrogate model to the simulation model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build ensemble surrogate (ES) model, and conducted a comparative research to select a better ES modeling pattern for the SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model is assembling RBFANN model, SVR model, and Kriging model using set pair weights according their performance, and the other is assembling several Kriging (the best surrogate modeling method of three) models built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed the residuals of the outputs between the best ES model and simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of simulation-optimization process and maintaining high computation accuracy simultaneously. Copyright © 2017 Elsevier B.V. All rights reserved.
Models Archive and ModelWeb at NSSDC
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; King, J. H.
2002-05-01
In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of its usages and users. In response to a growing need by the user community, NSSDC began to develop web-interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the Modelweb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 to 10,000 accesses for either system; in February 2002 13,872 accesses were recorded to the Modelsweb and 7092 accesses to the models archive.
NASA Astrophysics Data System (ADS)
Knoben, Wouter; Woods, Ross; Freer, Jim
2016-04-01
Conceptual hydrologic models consist of a certain arrangement of spatial and temporal dynamics consisting of stores, fluxes and transformation functions, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree
2018-01-01
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio‐economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user‐friendliness, and excellent user‐support. Forty‐one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user‐support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user‐friendly forms, increasing user‐support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.
Hedenstierna, Sofia; Halldin, Peter
2008-04-15
A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert; ...
2018-04-06
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
2014-01-01
Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
Replicating Health Economic Models: Firm Foundations or a House of Cards?
Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee
2017-11-01
Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is by combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, "abcd" model and variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model orVICmodel) with heteroscedastic error variance as well as from a hydrologic model that exhibits different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally for all the models, whereas MM-O assigns higher weights for always the best-performing candidate model under the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
NASA Astrophysics Data System (ADS)
Oursland, Mark David
This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld, Interactive Physics, and students receiving instruction using physical objects. Modeling instruction included activities where students applied the (a) linear model to a variety of situations, (b) linear model to two-rate situations with a constant rate, (c) quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance where modeling instruction, beginning modeling ability, and computer literacy were used as the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling. The instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use equation to solve the problem. The researcher recommends the following practice for modeling instruction based on the conclusions of this study. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences. The conclusions of this study imply that knowledge of the properties about squares improved the students' ability to model a geometric problem more than instruction in data analysis modeling. The uses of computer microworlds such as Interactive Physics in conjunction with cooperative groups are a viable method of modeling instruction.
A physical data model for fields and agents
NASA Astrophysics Data System (ADS)
de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek
2016-04-01
Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
Climate and atmospheric modeling studies
NASA Technical Reports Server (NTRS)
1992-01-01
The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.
Models in Science Education: Applications of Models in Learning and Teaching Science
ERIC Educational Resources Information Center
Ornek, Funda
2008-01-01
In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.
Vector models and generalized SYK models
Peng, Cheng
2017-05-23
Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
Comparative Protein Structure Modeling Using MODELLER
Webb, Benjamin; Sali, Andrej
2016-01-01
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the correct
approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
NASA Astrophysics Data System (ADS)
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons-summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a combination report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian-trapped radiation models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP. LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives the details of the model-data comparisons -- summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Analysis of terahertz dielectric properties of pork tissue
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Xie, Qiaoling; Sun, Ping
2017-10-01
Seeing that about 70% component of fresh biological tissues is water, many scientists try to use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are Debye model, Double Debye model and Cole-Cole model. This work aims to determine a suitable model by comparing three models above with experimental data. These models are applied to fresh pork tissue. By means of least square method, the parameters of different models are fitted with the experimental data. Comparing different models on both dielectric function, the Cole-Cole model is verified the best to describe the experiments of pork tissue. The correction factor α of the Cole-Cole model is an important modification for biological tissues. So Cole-Cole model is supposed to be a priority selection to describe the dielectric properties for biological tissues in the terahertz range.
Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model
NASA Astrophysics Data System (ADS)
Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus
2017-12-01
The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s Model into mathematical model. Voice of customer data in QFD was collected using questionnaire and the questionnaire was developed based on Kano’s model. Then the operational research methodology was applied to build the objective function and constraints in the mathematical model. The relationship between voice of customer and engineering characteristics was modelled using linier regression model. Output of the mathematical model would be detail of engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction as well. Result of this model is 62% .The major contribution of this research is to implement the existing mathematical model to integrate QFD and Kano’s Model in the case study of shoe cabinet.
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2017-06-01
The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as it happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.
Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers
NASA Astrophysics Data System (ADS)
Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.
2017-12-01
Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE model to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models for CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost. In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Harrington
2004-10-25
The purpose of this model report is to provide documentation of the conceptual and mathematical model (Ashplume) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. These aspects of volcanism-related dose calculation are described in the context of the entire igneous disruptive events conceptual model in ''Characterize Framework for Igneous Activity'' (BSC 2004 [DIRS 169989], Section 6.1.1). The Ashplume conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through themore » Yucca Mountain repository and downwind transport of contaminated tephra. The Ashplume mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report update the previous documentation of the Ashplume mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model. In this report, ''Ashplume'' is used when referring to the atmospheric dispersal model and ''ASHPLUME'' is used when referencing the code of that model. Two analysis and model reports provide direct inputs to this model report, namely ''Characterize Eruptive Processes at Yucca Mountain, Nevada and Number of Waste Packages Hit by Igneous Intrusion''. This model report provides direct inputs to the TSPA, which uses the ASHPLUME software described and used in this model report. Thus, ASHPLUME software inputs are inputs to this model report for ASHPLUME runs in this model report. However, ASHPLUME software inputs are outputs of this model report for ASHPLUME runs by TSPA.« less
Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.
Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong
2007-09-01
Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity for estimating this kind of model as well as the problem related to "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, in a Bayesian joint probability with error models are presented to investigate the seasonal dependency of prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and account for no seasonal variations. In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connection amongst model parameters from similar months is not considered within the seasonal variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies some distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posterior (MAP) estimation of a hierarchical error model. Three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability when compared to the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the result of the hierarchical error model shows that most of model parameters are not seasonal variant except for error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicates that the hierarchical error model has great potential for future streamflow predictions.
Huang, Ming Xia; Wang, Jing; Tang, Jian Zhao; Yu, Qiang; Zhang, Jun; Xue, Qing Yu; Chang, Qing; Tan, Mei Xiu
2016-11-18
The suitability of four popular empirical and semi-empirical stomatal conductance models (Jarvis model, Ball-Berry model, Leuning model and Medlyn model) was evaluated based on para-llel observation data of leaf stomatal conductance, leaf net photosynthetic rate and meteorological factors during the vigorous growing period of potato and oil sunflower at Wuchuan experimental station in agro-pastoral ecotone in North China. It was found that there was a significant linear relationship between leaf stomatal conductance and leaf net photosynthetic rate for potato, whereas the linear relationship appeared weaker for oil sunflower. The results of model evaluation showed that Ball-Berry model performed best in simulating leaf stomatal conductance of potato, followed by Leuning model and Medlyn model, while Jarvis model was the last in the performance rating. The root-mean-square error (RMSE) was 0.0331, 0.0371, 0.0456 and 0.0794 mol·m -2 ·s -1 , the normalized root-mean-square error (NRMSE) was 26.8%, 30.0%, 36.9% and 64.3%, and R-squared (R 2 ) was 0.96, 0.61, 0.91 and 0.88 between simulated and observed leaf stomatal conductance of potato for Ball-Berry model, Leuning model, Medlyn model and Jarvis model, respectively. For leaf stomatal conductance of oil sunflower, Jarvis model performed slightly better than Leuning model, Ball-Berry model and Medlyn model. RMSE was 0.2221, 0.2534, 0.2547 and 0.2758 mol·m -2 ·s -1 , NRMSE was 40.3%, 46.0%, 46.2% and 50.1%, and R 2 was 0.38, 0.22, 0.23 and 0.20 between simulated and observed leaf stomatal conductance of oil sunflower for Jarvis model, Leuning model, Ball-Berry model and Medlyn model, respectively. The path analysis was conducted to identify effects of specific meteorological factors on leaf stomatal conductance. The diurnal variation of leaf stomatal conductance was principally affected by vapour pressure saturation deficit for both potato and oil sunflower. The model evaluation suggested that the stomatal conductance models for oil sunflower are to be improved in further research.
Evaluation of chiller modeling approaches and their usability for fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models, and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression air conditioning units, which are commonly known as chillers. Three different models were studied: two are based on first-principles and the third is empirical in nature. The first-principles models are themore » Gordon and Ng Universal Chiller model (2nd generation), and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles. The DOE-2 chiller model as implemented in CoolTools{trademark} was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building, and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The ''CoolTools'' package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
PyMT: A Python package for model-coupling in the Earth sciences
NASA Astrophysics Data System (ADS)
Hutton, E.
2016-12-01
The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors become obstructions to scientists wanting to couple, extend - or simply run - existing models. For scientists whose main focus may not be computer science these barriers become even larger and become significant logistical hurdles. And this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Modeling Interface (BMI). It contains: Tools necessary for coupling models of disparate time and space scales (including grid mappers) Time-steppers that coordinate the sequencing of coupled models Exchange of data between BMI-enabled models Wrappers that automatically load BMI-enabled models into the PyMT framework Utilities that support open-source interfaces (UGRID, SGRID,CSDMS Standard Names, etc.) A collection of community-submitted models, written in a variety of programminglanguages, from a variety of process domains - but all usable from within the Python programming language A plug-in framework for adding additional BMI-enabled models to the framework In this presentation we intoduce the basics of the PyMT as well as provide an example of coupling models of different domains and grid types.
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2017-04-01
Errors made by hydrological models may come from a problem in parameter estimation, uncertainty on observed measurements, numerical problems and from the model conceptualization that simplifies the reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will be first tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1) :161-177.
Downscaling GISS ModelE Boreal Summer Climate over Africa
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew
2015-01-01
The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June- September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, it eliminates the ModelE double ITCZ over the Atlantic and it produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
An online model composition tool for system biology models
2013-01-01
Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advance SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) Model Simulation Interface that generates a visual plot of the simulation according to user’s input, (2) iModel Tool as a platform for users to upload their own models to compose, and (3) SimCom Tool that provides a side by side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions Model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners. And, for more advanced purposes, users will able to access and employ models of the BioModels Database as well. PMID:24006914
A parsimonious dynamic model for river water quality assessment.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types ranging from detailed physical models to simplified conceptual models are available. Actually, a possible middle ground between detailed and simplified models may be parsimonious models that represent the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on data available for correct model application. When there is inadequate data, it is mandatory to focus on a simple river water quality model rather than detailed ones. The study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity one and a quality one. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH(4), and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input or parameters was done based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
The cost of simplifying air travel when modeling disease spread.
Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V
2009-01-01
Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be over determined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day) but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
The number of risk prediction models has been increasingly developed, for estimating about breast cancer in individual women. However, those model performances are questionable. We therefore have conducted a study with the aim to systematically review previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance were extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. Gail and Rosner and Colditz models were the significant models which were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observe ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition the concordance statistic itself is insensitive to measure the improvement of discrimination. Therefore, the new method such as net reclassification index should be considered to evaluate the improvement of the performance of a new develop model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. A. Wasiolek
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the referencemore » biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).« less
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (l) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the heavy precipitation processes will be presented.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P
2011-01-01
To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Marín, Laura; Torrejón, Antonio; Oltra, Lorena; Seoane, Montserrat; Hernández-Sampelayo, Paloma; Vera, María Isabel; Casellas, Francesc; Alfaro, Noelia; Lázaro, Pablo; García-Sánchez, Valle
2011-06-01
Nurses play an important role in the multidisciplinary management of inflammatory bowel disease (IBD), but little is known about this role and the associated resources. To improve knowledge of resource availability for health care activities and the different organizational models in managing IBD in Spain. Cross-sectional study with data obtained by questionnaire directed at Spanish Gastroenterology Services (GS). Five GS models were identified according to whether they have: no specific service for IBD management (Model A); IBD outpatient office for physician consultations (Model B); general outpatient office for nurse consultations (Model C); both, Model B and Model C (Model D); and IBD Unit (Model E) when the hospital has a Comprehensive Care Unit for IBD with telephone helpline, computer, including a Model B. Available resources and activities performed were compared according to GS model (chi-square test and test for linear trend). Responses were received from 107 GS: 33 Model A (31%), 38 Model B (36%), 4 Model C (4%), 16 Model D (15%) and 16 Model E (15%). The model in which nurses have the most resources and responsibilities is the Model E. The more complete the organizational model, the more frequent the availability of nursing resources (educational material, databases, office, and specialized software) and responsibilities (management of walk-in appointments, provision of emotional support, health education, follow-up of drug treatment and treatment adherence) (p<0.05). Nurses have more resources and responsibilities the more complete is the organizational model for IBD management. Development of these areas may improve patient outcomes. Copyright © 2011 European Crohn's and Colitis Organisation. Published by Elsevier B.V. All rights reserved.
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method of a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was due to efficient server model screening. The average backbone accuracy of selected server models was similar to that of top 30% server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in the backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly- Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.
2008-01-01
The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties-stocks, flows and feedbacks-capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time-graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
Johnson, Leigh F; Geffen, Nathan
2016-03-01
Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure or model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of the both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhoods, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST
Winston, Richard B.
2009-01-01
ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
Transient PVT measurements and model predictions for vessel heat transfer. Part II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.
2010-07-01
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models inmore » which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.« less
Comparison of chiller models for use in model-based fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya; Haves, Philip
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools{trademark}, which ismore » empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modelling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principal.
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wittberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity
NASA Astrophysics Data System (ADS)
Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján
2017-06-01
It is widely acknowledged that in the hydrological and meteorological communities, there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and a deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models have to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable for daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA - type model. We produced one-stepahead forecasts from the fitted models and again provided comparisons of the model's performance.
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can signicantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averagingmore » methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that converting precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.« less
Comparison of childbirth care models in public hospitals, Brazil.
Vogt, Sibylle Emilie; Silva, Kátia Silveira da; Dias, Marcos Augusto Bastos
2014-04-01
To compare collaborative and traditional childbirth care models. Cross-sectional study with 655 primiparous women in four public health system hospitals in Belo Horizonte, MG, Southeastern Brazil, in 2011 (333 women for the collaborative model and 322 for the traditional model, including those with induced or premature labor). Data were collected using interviews and medical records. The Chi-square test was used to compare the outcomes and multivariate logistic regression to determine the association between the model and the interventions used. Paid work and schooling showed significant differences in distribution between the models. Oxytocin (50.2% collaborative model and 65.5% traditional model; p < 0.001), amniotomy (54.3% collaborative model and 65.9% traditional model; p = 0.012) and episiotomy (collaborative model 16.1% and traditional model 85.2%; p < 0.001) were less used in the collaborative model with increased application of non-pharmacological pain relief (85.0% collaborative model and 78.9% traditional model; p = 0.042). The association between the collaborative model and the reduction in the use of oxytocin, artificial rupture of membranes and episiotomy remained after adjustment for confounding. The care model was not associated with complications in newborns or mothers neither with the use of spinal or epidural analgesia. The results suggest that collaborative model may reduce interventions performed in labor care with similar perinatal outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both watermore » tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.« less
Modeling approaches in avian conservation and the role of field biologists
Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.
2006-01-01
This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.
NASA Astrophysics Data System (ADS)
Rossman, Nathan R.; Zlotnik, Vitaly A.
2013-09-01
Water resources in agriculture-dominated basins of the arid western United States are stressed due to long-term impacts from pumping. A review of 88 regional groundwater-flow modeling applications from seven intensively irrigated western states (Arizona, California, Colorado, Idaho, Kansas, Nebraska and Texas) was conducted to provide hydrogeologists, modelers, water managers, and decision makers insight about past modeling studies that will aid future model development. Groundwater models were classified into three types: resource evaluation models (39 %), which quantify water budgets and act as preliminary models intended to be updated later, or constitute re-calibrations of older models; management/planning models (55 %), used to explore and identify management plans based on the response of the groundwater system to water-development or climate scenarios, sometimes under water-use constraints; and water rights models (7 %), used to make water administration decisions based on model output and to quantify water shortages incurred by water users or climate changes. Results for 27 model characteristics are summarized by state and model type, and important comparisons and contrasts are highlighted. Consideration of modeling uncertainty and the management focus toward sustainability, adaptive management and resilience are discussed, and future modeling recommendations, in light of the reviewed models and other published works, are presented.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Inter-sectoral comparison of model uncertainty of climate change impacts in Africa
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse
2016-04-01
We present the model results and their uncertainties of an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative concentration pathway (RCP). In this study, we analysed model uncertainty for the Regional Hydrological Models, Global Hydrological Models, Malaria models and Crop models. For the regional hydrological models, we used 2 African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty are originating from the GCM and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes on low flows compared to mean or high flows. For the other sectors, the impact models have the largest share of uncertainty compared to GCM and RCP, especially for Malaria and crop modelling. The overall conclusion of the ISI-MIP is that it is strongly advised to use ensemble modeling approach for climate change impact studies throughout the whole modelling chain.
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling for transistor are also discussed. The proposed DC model is based on the Fager model since it compensates between the number of model's parameters and accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages. Also, the objective is to extend the improved DC model to account for soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used in order to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is modelled using multi-bias s-parameter measurements. The way that the complete model is performed is by using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and local minimum search (multivariable Newton's method) for parasitic elements extraction. Finally, the results of DC modelling and multi-bias s-parameters modelling are presented, and three-device modelling recommendations are discussed.
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
Modeling of Stiffness and Strength of Bone at Nanoscale.
Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M
2017-05-01
Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using a finite element (FE) software abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at various interfaces in the models to account for interfacial delaminations. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate the superiority of performance depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases while an opposite trend is observed for model I. For axisymmetric case, model II shows greater strength and stiffness compared to model I. The collagen-mineral arrangement of bone at nanoscale forms a basic building block of bone. Thus, knowledge of its mechanical properties is of high scientific and clinical interests.
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.
ERIC Educational Resources Information Center
Gerst, Elyssa H.
2017-01-01
The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
ERIC Educational Resources Information Center
Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.
2011-01-01
The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
A toolbox and a record for scientific model development
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, and existing knowledge-based scientific software design tool.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace-Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone.
A decision support model for investment on P2P lending platform
Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace—Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone. PMID:28877234
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for a systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depends on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem, Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models. variable-convergence models. or variable physical fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zones (TAZs) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models such as zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects) are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not consider the observed spatial effects. Across the models with spatial spillover effects, the dual-state models especially zero-inflated negative binomial model offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are stored in SBML format, accepted in SBML and CellML formats, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.
Documenting Models for Interoperability and Reusability ...
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod
CSR Model Implementation from School Stakeholder Perspectives
ERIC Educational Resources Information Center
Herrmann, Suzannah
2006-01-01
Despite comprehensive school reform (CSR) model developers' best intentions to make school stakeholders adhere strictly to the implementation of model components, school stakeholders implementing CSR models inevitably make adaptations to the CSR model. Adaptations are made to CSR models because school stakeholders internalize CSR model practices…
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10 3 to 10 6 K/s) at finalmore » temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).« less
[Bone remodeling and modeling/mini-modeling.
Hasegawa, Tomoka; Amizuka, Norio
Modeling, adapting structures to loading by changing bone size and shapes, often takes place in bone of the fetal and developmental stages, while bone remodeling-replacement of old bone into new bone-is predominant in the adult stage. Modeling can be divided into macro-modeling(macroscopic modeling)and mini-modeling(microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces will become active form of osteoblasts, and then, deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol(a vitamin D3 analog)and teriparatide(human PTH[1-34])could show mini-modeling based bone formation. Histologically, mature, active form of osteoblasts are localized on the new bone induced by mini-modeling, however, only a few cell layer of preosteoblasts are formed over the newly-formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, histological characteristics of bone remodeling and modeling including mini-modeling will be introduced.
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Kharkov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Kharkov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Kharkov modeling: what are the capabilities and limitations of Kharkov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Kharkov modeling techniques? These questions and others will be addressed in this tutorial.
The cerebro-cerebellum: Could it be loci of forward models?
Ishikawa, Takahiro; Tomatsu, Saeka; Izawa, Jun; Kakei, Shinji
2016-03-01
It is widely accepted that the cerebellum acquires and maintain internal models for motor control. An internal model simulates mapping between a set of causes and effects. There are two candidates of cerebellar internal models, forward models and inverse models. A forward model transforms a motor command into a prediction of the sensory consequences of a movement. In contrast, an inverse model inverts the information flow of the forward model. Despite the clearly different formulations of the two internal models, it is still controversial whether the cerebro-cerebellum, the phylogenetically newer part of the cerebellum, provides inverse models or forward models for voluntary limb movements or other higher brain functions. In this article, we review physiological and morphological evidence that suggests the existence in the cerebro-cerebellum of a forward model for limb movement. We will also discuss how the characteristic input-output organization of the cerebro-cerebellum may contribute to forward models for non-motor higher brain functions. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.
Mechanical model development of rolling bearing-rotor systems: A review
NASA Astrophysics Data System (ADS)
Cao, Hongrui; Niu, Linkai; Xi, Songtao; Chen, Xuefeng
2018-03-01
The rolling bearing rotor (RBR) system is the kernel of many rotating machines, which affects the performance of the whole machine. Over the past decades, extensive research work has been carried out to investigate the dynamic behavior of RBR systems. However, to the best of the authors' knowledge, no comprehensive review on RBR modelling has been reported yet. To address this gap in the literature, this paper reviews and critically discusses the current progress of mechanical model development of RBR systems, and identifies future trends for research. Firstly, five kinds of rolling bearing models, i.e., the lumped-parameter model, the quasi-static model, the quasi-dynamic model, the dynamic model, and the finite element (FE) model are summarized. Then, the coupled modelling between bearing models and various rotor models including De Laval/Jeffcott rotor, rigid rotor, transfer matrix method (TMM) models and FE models are presented. Finally, the paper discusses the key challenges of previous works and provides new insights into understanding of RBR systems for their advanced future engineering applications.
NASA Astrophysics Data System (ADS)
Gouvea, Julia; Passmore, Cynthia
2017-03-01
The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic— models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...
2017-07-11
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Modeling of near-wall turbulence
NASA Technical Reports Server (NTRS)
Shih, T. H.; Mansour, N. N.
1990-01-01
An improved k-epsilon model and a second order closure model is presented for low Reynolds number turbulence near a wall. For the k-epsilon model, a modified form of the eddy viscosity having correct asymptotic near wall behavior is suggested, and a model for the pressure diffusion term in the turbulent kinetic energy equation is proposed. For the second order closure model, the existing models are modified for the Reynolds stress equations to have proper near wall behavior. A dissipation rate equation for the turbulent kinetic energy is also reformulated. The proposed models satisfy realizability and will not produce unphysical behavior. Fully developed channel flows are used for model testing. The calculations are compared with direct numerical simulations. It is shown that the present models, both the k-epsilon model and the second order closure model, perform well in predicting the behavior of the near wall turbulence. Significant improvements over previous models are obtained.
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes besides verifying internal validity comparison with other models (external validity) and ideally validation of its predictive properties. The existing uncertainty with any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than without this additional information.
NASA Astrophysics Data System (ADS)
Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.
2013-05-01
This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating it with linear cues retrieved from single imagery. A cue integration between two datasets is facilitated by creating new topological features connecting between the initial model and image lines, with which new model hypotheses (variances to the initial model) are produced. We adopt Minimum Description Length (MDL) principle for competing the model candidates and selecting the optimal model by considering the balanced trade-off between the model closeness and the model complexity. Our preliminary results, combined with the Vaihingen data provided by ISPRS WGIII/4 demonstrate the image-driven modelling cues can compensate the limitations posed by LiDAR data in rooftop modelling.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.
An Immuno-epidemiological Model of Paratuberculosis
NASA Astrophysics Data System (ADS)
Martcheva, M.
2011-11-01
The primary objective of this article is to introduce an immuno-epidemiological model of paratuberculosis (Johne's disease). To develop the immuno-epidemiological model, we first develop an immunological model and an epidemiological model. Then, we link the two models through time-since-infection structure and parameters of the epidemiological model. We use the nested approach to compose the immuno-epidemiological model. Our immunological model captures the switch between the T-cell immune response and the antibody response in Johne's disease. The epidemiological model is a time-since-infection model and captures the variability of transmission rate and the vertical transmission of the disease. We compute the immune-response-dependent epidemiological reproduction number. Our immuno-epidemiological model can be used for investigation of the impact of the immune response on the epidemiology of Johne's disease.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions is examined.
FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.
2018-01-01
The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
Application of surface complexation models to anion adsorption by natural materials
USDA-ARS?s Scientific Manuscript database
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Space Environments and Effects: Trapped Proton Model
NASA Technical Reports Server (NTRS)
Huston, S. L.; Kauffman, W. (Technical Monitor)
2002-01-01
An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the mo'del: a FORTRAN-Callable library, a stand-alone model, and a Web-based model.
The NASA Marshall engineering thermosphere model
NASA Technical Reports Server (NTRS)
Hickey, Michael Philip
1988-01-01
Described is the NASA Marshall Engineering Thermosphere (MET) Model, which is a modified version of the MFSC/J70 Orbital Atmospheric Density Model as currently used in the J70MM program at MSFC. The modifications to the MFSC/J70 model required for the MET model are described, graphical and numerical examples of the models are included, as is a listing of the MET model computer program. Major differences between the numerical output from the MET model and the MFSC/J70 model are discussed.
Wind turbine model and loop shaping controller design
NASA Astrophysics Data System (ADS)
Gilev, Bogdan
2017-12-01
A model of a wind turbine is evaluated, consisting of: wind speed model, mechanical and electrical model of generator and tower oscillation model. Model of the whole system is linearized around of a nominal point. By using the linear model with uncertainties is synthesized a uncertain model. By using the uncertain model is developed a H∞ controller, which provide mode of stabilizing the rotor frequency and damping the tower oscillations. Finally is simulated work of nonlinear system and H∞ controller.
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) Students create models and use simulations ; and 2) Researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring - data is crucial to create effective model. Pittsburgh Science of Learning Center: Resources for modeling, authoring, experimentation. Repository of data and theory. Examples of advanced modeling efforts: SimStudent learns rule-based model. Help-seeking model: Tutors metacognition. Scooter uses machine learning detectors of student engagement.
Modeling for Battery Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick
2017-01-01
For any battery-powered vehicles (be it unmanned aerial vehicles, small passenger aircraft, or assets in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in last few years as the understanding of underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomist. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness, due to the approximations used in developed model, and as a result of such approximations, cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and thus not suited well for online prognostic applications. In addition both multi-physics and atomist models are computationally expensive hence are even less suited to online application An electrochemistry-based model of Li-ion batteries has been developed, that captures crucial electrochemical processes, captures effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are done that allow to retain computational efficiency for online implementation of the model. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development, providing model validation results and EOD prediction results is being presented.
NASA Astrophysics Data System (ADS)
Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.
2017-08-01
Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with regression forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models), and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance ( 50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conforms well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated distributions and non-normal distributions such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types, performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
A toy terrestrial carbon flow model
NASA Technical Reports Server (NTRS)
Parton, William J.; Running, Steven W.; Walker, Brian
1992-01-01
A generalized carbon flow model for the major terrestrial ecosystems of the world is reported. The model is a simplification of the Century model and the Forest-Biogeochemical model. Topics covered include plant production, decomposition and nutrient cycling, biomes, the utility of the carbon flow model for predicting carbon dynamics under global change, and possible applications to state-and-transition models and environmentally driven global vegetation models.
2010-01-01
Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
Drift-Scale Coupled Processes (DST and THC Seepage) Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Dixon
The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Reportmore » is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC Seepage Model and is not used for calibration to measured data.« less
Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P
2018-04-01
What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
Ecosystem Model Skill Assessment. Yes We Can!
Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S.
2016-01-01
Need to Assess the Skill of Ecosystem Models Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. Northeast US Atlantis Marine Ecosystem Model We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. Skill Assessment Is Both Possible and Advisable We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment). PMID:26731540
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.
NASA Astrophysics Data System (ADS)
Duane, G. S.; Selten, F.
2016-12-01
Different models of climate and weather commonly give projections/predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction in run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. Strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained if a supermodel is used to predict the formation of coherent structures or the frequency of such. Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure to adapt inter-model connections give results superior to output averaging not only with highly nonlinear toy systems, but with smaller nonlinearities as occur in climate models.
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to in-situ detect the oil yield of oil shale, based on portable near infrared spectroscopy analytical technology, with 66 rock core samples from No. 2 well drilling of Fuyu oil shale base in Jilin, the modeling and analyzing methods for in-situ detection were researched. By the developed portable spectrometer, 3 data formats (reflectance, absorbance and K-M function) spectra were acquired. With 4 different modeling data optimization methods: principal component-mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection and their combina- tions: PCA-MD + UVE and UVE + PCA-MD, 2 modeling methods: partial least square (PLS) and back propagation artificial neural network (BPANN), and the same data pre-processing, the modeling and analyzing experiment were performed to determine the optimum analysis model and method. The results show that the data format, modeling data optimization method and modeling method all affect the analysis precision of model. Results show that whether or not using the optimization method, reflectance or K-M function is the proper spectrum format of the modeling database for two modeling methods. Using two different modeling methods and four different data optimization methods, the model precisions of the same modeling database are different. For PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of database using K-M function spectrum data format. For BPANN modeling method, UVE, UVE + PCA-MD and PCA- MD + UVE data optimization methods can improve the modeling precision of database using any of the 3 spectrum data formats. In addition to using the reflectance spectra and PCA-MD data optimization method, modeling precision by BPANN method is better than that by PLS method. And modeling with reflectance spectra, UVE optimization method and BPANN modeling method, the model gets the highest analysis precision, its correlation coefficient (Rp) is 0.92, and its standard error of prediction (SEP) is 0.69%.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2015-12-01
Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate for different sources of uncertainty, multimodal analysis such as model combination, model selection, model elimination or model discrimination are becoming more popular. To illustrate theoretical and practical challenges of multimodal analysis, we use an example about microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomena, which is the soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by the following three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid Savannah that is characterized by pulsed precipitation. These five model evolve step-wise such that the first model includes none of these three mechanism, while the fifth model includes the three mechanisms. The basic component of Bayesian multimodal analysis is the estimation of marginal likelihood to rank the candidate models based on their overall likelihood with respect to observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between these five candidate models. The second part discusses some theoretical and practical challenges, which are mainly the effect of likelihood function selection and the marginal likelihood estimation methods on both model ranking and Bayesian model averaging. The study shows that making valid inference from scientific data is not a trivial task, since we are not only uncertain about the candidate scientific models, but also about the statistical methods that are used to discriminate between these models.
Ecosystem Model Skill Assessment. Yes We Can!
Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S
2016-01-01
Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment).
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.
2016-12-01
Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another weight relevant factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue with the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding in which the maximum information entropy probability model of prediction error variances plays an important role and it is Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness in the updating of the structural model especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seem in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results especially when the number of measurement is small.
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
Towards policy relevant environmental modeling: contextual validity and pragmatic models
Miles, Scott B.
2000-01-01
"What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
National Centers for Environmental Prediction
Modeling Mesoscale Modeling Marine Modeling and Analysis Teams Climate Data Assimilation Ensembles and Post / VISION | About EMC EMC > Mesoscale Modeling > MODELS Home Mission Models R & D Collaborators Cyclone Tracks & Verification Implementation Info FAQ Disclaimer More Info MESOSCALE MODELING SREF
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
Uses of Computer Simulation Models in Ag-Research and Everyday Life
USDA-ARS?s Scientific Manuscript database
When the news media talks about models they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...
ERIC Educational Resources Information Center
King, Gillian; Currie, Melissa; Smith, Linda; Servais, Michelle; McDougall, Janette
2008-01-01
A framework of operating models for interdisciplinary research programs in clinical service organizations is presented, consisting of a "clinician-researcher" skill development model, a program evaluation model, a researcher-led knowledge generation model, and a knowledge conduit model. Together, these models comprise a tailored, collaborative…
Modelling Students' Visualisation of Chemical Reaction
ERIC Educational Resources Information Center
Cheng, Maurice M. W.; Gilbert, John K.
2017-01-01
This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Planning Major Curricular Change.
ERIC Educational Resources Information Center
Kirkland, Travis P.
Decision-making and change models can take many forms. One researcher (Nordvall, 1982) has suggested five conceptual models for introducing change: a political model; a rational decision-making model; a social interaction decision model; the problem-solving method; and an adaptive/linkage model which is an amalgam of each of the other models.…
UNITED STATES METEOROLOGICAL DATA - DAILY AND HOURLY FILES TO SUPPORT PREDICTIVE EXPOSURE MODELING
ORD numerical models for pesticide exposure include a model of spray drift (AgDisp), a cropland pesticide persistence model (PRZM), a surface water exposure model (EXAMS), and a model of fish bioaccumulation (BASS). A unified climatological database for these models has been asse...
2009-12-01
Business Process Modeling BPMN Business Process Modeling Notation SoA Service-oriented Architecture UML Unified Modeling Language CSP...system developers. Supporting technologies include Business Process Modeling Notation ( BPMN ), Unified Modeling Language (UML), model-driven architecture
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca
2014-02-15
Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxelsmore » on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.« less
NASA Astrophysics Data System (ADS)
Pincus, R.; Stevens, B. B.; Forster, P.; Collins, W.; Ramaswamy, V.
2014-12-01
The Radiative Forcing Model Intercomparison Project (RFMIP): Assessment and characterization of forcing to enable feedback studies An enormous amount of attention has been paid to the diversity of responses in the CMIP and other multi-model ensembles. This diversity is normally interpreted as a distribution in climate sensitivity driven by some distribution of feedback mechanisms. Identification of these feedbacks relies on precise identification of the forcing to which each model is subject, including distinguishing true error from model diversity. The Radiative Forcing Model Intercomparison Project (RFMIP) aims to disentangle the role of forcing from model sensitivity as determinants of varying climate model response by carefully characterizing the radiative forcing to which such models are subject and by coordinating experiments in which it is specified. RFMIP consists of four activities: 1) An assessment of accuracy in flux and forcing calculations for greenhouse gases under past, present, and future climates, using off-line radiative transfer calculations in specified atmospheres with climate model parameterizations and reference models 2) Characterization and assessment of model-specific historical forcing by anthropogenic aerosols, based on coordinated diagnostic output from climate models and off-line radiative transfer calculations with reference models 3) Characterization of model-specific effective radiative forcing, including contributions of model climatology and rapid adjustments, using coordinated climate model integrations and off-line radiative transfer calculations with a single fast model 4) Assessment of climate model response to precisely-characterized radiative forcing over the historical record, including efforts to infer true historical forcing from patterns of response, by direct specification of non-greenhouse-gas forcing in a series of coordinated climate model integrations This talk discusses the rationale for RFMIP, provides an overview of the four activities, and presents preliminary motivating results.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircrafts in broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy till a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so the ASE models obtained through Kriging interpolation match the model in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
Clarity versus complexity: land-use modeling as a practical tool for decision-makers
Sohl, Terry L.; Claggett, Peter
2013-01-01
The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues causes by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) through the use of scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta
2015-02-01
A historical phenological record and meteorological data of the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and beginning of flowering for two tree species-silver birch Betula pendula and bird cherry Padus racemosa-in Latvia. Model stability is estimated performing multiple model fitting runs using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using sigmoidal development rate and temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of the flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
A toolbox and record for scientific models
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.
Donnolley, Natasha R; Chambers, Georgina M; Butler-Henderson, Kerryn A; Chapman, Michael G; Sullivan, Elizabeth A
2017-08-01
Without a standard terminology to classify models of maternity care, it is problematic to compare and evaluate clinical outcomes across different models. The Maternity Care Classification System is a novel system developed in Australia to classify models of maternity care based on their characteristics and an overarching broad model descriptor (Major Model Category). This study aimed to assess the extent of variability in the defining characteristics of models of care grouped to the same Major Model Category, using the Maternity Care Classification System. All public hospital maternity services in New South Wales, Australia, were invited to complete a web-based survey classifying two local models of care using the Maternity Care Classification System. A descriptive analysis of the variation in 15 attributes of models of care was conducted to evaluate the level of heterogeneity within and across Major Model Categories. Sixty-nine out of seventy hospitals responded, classifying 129 models of care. There was wide variation in a number of important attributes of models classified to the same Major Model Category. The category of 'Public hospital maternity care' contained the most variation across all characteristics. This study demonstrated that although models of care can be grouped into a distinct set of Major Model Categories, there are significant variations in models of the same type. This could result in seemingly 'like' models of care being incorrectly compared if grouped only by the Major Model Category. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,
2013-01-01
Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests to reconsider the utility of model selection criteria, and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.
2013-01-01
Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used modified area under the curve (AUC) statistic and akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values, except for Maxent and better AIC values overall. HEMI created a model with only ten parameters while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.
Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J
2016-01-01
Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
Documenting Models for Interoperability and Reusability (proceedings)
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...
Documenting Models for Interoperability and Reusability
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
NASA Astrophysics Data System (ADS)
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace iron making process. Such physical and chemical phenomena are taking place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping output and input between models and an iterative scheme are developed to establish communications between models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution on local phenomena and minimizing the model assumptions.
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexitymore » (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.« less
A Hybrid 3D Indoor Space Model
NASA Astrophysics Data System (ADS)
Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is for emergency response which requires route planning inside and outside of a building. Route planning requires detailed information related to indoor and outdoor environment. Indoor navigation network models including Geometric Network Model (GNM), Navigable Space Model, sub-division model and regular-grid model lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of indoor navigation network is based on surveying control points and it is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for the buildings which do not have proper 2D/3D geometrical models or they lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantical space.
Modified hyperbolic sine model for titanium dioxide-based memristive thin films
NASA Astrophysics Data System (ADS)
Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana
2018-03-01
Since the emergence of memristor as the newest fundamental circuit elements, studies on memristor modeling have been evolved. To date, the developed models were based on the linear model, linear ionic drift model using different window functions, tunnelling barrier model and hyperbolic-sine function based model. Although using hyperbolic-sine function model could predict the memristor electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. On the one hand, the addition of window function cannot provide an improved fitting. By multiplying the Yakopcic’s state variable model to Chang’s model on the other hand resulted in the closer agreement with the TiO2 thin film experimental data. The percentage error was approximately 2.15%.
Resident Role Modeling: "It Just Happens".
Sternszus, Robert; Macdonald, Mary Ellen; Steinert, Yvonne
2016-03-01
Role modeling by staff physicians is a significant component of the clinical teaching of students and residents. However, the importance of resident role modeling has only recently emerged, and residents' understanding of themselves as role models has yet to be explored. This study sought to understand residents' perceptions of themselves as role models, describe how residents learn about role modeling, and identify ways to improve resident role modeling. Fourteen semistructured interviews were conducted with residents in internal medicine, general surgery, and pediatrics at the McGill University Faculty of Medicine between April and September 2013. Interviews were audio-recorded and subsequently transcribed for analysis; iterative analysis followed principles of qualitative description. Four primary themes were identified through data analysis: residents perceived role modeling as the demonstration of "good" behaviors in the clinical context; residents believed that learning from their role modeling "just happens" as long as learners are "watching"; residents did not equate role modeling with being a role model; and residents learned about role modeling from watching their positive and negative role models. While residents were aware that students and junior colleagues learned from their modeling, they were often not aware of role modeling as it was occurring; they also believed that learning from role modeling "just happens" and did not always see themselves as role models. Helping residents view effective role modeling as a deliberate process rather than something that "just happens" may improve clinical teaching across the continuum of medical education.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with linear AR model. Unlike previous studies that typically consider the threshold model specifications by using internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variables specification produce more accurate forecasts, indicating that specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data
Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.
2018-03-28
Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models were considered competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar
2016-02-15
Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.
Experiments in concept modeling for radiographic image reports.
Bell, D S; Pattison-Gordon, E; Greenes, R A
1994-01-01
OBJECTIVE: Development of methods for building concept models to support structured data entry and image retrieval in chest radiography. DESIGN: An organizing model for chest-radiographic reporting was built by analyzing manually a set of natural-language chest-radiograph reports. During model building, clinician-informaticians judged alternative conceptual structures according to four criteria: content of clinically relevant detail, provision for semantic constraints, provision for canonical forms, and simplicity. The organizing model was applied in representing three sample reports in their entirety. To explore the potential for automatic model discovery, the representation of one sample report was compared with the noun phrases derived from the same report by the CLARIT natural-language processing system. RESULTS: The organizing model for chest-radiographic reporting consists of 62 concept types and 17 relations, arranged in an inheritance network. The broadest types in the model include finding, anatomic locus, procedure, attribute, and status. Diagnoses are modeled as a subtype of finding. Representing three sample reports in their entirety added 79 narrower concept types. Some CLARIT noun phrases suggested valid associations among subtypes of finding, status, and anatomic locus. CONCLUSIONS: A manual modeling process utilizing explicitly stated criteria for making modeling decisions produced an organizing model that showed consistency in early testing. A combination of top-down and bottom-up modeling was required. Natural-language processing may inform model building, but algorithms that would replace manual modeling were not discovered. Further progress in modeling will require methods for objective model evaluation and tools for formalizing the model-building process. PMID:7719807
A strategy to establish Food Safety Model Repositories.
Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M
2015-07-02
Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.
The LUE data model for representation of agents and fields
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2017-04-01
Traditionally, agents-based and field-based modelling environments use different data models to represent the state of information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land-surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models in which class instances grouping properties are stored in collections are less efficient (execute slower) than models in which collections of properties are grouped. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based data modelling. This removes the barrier for writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields. We will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves less model parameters, than the conventional characterized source model, which itself is a simplified expression of actual earthquake source. In the proposed model, the spacio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and it is assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. Finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent is included in the model, which is inversely proportional to the length of the subevent. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered as an example of characterized source models. Although the pseudo point-source model involves much less model parameters than the super-asperity model, the errors associated with the former model were comparable to those for the latter model for velocity waveforms and envelopes. Furthermore, the errors associated with the former model were much smaller than those for the latter model for Fourier spectra. These evidences indicate the usefulness of the pseudo point-source model. Comparison of the observed (black) and synthetic (red) Fourier spectra. The spectra are the composition of two horizontal components and smoothed with a Parzen window with a band width of 0.05 Hz.
Model and Interoperability using Meta Data Annotations
NASA Astrophysics Data System (ADS)
David, O.
2011-12-01
Software frameworks and architectures are in need for meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format, consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings, however the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach for meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programing language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interfaces (API) calls is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside. While providing all those capabilities, a significant reduction in the size of the model source code was achieved. To support the benefit of annotations for a modeler, studies were conducted to evaluate the effectiveness of an annotation based framework approach with other modeling frameworks and libraries, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A typical hydrological model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks.
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of poor effect in modeling the large density BRDF measured data with five-parameter semi-empirical model, a refined statistical model of BRDF which is suitable for multi-class space target material modeling were proposed. The refined model improved the Torrance-Sparrow model while having the modeling advantages of five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, can approximate the intensity of the Fresnel reflectance phenomenon and the attenuation of the reflected light's brightness with the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. The genetic algorithm was used to invert the parameters of 11 different samples in the space target commonly used materials, and the fitting errors of all materials were below 6%, which were much lower than those of five-parameter model. The effect of the refined model is verified by comparing the fitting results of the three samples at different incident zenith angles in 0° azimuth angle. Finally, the three-dimensional modeling visualizations of these samples in the upper hemisphere space was given, in which the strength of the optical scattering of different materials could be clearly shown. It proved the good describing ability of the refined model at the material characterization as well.
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to mispecification of that model when there is substantial missingness in the outcome variable.
Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold
2016-01-01
Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include defining criteria for acceptance of models in a crop MME, exploring criteria for evaluating the degree of relatedness of models in a MME, studying the effect of number of models in the ensemble, development of a statistical model of model sampling, creation of a repository for MME results, studies of possible differential weighting of models in an ensemble, creation of single model ensembles based on sampling from the uncertainty distribution of parameter values or inputs specifically oriented toward uncertainty estimation, the creation of super ensembles that sample more than one source of uncertainty, the analysis of super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty and finally further investigation of the use of the multi-model mean or median as a predictor.
Assessing Ecosystem Model Performance in Semiarid Systems
NASA Astrophysics Data System (ADS)
Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.
2017-12-01
In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
Alcan, Toros; Ceylanoğlu, Cenk; Baysal, Bekir
2009-01-01
To investigate the effects of different storage periods of alginate impressions on digital model accuracy. A total of 105 impressions were taken from a master model with three different brands of alginates and were poured into stone models in five different storage periods. In all, 21 stone models were poured and immediately were scanned, and 21 digital models were prepared. The remaining 84 impressions were poured after 1, 2, 3, and 4 days, respectively. Five linear measurements were made by three researchers on the master model, the stone models, and the digital models. Time-dependent deformation of alginate impressions at different storage periods and the accuracy of traditional stone models and digital models were evaluated separately. Both the stone models and the digital models were highly correlated with the master model. Significant deformities in the alginate impressions were noted at different storage periods of 1 to 4 days. Alginate impressions of different brands also showed significant differences between each other on the first, third, and fourth days. Digital orthodontic models are as reliable as traditional stone models and probably will become the standard for orthodontic clinical use. Storing alginate impressions in sealed plastic bags for up to 4 days caused statistically significant deformation of alginate impressions, but the magnitude of these deformations did not appear to be clinically relevant and had no adverse effect on digital modeling.
Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu
2011-04-01
A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established by inducing UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Smica-P. Correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) when COD of samples is in the range of 0 to 405 mg/L. Sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2) and method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). Validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, average relative error of model 2 (4.25%) was lower than model 1 (5.00%), which indicated that the predictability of model 2 was better than that of model 1. Chemometrics-assisted spectrophotometry method did not need chemical reagents and digestion which were required in the conventional methods, and the testing time of the new method was significantly shorter than the conventional ones. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.
Improved two-equation k-omega turbulence models for aerodynamic flows
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
Two new versions of the k-omega two-equation turbulence model will be presented. The new Baseline (BSL) model is designed to give results similar to those of the original k-omega model of Wilcox, but without its strong dependency on arbitrary freestream values. The BSL model is identical to the Wilcox model in the inner 50 percent of the boundary-layer but changes gradually to the high Reynolds number Jones-Launder k-epsilon model (in a k-omega formulation) towards the boundary-layer edge. The new model is also virtually identical to the Jones-Lauder model for free shear layers. The second version of the model is called Shear-Stress Transport (SST) model. It is based on the BSL model, but has the additional ability to account for the transport of the principal shear stress in adverse pressure gradient boundary-layers. The model is based on Bradshaw's assumption that the principal shear stress is proportional to the turbulent kinetic energy, which is introduced into the definition of the eddy-viscosity. Both models are tested for a large number of different flowfields. The results of the BSL model are similar to those of the original k-omega model, but without the undesirable freestream dependency. The predictions of the SST model are also independent of the freestream values and show excellent agreement with experimental data for adverse pressure gradient boundary-layer flows.
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF-functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linear polarized radiation; and in similarity with e.g. the facet model depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy and also on metallic bead blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
SBML Level 3 package: Hierarchical Model Composition, Version 1 Release 3
Smith, Lucian P.; Hucka, Michael; Hoops, Stefan; Finney, Andrew; Ginkel, Martin; Myers, Chris J.; Moraru, Ion; Liebermeister, Wolfram
2017-01-01
Summary Constructing a model in a hierarchical fashion is a natural approach to managing model complexity, and offers additional opportunities such as the potential to re-use model components. The SBML Level 3 Version 1 Core specification does not directly provide a mechanism for defining hierarchical models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Hierarchical Model Composition package for SBML Level 3 adds the necessary features to SBML to support hierarchical modeling. The package enables a modeler to include submodels within an enclosing SBML model, delete unneeded or redundant elements of that submodel, replace elements of that submodel with element of the containing model, and replace elements of the containing model with elements of the submodel. In addition, the package defines an optional “port” construct, allowing a model to be defined with suggested interfaces between hierarchical components; modelers can chose to use these interfaces, but they are not required to do so and can still interact directly with model elements if they so chose. Finally, the SBML Hierarchical Model Composition package is defined in such a way that a hierarchical model can be “flattened” to an equivalent, non-hierarchical version that uses only plain SBML constructs, thus enabling software tools that do not yet support hierarchy to nevertheless work with SBML hierarchical models. PMID:26528566
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed on the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on an personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personnal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model is planned in the near future.
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
Potocki, J K; Tharp, H S
1993-01-01
Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.
Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models
Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.
2011-01-01
We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) Simulation and (b) lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.
EzGal: A Flexible Interface for Stellar Population Synthesis Models
NASA Astrophysics Data System (ADS)
Mancone, Conor L.; Gonzalez, Anthony H.
2012-06-01
We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old solar-metallicity models, with differences at the level. Similarly, the most problematic regime for SPS modeling is for young ages (≲2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsating AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.
System and method of designing models in a feedback loop
Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.
2017-02-14
A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
Comment on ``Glassy Potts model: A disordered Potts model without a ferromagnetic phase''
NASA Astrophysics Data System (ADS)
Carlucci, Domenico M.
1999-10-01
We report the equivalence of the ``glassy Potts model,'' recently introduced by Marinari et al. and the ``chiral Potts model'' investigated by Nishimori and Stephen. Both models do not exhibit any spontaneous magnetization at low temperature, differently from the ordinary glass Potts model. The phase transition of the glassy Potts model is easily interpreted as the spin-glass transition of the ordinary random Potts model.
NASA Astrophysics Data System (ADS)
Cannizzo, John K.
2017-01-01
We utilize the time dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems, Osaki's Thermal-Tidal Model, and the basic accretion disk limit cycle model. We explore a range in possible input parameters and model assumptions to delineate under what conditions each model may be preferred.
A novel microfluidic model can mimic organ-specific metastasis of circulating tumor cells.
Kong, Jing; Luo, Yong; Jin, Dong; An, Fan; Zhang, Wenyuan; Liu, Lilu; Li, Jiao; Fang, Shimeng; Li, Xiaojie; Yang, Xuesong; Lin, Bingcheng; Liu, Tingjiao
2016-11-29
A biomimetic microsystem might compensate costly and time-consuming animal metastatic models. Herein we developed a biomimetic microfluidic model to study cancer metastasis. Primary cells isolated from different organs were cultured on the microlfuidic model to represent individual organs. Breast and salivary gland cancer cells were driven to flow over primary cell culture chambers, mimicking dynamic adhesion of circulating tumor cells (CTCs) to endothelium in vivo. These flowing artificial CTCs showed different metastatic potentials to lung on the microfluidic model. The traditional nude mouse model of lung metastasis was performed to investigate the physiological similarity of the microfluidic model to animal models. It was found that the metastatic potential of different cancer cells assessed by the microfluidic model was in agreement with that assessed by the nude mouse model. Furthermore, it was demonstrated that the metastatic inhibitor AMD3100 inhibited lung metastasis effectively in both the microfluidic model and the nude mouse model. Then the microfluidic model was used to mimick liver and bone metastasis of CTCs and confirm the potential for research of multiple-organ metastasis. Thus, the metastasis of CTCs to different organs was reconstituted on the microfluidic model. It may expand the capabilities of traditional cell culture models, providing a low-cost, time-saving, and rapid alternative to animal models.
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by the complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, i.e., Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (i.e., SHIP (high) model, SHIP (middle) model, SHIP (low) model, Philip model and Parlange model) were compared based on numerical experiments and soil column experiments. In numerical experiments, SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash and Sutcliffe efficiency were greater than 0.83. Additionally, in soil column experiments, infiltration rate fluctuated in a range because of non-uniform initial water content. SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. According to the robustness of solutions and the coverage of fluctuation range of infiltration rate, SHIP model can be integrated into hydrologic models to simulate SHR infiltration process and benefit the flood forecast.
Mutant mice: experimental organisms as materialised models in biomedicine.
Huber, Lara; Keuck, Lara K
2013-09-01
Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models-acknowledging their status as living beings and as epistemological tools-necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mice models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A
2015-01-01
Characterization and modelling of primary settlers have been neglected pretty much to date. However, whole plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on the improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on particle settling velocity distribution is proposed which is then applied for the development of an improved primary settler model as well as for its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck), showing to be better than the first one and statistically comparable to the second one, but with easier calibration thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet induced by primary settling is investigated, showing that typical wastewater fractions are modified by primary treatment. As they clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole plant modelling.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions are changed continuously and these phenomenons generate errors on flood forecasting models and will lead to get unrealistic results. Therefore, to overcome these difficulties, a concept called model updating is proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. Basically, in terms of flood forecasting process, errors propagated from the rainfall-runoff model are enumerated as the main source of uncertainty in the forecasting model. Hence, to dominate the exciting errors, several methods have been proposed by researchers to update the rainfall-runoff models such as parameter updating, model state updating, and correction on input data. The current study focuses on investigations about the ability of rainfall-runoff model parameters to cope with three types of existing errors, timing, shape and volume as the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate its parameters for its use in model updating to cope with the stated errors. Investigation about ten events proves that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure Activity Relationship (QSAR) modeling has been traditionally applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered if only in hypothetical sense in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of the modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as predictive, as opposed to evaluative, modeling approach.
Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V
2014-01-01
Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees responses but as all interviewees were represented in the analysis no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Marzilli Ericson, Keith M.; White, John Myles; Laibson, David; Cohen, Jonathan D.
2015-01-01
Heuristic models have been proposed for many domains of choice. We compare heuristic models of intertemporal choice, which can account for many of the known intertemporal choice anomalies, to discounting models. We conduct an out-of-sample, cross-validated comparison of intertemporal choice models. Heuristic models outperform traditional utility discounting models, including models of exponential and hyperbolic discounting. The best performing models predict choices by using a weighted average of absolute differences and relative (percentage) differences of the attributes of the goods in a choice set. We conclude that heuristic models explain time-money tradeoff choices in experiments better than utility discounting models. PMID:25911124
ASTP ranging system mathematical model
NASA Technical Reports Server (NTRS)
Ellis, M. R.; Robinson, L. H.
1973-01-01
A mathematical model is presented of the VHF ranging system to analyze the performance of the Apollo-Soyuz test project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices of the five study tasks are presented: early late gate model development, unlock probability development, system error model development, probability of acquisition and model development, and math model validation testing.
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †
Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob
2017-01-01
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud free, over land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara dessert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.
Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob
2017-02-08
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Keating; W.Statham
2004-02-12
The purpose of this model report is to provide documentation of the conceptual and mathematical model (ASHPLUME) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. The ASHPLUME conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The ASHPLUME mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the groundmore » surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report will improve and clarify the previous documentation of the ASHPLUME mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model.« less
Model-Based Reasoning in Upper-division Lab Courses
NASA Astrophysics Data System (ADS)
Lewandowski, Heather
2015-05-01
Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples from AMO physics include everything from the Bohr model of the hydrogen atom to the Bose-Hubbard model of interacting bosons in a lattice. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory where measurements of real phenomena intersect with theoretical models, leading to refinement of models and experimental apparatus. However, experimental physicists use models in complex ways and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such the modeling of the measurement tools, modeling ``black boxes,'' and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and guide students to effective methods and strategies.
2013-01-01
Background The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models incorporate now sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Methods We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. Results We find that many influenza pandemics models rely on parameters from previous modelling studies, models are rarely validated using observed data and are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. Conclusions We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results, and transparency in their assessment and comparison. PMID:23651557
Model Selection in Systems Biology Depends on Experimental Design
Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.
2014-01-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483
Model selection in systems biology depends on experimental design.
Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H
2014-06-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.
2016-12-01
Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian with error model method gives significantly more accurate prediction along with reasonable credible intervals.
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country which is rich in natural resources and one of it is a gold. Gold has already become an important national commodity. This study is conducted to determine a model that can be well fitted with the gold production in Malaysia from the year 1995-2010. Five nonlinear models are presented in this study which are Logistic model, Gompertz, Richard, Weibull and Chapman-Richard model. These model are used to fit the cumulative gold production in Malaysia. The best model is then selected based on the model performance. The performance of the fitted model is measured by sum squares error, root mean squares error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study has found that a Weibull model is shown to have significantly outperform compare to the other models. To confirm that Weibull is the best model, the latest data are fitted to the model. Once again, Weibull model gives the lowest readings at all types of measurement error. We can concluded that the future gold production in Malaysia can be predicted according to the Weibull model and this could be important findings for Malaysia to plan their economic activities.
Strategic directions for agent-based modeling: avoiding the YAAWN syndrome.
O'Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris
In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances - in terms of model complexity, model evaluation, and model structure - can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from 'yet another model' to doing better science with models.
A Two-Zone Multigrid Model for SI Engine Combustion Simulation Using Detailed Chemistry
Ge, Hai-Wen; Juneja, Harmit; Shi, Yu; ...
2010-01-01
An efficient multigrid (MG) model was implemented for spark-ignited (SI) engine combustion modeling using detailed chemistry. The model is designed to be coupled with a level-set-G-equation model for flame propagation (GAMUT combustion model) for highly efficient engine simulation. The model was explored for a gasoline direct-injection SI engine with knocking combustion. The numerical results using the MG model were compared with the results of the original GAMUT combustion model. A simpler one-zone MG model was found to be unable to reproduce the results of the original GAMUT model. However, a two-zone MG model, which treats the burned and unburned regionsmore » separately, was found to provide much better accuracy and efficiency than the one-zone MG model. Without loss in accuracy, an order of magnitude speedup was achieved in terms of CPU and wall times. To reproduce the results of the original GAMUT combustion model, either a low searching level or a procedure to exclude high-temperature computational cells from the grouping should be applied to the unburned region, which was found to be more sensitive to the combustion model details.« less
Statistical considerations on prognostic models for glioma
Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.
2016-01-01
Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Ting, Eric; Nguyen, Daniel; Dao, Tung; Trinh, Khanh
2013-01-01
This paper presents a coupled vortex-lattice flight dynamic model with an aeroelastic finite-element model to predict dynamic characteristics of a flexible wing transport aircraft. The aircraft model is based on NASA Generic Transport Model (GTM) with representative mass and stiffness properties to achieve a wing tip deflection about twice that of a conventional transport aircraft (10% versus 5%). This flexible wing transport aircraft is referred to as an Elastically Shaped Aircraft Concept (ESAC) which is equipped with a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for active wing shaping control for drag reduction. A vortex-lattice aerodynamic model of the ESAC is developed and is coupled with an aeroelastic finite-element model via an automated geometry modeler. This coupled model is used to compute static and dynamic aeroelastic solutions. The deflection information from the finite-element model and the vortex-lattice model is used to compute unsteady contributions to the aerodynamic force and moment coefficients. A coupled aeroelastic-longitudinal flight dynamic model is developed by coupling the finite-element model with the rigid-body flight dynamic model of the GTM.
An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements
NASA Astrophysics Data System (ADS)
Zhai, Zhongxu; Blanton, Michael; Slosar, Anže; Tinker, Jeremy
2017-12-01
We compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H 0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
Models and Measurements Intercomparison 2
NASA Technical Reports Server (NTRS)
Park, Jae H. (Editor); Ko, Malcolm K. W. (Editor); Jackman, Charles H. (Editor); Plumb, R. Alan (Editor); Kaye, Jack A. (Editor); Sage, Karen H. (Editor)
1999-01-01
Models and Measurement Intercomparison II (MM II) summarizes the intercomparison of results from model simulations and observations of stratospheric species. Representatives from twenty-three modeling groups using twenty-nine models participated in these MM II exercises between 1996 and 1999. Twelve of the models were two- dimensional zonal-mean models while seventeen were three-dimensional models. This was an international effort as seven were from outside the United States. Six transport experiments and five chemistry experiments were designed for various models. Models participating in the transport experiments performed simulations of chemically inert tracers providing diagnostics for transport. The chemistry experiments involved simulating the distributions of chemically active trace cases including ozone. The model run conditions for dynamics and chemistry were prescribed in order to minimize the factors that caused differences in the models. The report includes a critical review of the results by the participants and a discussion of the causes of differences between modeled and measured results as well as between results from different models, A sizable effort went into preparation of the database of the observations. This included a new climatology for ozone. The report should help in evaluating the results from various predictive models for assessing humankind perturbations of the stratosphere.
A Logical Account of Diagnosis with Multiple Theories
NASA Technical Reports Server (NTRS)
Pandurang, P.; Lum, Henry Jr. (Technical Monitor)
1994-01-01
Model-based diagnosis is a powerful, first-principles approach to diagnosis. The primary drawback with model-based diagnosis is that it is based on a system model, and this model might be inappropriate. The inappropriateness of models usually stems from the fundamental tradeoff between completeness and efficiency. Recently, Struss has developed an elegant proposal for diagnosis with multiple models. Struss characterizes models as relations and develops a precise notion of abstraction. He defines relations between models and analyzes the effect of a model switch on the space of possible diagnoses. In this paper we extend Struss's proposal in three ways. First, our account of diagnosis with multiple models is based on representing models as more expressive first-order theories, rather than as relations. A key technical contribution is the use of a general notion of abstraction based on interpretations between theories. Second, Struss conflates component modes with models, requiring him to define models relations such as choices which result in non-relational models. We avoid this problem by differentiating component modes from models. Third, we present a more general account of simplifications that correctly handles situations where the simplification contradicts the base theory.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
A real success in numerical modeling of dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. The success in numerical modeling is based on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and move then to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true and erroneous solutions to the geodynamic problem, especially when your problem is complex enough. (iii) Test your model versus analytical and asymptotic solutions, simple 2D and 3D model examples. Develop benchmark analysis of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore the testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you can solve an improperly-posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as less as possible tuning model variables. Already two tuning variables give enough possibilities to constrain your model well enough with respect to observations. The data fitting sometimes is quite attractive and can take you far from a principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables are greater than two, test carefully the effect of each of the variables on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never put the aim to reach a great accuracy: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should be a numerical model? A model which images any detail of the reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists, who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real Earth dynamics, but we should try to model the dynamics such a way to simulate basic geophysical processes and phenomena. Does a particular model have a predictive power? Each numerical model has a predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in believe that they describe dynamic processes of the Earth. Hence a numerical model predicts dynamics of the Earth as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. The inverse modeling can allow to test geodynamic models forward in time using restored (from present-day observations) initial conditions instead of unknown conditions.
Predictive models of radiative neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julio, J., E-mail: julio@lipi.go.id
2016-06-21
We discuss two models of radiative neutrino mass generation. The first model features one–loop Zee model with Z{sub 4} symmetry. The second model is the two–loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.
USDA-ARS?s Scientific Manuscript database
To improve climate change impact estimates, multi-model ensembles (MMEs) have been suggested. MMEs enable quantifying model uncertainty, and their medians are more accurate than that of any single model when compared with observations. However, multi-model ensembles are costly to execute, so model i...
A Comparative Analysis on Models of Higher Education Massification
ERIC Educational Resources Information Center
Pan, Maoyuan; Luo, Dan
2008-01-01
Four financial models of massification of higher education are discussed in this essay. They are American model, Western European model, Southeast Asian and Latin American model and the transition countries model. The comparison of the four models comes to the conclusion that taking advantage of nongovernmental funding is fundamental to dealing…
A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.
ERIC Educational Resources Information Center
Patterson, G. R.; Yoerger, K.
A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…
Frank R., III Thompson
2009-01-01
Habitat models are widely used in bird conservation planning to assess current habitat or populations and to evaluate management alternatives. These models include species-habitat matrix or database models, habitat suitability models, and statistical models that predict abundance. While extremely useful, these approaches have some limitations.
ERIC Educational Resources Information Center
Cheng, Meng-Fei; Lin, Jang-Long
2015-01-01
Understanding the nature of models and engaging in modeling practice have been emphasized in science education. However, few studies discuss the relationships between students' views of scientific models and their ability to develop those models. Hence, this study explores the relationship between students' views of scientific models and their…
Integrated research in constitutive modelling at elevated temperatures, part 2
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Allen, D. H.
1986-01-01
Four current viscoplastic models are compared experimentally with Inconel 718 at 1100 F. A series of tests were performed to create a sufficient data base from which to evaluate material constants. The models used include Bodner's anisotropic model; Krieg, Swearengen, and Rhode's model; Schmidt and Miller's model; and Walker's exponential model.
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
NASA Astrophysics Data System (ADS)
Jang, S.; Moon, Y.; Na, H.
2012-12-01
We have made a comparison of CME-associated shock arrival times at the earth based on the WSA-ENLIL model with three cone models using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters from Michalek et al. (2007) as well as their associated interplanetary (IP) shocks. For this study we consider three different cone models (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine CME cone parameters (radial velocity, angular width and source location), which are used for input parameters of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the elliptical cone model is 10 hours, which is about 2 hours smaller than those of the other models. However, this value is still larger than that (8.7 hours) of an empirical model by Kim et al. (2007). We are investigating several possibilities on relatively large errors of the WSA-ENLIL cone model, which may be caused by CME-CME interaction, background solar wind speed, and/or CME density enhancement.
Modeling of the radiation belt megnetosphere in decisional timeframes
Koller, Josef; Reeves, Geoffrey D; Friedel, Reiner H.W.
2013-04-23
Systems and methods for calculating L* in the magnetosphere with essentially the same accuracy as with a physics based model at many times the speed by developing a surrogate trained to be a surrogate for the physics-based model. The trained model can then beneficially process input data falling within the training range of the surrogate model. The surrogate model can be a feedforward neural network and the physics-based model can be the TSK03 model. Operatively, the surrogate model can use parameters on which the physics-based model was based, and/or spatial data for the location where L* is to be calculated. Surrogate models should be provided for each of a plurality of pitch angles. Accordingly, a surrogate model having a closed drift shell can be used from the plurality of models. The feedforward neural network can have a plurality of input-layer units, there being at least one input-layer unit for each physics-based model parameter, a plurality of hidden layer units and at least one output unit for the value of L*.
Cowell, Rosemary A; Bussey, Timothy J; Saksida, Lisa M
2012-11-01
We describe how computational models can be useful to cognitive and behavioral neuroscience, and discuss some guidelines for deciding whether a model is useful. We emphasize that because instantiating a cognitive theory as a computational model requires specification of an explicit mechanism for the function in question, it often produces clear and novel behavioral predictions to guide empirical research. However, computational modeling in cognitive and behavioral neuroscience remains somewhat rare, perhaps because of misconceptions concerning the use of computational models (in particular, connectionist models) in these fields. We highlight some common misconceptions, each of which relates to an aspect of computational models: the problem space of the model, the level of biological organization at which the model is formulated, and the importance (or not) of biological plausibility, parsimony, and model parameters. Careful consideration of these aspects of a model by empiricists, along with careful delineation of them by modelers, may facilitate communication between the two disciplines and promote the use of computational models for guiding cognitive and behavioral experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ting, Eric; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents a static aeroelastic model and longitudinal trim model for the analysis of a flexible wing transport aircraft. The static aeroelastic model is built using a structural model based on finite-element modeling and coupled to an aerodynamic model that uses vortex-lattice solution. An automatic geometry generation tool is used to close the loop between the structural and aerodynamic models. The aeroelastic model is extended for the development of a three degree-of-freedom longitudinal trim model for an aircraft with flexible wings. The resulting flexible aircraft longitudinal trim model is used to simultaneously compute the static aeroelastic shape for the aircraft model and the longitudinal state inputs to maintain an aircraft trim state. The framework is applied to an aircraft model based on the NASA Generic Transport Model (GTM) with wing structures allowed to flexibly deformed referred to as the Elastically Shaped Aircraft Concept (ESAC). The ESAC wing mass and stiffness properties are based on a baseline "stiff" values representative of current generation transport aircraft.
Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara
2016-05-09
A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.
Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.
2013-01-01
The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time tRZ′ and carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for an accurate expression of the retention behavior of n-alkanes and model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear model. The accuracy of the QE model was approximately 2–6 times better than the DE model and 18–540 times better than the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
Modelling daily water temperature from air temperature for the Missouri River.
Zhu, Senlin; Nyarko, Emmanuel Karlo; Hadzima-Nyarko, Marijana
2018-01-01
The bio-chemical and physical characteristics of a river are directly affected by water temperature, which thereby affects the overall health of aquatic ecosystems. It is a complex problem to accurately estimate water temperature. Modelling of river water temperature is usually based on a suitable mathematical model and field measurements of various atmospheric factors. In this article, the air-water temperature relationship of the Missouri River is investigated by developing three different machine learning models (Artificial Neural Network (ANN), Gaussian Process Regression (GPR), and Bootstrap Aggregated Decision Trees (BA-DT)). Standard models (linear regression, non-linear regression, and stochastic models) are also developed and compared to machine learning models. Analyzing the three standard models, the stochastic model clearly outperforms the standard linear model and nonlinear model. All the three machine learning models have comparable results and outperform the stochastic model, with GPR having slightly better results for stations No. 2 and 3, while BA-DT has slightly better results for station No. 1. The machine learning models are very effective tools which can be used for the prediction of daily river temperature.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm(2) in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial.
Krijkamp, Eline M; Alarid-Escudero, Fernando; Enns, Eva A; Jalal, Hawre J; Hunink, M G Myriam; Pechlivanoglou, Petros
2018-04-01
Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to perform microsimulation models more efficiently than software commonly used for decision modeling, incorporate statistical analyses within decision models, and produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to build microsimulation models in R and illustrate the use of this guide on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization solutions.
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most of the existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine load model's effectiveness and accuracy. Statistical analysis, instead of visual check, quantifies the load model's accuracy, and provides a confidence level of the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as a guidance to systematically examine load models for utility engineers and researchers. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
NASA Astrophysics Data System (ADS)
Benettin, G.; Pasquali, S.; Ponno, A.
2018-05-01
FPU models, in dimension one, are perturbations either of the linear model or of the Toda model; perturbations of the linear model include the usual β -model, perturbations of Toda include the usual α +β model. In this paper we explore and compare two families, or hierarchies, of FPU models, closer and closer to either the linear or the Toda model, by computing numerically, for each model, the maximal Lyapunov exponent χ . More precisely, we consider statistically typical trajectories and study the asymptotics of χ for large N (the number of particles) and small ɛ (the specific energy E / N), and find, for all models, asymptotic power laws χ ˜eq Cɛ ^a, C and a depending on the model. The asymptotics turns out to be, in general, rather slow, and producing accurate results requires a great computational effort. We also revisit and extend the analytic computation of χ introduced by Casetti, Livi and Pettini, originally formulated for the β -model. With great evidence the theory extends successfully to all models of the linear hierarchy, but not to models close to Toda.