Science.gov

Sample records for excursion set model

  1. Excursion set peaks: a self-consistent model of dark halo abundances and clustering

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Sheth, Ravi K.; Desjacques, Vincent

    2013-05-01

    We describe how to extend the excursion set peaks framework so that its predictions of dark halo abundances and clustering can be compared directly with simulations. These extensions include: a halo mass definition which uses the TopHat filter in real space; the mean dependence of the critical density for collapse δc on halo mass m; and the scatter around this mean value. All three of these are motivated by the physics of triaxial rather than spherical collapse. A comparison of the resulting mass function with N-body results shows that, if one uses δc(m) and its scatter as determined from simulations, then all three are necessary ingredients for obtaining ˜10 per cent accuracy. For example, assuming a constant value of δc with no scatter, as motivated by the physics of spherical collapse, leads to many more massive haloes than seen in simulations. The same model is also in excellent agreement with N-body results for the linear halo bias, especially at the high mass end where the traditional peak-background split argument applied to the mass function fit is known to underpredict the measured bias by ˜10 per cent. In the excursion set language, our model is about walks centred on special positions (peaks) in the initial conditions - we discuss what it implies for the usual calculation in which all walks contribute to the statistics.

  2. Peaks theory and the excursion set approach

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Sheth, Ravi K.

    2012-11-01

    We describe a model of dark matter halo abundances and clustering which combines the two most widely used approaches to this problem: that based on peaks and the other based on excursion sets. Our approach can be thought of as addressing the cloud-in-cloud problem for peaks and/or modifying the excursion set approach so that it averages over a special subset, rather than all possible walks. In this respect, it seeks to account for correlations between steps in the walk as well as correlations between walks. We first show how the excursion set and peaks models can be written in the same formalism, and then use this correspondence to write our combined excursion set peaks model. We then give simple expressions for the mass function and bias, showing that even the linear halo bias factor is predicted to be k-dependent as a consequence of the non-locality associated with the peak constraint. At large masses, our model has little or no need to rescale the variable δc from the value associated with spherical collapse, and suggests a simple explanation for why the linear halo bias factor appears to lie above that based on the peak-background split at high masses when such a rescaling is assumed. Although we have concentrated on peaks, our analysis is more generally applicable to other traditionally single-scale analyses of large-scale structure.

  3. Excursion-Set-Mediated Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    The excursion-set-mediated genetic algorithm (ESMGA) is an embodiment of a method of searching for and optimizing computerized mathematical models. It incorporates powerful search and optimization techniques based on concepts analogous to natural selection and the laws of genetics. In comparison with other genetic algorithms, this one achieves a stronger condition for implicit parallelism. Each cycle includes three stages of operations, analogous to a biological generation.

  4. An Excursion Set Model of the Cosmic Web: the Abundance of Sheets, Filaments And Halos

    SciTech Connect

    Shen, Jiajian; Abel, Tom; Mo, Houjun; Sheth, Ravi

    2006-01-11

    We discuss an analytic approach for modeling structure formation in sheets, filaments and knots. This is accomplished by combining models of triaxial collapse with the excursion set approach: sheets are defined as objects which have collapsed along only one axis, filaments have collapsed along two axes, and halos are objects in which triaxial collapse is complete. In the simplest version of this approach, which we develop here, large scale structure shows a clear hierarchy of morphologies: the mass in large-scale sheets is partitioned up among lower mass filaments, which themselves are made up of still lower mass halos. Our approach provides analytic estimates of the mass fraction in sheets, filaments and halos, and its evolution, for any background cosmological model and any initial fluctuation spectrum. In the currently popular ΛCDM model, our analysis suggests that more than 99% of the mass in sheets, and 72% of the mass in filaments, is stored in objects more massive than 10^10 M⊙ at the present time. For halos, this number is only 46%. Our approach also provides analytic estimates of how halo abundances at any given time correlate with the morphology of the surrounding large-scale structure, and how halo evolution correlates with the morphology of large scale structure.
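
    The morphological hierarchy described above can be illustrated with a minimal sketch (an illustration of the general idea, not code from the paper): classify a region by counting how many eigenvalues of the smoothed linear deformation tensor exceed a collapse threshold, so that 0, 1, 2 and 3 collapsed axes correspond to voids, sheets, filaments and halos. The threshold value below is a placeholder assumption.

      import numpy as np

      def classify_web_element(eigenvalues, lambda_c=0.3):
          """Count collapsed axes: 0 -> void, 1 -> sheet, 2 -> filament, 3 -> halo.

          eigenvalues: the three eigenvalues of the linear deformation (tidal)
          tensor smoothed on the scale of interest; lambda_c is an illustrative
          collapse threshold, not a value taken from the paper.
          """
          n_collapsed = int(np.sum(np.asarray(eigenvalues) > lambda_c))
          return ["void", "sheet", "filament", "halo"][n_collapsed]

      # Example: only one axis above threshold, so the region is classified as a sheet
      print(classify_web_element([0.8, 0.1, -0.4]))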

  5. An excursion-set model for the structure of giant molecular clouds and the interstellar medium

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2012-07-01

    The interstellar medium (ISM) is governed by supersonic turbulence on a range of scales. We use this simple fact to develop a rigorous excursion-set model for the formation, structure and time evolution of dense gas structures [e.g. giant molecular clouds (GMCs), massive clumps and cores]. Supersonic turbulence drives the density distribution in non-self-gravitating regions to a lognormal with dispersion increasing with Mach number. We generalize this to include scales ≳h (the disc scale-height), and use it to construct the statistical properties of the density field smoothed on a scale R. We then compare conditions for self-gravitating collapse including thermal, turbulent and rotational (disc shear) support (reducing to the Jeans/Toomre criterion on small/large scales). We show that this becomes a well-defined barrier crossing problem. As such, an exact 'bound object mass function' can be derived, from scales of the sonic length to well above the disc Jeans mass. This agrees remarkably well with observed GMC mass functions in the Milky Way and other galaxies, with the only inputs being the total mass and size of the galaxies (to normalize the model). This explains the cut-off of the mass function and its power-law slope (close to, but slightly shallower than, -2). The model also predicts the linewidth-size and size-mass relations of clouds and the dependence of residuals from these relations on mean surface density/pressure, in excellent agreement with observations. We use this to predict the spatial correlation function/clustering of clouds and, by extension, star clusters; these also agree well with observations. We predict the size/mass function of 'bubbles' or 'holes' in the ISM, and show that this can account for the observed H I hole distribution without requiring any local feedback/heating sources. We generalize the model to construct time-dependent 'merger/fragmentation trees' which can be used to follow cloud evolution and construct semi
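
    As a minimal illustration of the starting point of this model (the functional form is the standard lognormal for isothermal supersonic turbulence; the forcing parameter b below is an assumption, not a value from the paper), the volume-weighted density PDF can be written with a variance that grows with Mach number as ln(1 + b²M²):

      import numpy as np

      def lognormal_density_pdf(rho, mach, b=0.5, rho_mean=1.0):
          """Lognormal PDF of gas density for supersonic isothermal turbulence.

          The variance of ln(rho) grows with Mach number as ln(1 + b^2 M^2);
          b ~ 0.5 is an illustrative forcing parameter (assumption).
          """
          s = np.log(rho / rho_mean)
          var = np.log(1.0 + b**2 * mach**2)
          mean = -0.5 * var                     # keeps the mean density fixed at rho_mean
          return np.exp(-(s - mean)**2 / (2.0 * var)) / (rho * np.sqrt(2.0 * np.pi * var))

      # A higher Mach number broadens the PDF, putting more mass at high densities
      for M in (1.0, 5.0, 10.0):
          print(M, lognormal_density_pdf(10.0, M))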

  6. Non-Gaussianity and Excursion Set Theory: Halo Bias

    SciTech Connect

    Adshead, Peter; Baxter, Eric J.; Dodelson, Scott; Lidz, Adam

    2012-09-01

    We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recapture the familiar result that the bias scales as $k^{-2}$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.

  7. Halo bias in the excursion set approach with correlated steps

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Sheth, Ravi K.

    2012-01-01

    In the excursion set approach, halo abundances and clustering are closely related. This relation is exploited in many modern methods which seek to constrain cosmological parameters on the basis of the observed spatial distribution of clusters. However, to obtain analytic expressions for these quantities, most excursion-set-based predictions ignore the fact that, although different k modes in the initial Gaussian field are uncorrelated, this is not true in real space: the values of the density field at a given spatial position, when smoothed on different real-space scales, are correlated in a non-trivial way. We show that when the excursion set approach is extended to include such correlations, then one must be careful to account for the fact that the associated prediction for halo bias is explicitly a real-space quantity. Therefore, care must be taken while comparing the predictions of this approach with measurements in simulations, which are typically made in Fourier space. We show how to correct for this effect, and demonstrate that ignorance of this effect in recent analyses of halo bias has led to incorrect conclusions and biased constraints.

  8. Scale-dependent halo bias in the excursion set approach

    NASA Astrophysics Data System (ADS)

    Musso, Marcello; Paranjape, Aseem; Sheth, Ravi K.

    2012-12-01

    If one accounts for correlations between scales, then non-local, k-dependent halo bias is part and parcel of the excursion set approach, and hence of halo model predictions for galaxy bias. We present an analysis that distinguishes between a number of different effects, each of which contributes to scale-dependent bias in real space. We show how to isolate these effects and remove the scale dependence, order by order, by cross-correlating the halo field with suitably transformed versions of the mass field. These transformations may be thought of as simple one-point, two-scale measurements that allow one to estimate quantities which are usually constrained using n-point statistics. As part of our analysis, we present a simple analytic approximation for the first-crossing distribution of walks with correlated steps which are constrained to pass through a specified point, and demonstrate its accuracy. Although we concentrate on non-linear, non-local bias with respect to a Gaussian random field, we show how to generalize our analysis to more general fields.

  9. Excursion sets and non-Gaussian void statistics

    NASA Astrophysics Data System (ADS)

    D'Amico, Guido; Musso, Marcello; Noreña, Jorge; Paranjape, Aseem

    2011-01-01

    Primordial non-Gaussianity (NG) affects the large scale structure (LSS) of the Universe by leaving an imprint on the distribution of matter at late times. Much attention has been focused on using the distribution of collapsed objects (i.e. dark matter halos and the galaxies and galaxy clusters that reside in them) to probe primordial NG. An equally interesting and complementary probe however is the abundance of extended underdense regions or voids in the LSS. The calculation of the abundance of voids using the excursion set formalism in the presence of primordial NG is subject to the same technical issues as the one for halos, which were discussed e.g. in Ref. [G. D’Amico, M. Musso, J. Noreña, and A. Paranjape, arXiv:1005.1203.]. However, unlike the excursion set problem for halos which involved random walks in the presence of one barrier δc, the void excursion set problem involves two barriers δv and δc. This leads to a new complication introduced by what is called the “void-in-cloud” effect discussed in the literature, which is unique to the case of voids. We explore a path integral approach which allows us to carefully account for all these issues, leading to a rigorous derivation of the effects of primordial NG on void abundances. The void-in-cloud issue, in particular, makes the calculation conceptually rather different from the one for halos. However, we show that its final effect can be described by a simple yet accurate approximation. Our final void abundance function is valid on larger scales than the expressions of other authors, while being broadly in agreement with those expressions on smaller scales.

  10. Excursion sets and non-Gaussian void statistics

    SciTech Connect

    D'Amico, Guido; Musso, Marcello; Paranjape, Aseem; Norena, Jorge

    2011-01-15

    Primordial non-Gaussianity (NG) affects the large scale structure (LSS) of the Universe by leaving an imprint on the distribution of matter at late times. Much attention has been focused on using the distribution of collapsed objects (i.e. dark matter halos and the galaxies and galaxy clusters that reside in them) to probe primordial NG. An equally interesting and complementary probe however is the abundance of extended underdense regions or voids in the LSS. The calculation of the abundance of voids using the excursion set formalism in the presence of primordial NG is subject to the same technical issues as the one for halos, which were discussed e.g. in Ref. [G. D'Amico, M. Musso, J. Norena, and A. Paranjape, arXiv:1005.1203.]. However, unlike the excursion set problem for halos which involved random walks in the presence of one barrier δc, the void excursion set problem involves two barriers δv and δc. This leads to a new complication introduced by what is called the 'void-in-cloud' effect discussed in the literature, which is unique to the case of voids. We explore a path integral approach which allows us to carefully account for all these issues, leading to a rigorous derivation of the effects of primordial NG on void abundances. The void-in-cloud issue, in particular, makes the calculation conceptually rather different from the one for halos. However, we show that its final effect can be described by a simple yet accurate approximation. Our final void abundance function is valid on larger scales than the expressions of other authors, while being broadly in agreement with those expressions on smaller scales.
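
    A minimal Monte Carlo sketch of the two-barrier ('void-in-cloud') counting problem for walks with uncorrelated, sharp-k steps; the barrier heights, step size and walk count below are illustrative assumptions rather than the values used in either paper:

      import numpy as np

      rng = np.random.default_rng(0)
      delta_c, delta_v = 1.686, -2.7        # illustrative halo and void barriers
      dS, S_max, n_walks = 0.01, 10.0, 20000
      n_steps = int(S_max / dS)

      void_count = 0
      for _ in range(n_walks):
          delta = np.cumsum(rng.normal(0.0, np.sqrt(dS), n_steps))
          hit_c = np.argmax(delta > delta_c) if np.any(delta > delta_c) else n_steps
          hit_v = np.argmax(delta < delta_v) if np.any(delta < delta_v) else n_steps
          # A void is counted only if delta_v is crossed before delta_c ever is
          if hit_v < hit_c:
              void_count += 1

      print("fraction of walks forming voids:", void_count / n_walks)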

  11. The excursion set approach in non-Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Musso, Marcello; Sheth, Ravi K.

    2014-04-01

    Insight into a number of interesting questions in cosmology can be obtained by studying the first crossing distributions of physically motivated barriers by random walks with correlated steps: higher mass objects are associated with walks that cross the barrier in fewer steps. We write the first crossing distribution as a formal series, ordered by the number of times a walk upcrosses the barrier. Since the fraction of walks with many upcrossings is negligible if the walk has not taken many steps, the leading order term in this series is the most relevant for understanding the massive objects of most interest in cosmology. For walks associated with Gaussian random fields, this first term only requires knowledge of the bivariate distribution of the walk height and slope, and provides an excellent approximation to the first crossing distribution for all barriers and smoothing filters of current interest. We show that this simplicity survives when extending the approach to the case of non-Gaussian random fields. For non-Gaussian fields which are obtained by deterministic transformations of a Gaussian, the first crossing distribution is simply related to that for Gaussian walks crossing a suitably rescaled barrier. Our analysis shows that this is a useful way to think of the generic case as well. Although our study is motivated by the possibility that the primordial fluctuation field was non-Gaussian, our results are general. In particular, they do not assume the non-Gaussianity is small, so they may be viewed as the solution to an excursion set analysis of the late-time, non-linear fluctuation field rather than the initial one. They are also useful for models in which the barrier height is determined by quantities other than the initial density, since most other physically motivated variables (such as the shear) are usually stochastic and non-Gaussian. We use the Lognormal transformation to illustrate some of our arguments.
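
    For reference, the leading (single-upcrossing) term described here can be written, for a barrier B(S) and a walk δ(S) with slope δ' = dδ/dS, roughly as (our restatement of the standard up-crossing counting argument; the notation is assumed rather than copied from the paper):

      f(S) \simeq \int_{B'(S)}^{\infty} \mathrm{d}\delta' \, \bigl(\delta' - B'(S)\bigr) \, p\bigl(\delta = B(S), \delta'; S\bigr)

    For Gaussian walks this requires only the bivariate distribution of the walk height and slope on the scale S, which is the simplicity referred to in the abstract.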

  12. STATISTICS OF DARK MATTER HALOS FROM THE EXCURSION SET APPROACH

    SciTech Connect

    Lapi, A.; Salucci, P.; Danese, L.

    2013-08-01

    We exploit the excursion set approach in integral formulation to derive novel, accurate analytic approximations of the unconditional and conditional first crossing distributions for random walks with uncorrelated steps and general shapes of the moving barrier; we find the corresponding approximations of the unconditional and conditional halo mass functions for cold dark matter (DM) power spectra to represent very well the outcomes of state-of-the-art cosmological N-body simulations. In addition, we apply these results to derive, and confront with simulations, other quantities of interest in halo statistics, including the rates of halo formation and creation, the average halo growth history, and the halo bias. Finally, we discuss how our approach and main results change when considering random walks with correlated instead of uncorrelated steps, and warm instead of cold DM power spectra.
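
    As a sanity check of the simplest limit treated in such calculations (uncorrelated, sharp-k steps and a constant barrier, for which the first-crossing distribution is known in closed form), the short Monte Carlo sketch below compares simulated walks against the analytic result; the step size and barrier value are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)
      delta_c, dS, S_max, n_walks = 1.686, 0.01, 6.0, 50000
      n_steps = int(S_max / dS)

      walks = np.cumsum(rng.normal(0.0, np.sqrt(dS), (n_walks, n_steps)), axis=1)
      crossed = walks > delta_c
      first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), -1)

      # Empirical first-crossing distribution vs. the exact sharp-k expression
      S = (np.arange(n_steps) + 1) * dS
      f_exact = delta_c / np.sqrt(2.0 * np.pi * S**3) * np.exp(-delta_c**2 / (2.0 * S))
      f_mc = np.bincount(first[first >= 0], minlength=n_steps) / (n_walks * dS)

      i = int(2.0 / dS)
      print("f(S=2): Monte Carlo", f_mc[i], "vs exact", f_exact[i])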

  13. Testing the self-consistency of the excursion set approach to predicting the dark matter halo mass function.

    PubMed

    Achitouv, I; Rasera, Y; Sheth, R K; Corasaniti, P S

    2013-12-01

    The excursion set approach provides a framework for predicting how the abundance of dark matter halos depends on the initial conditions. A key ingredient of this formalism is the specification of a critical overdensity threshold (barrier) which protohalos must exceed if they are to form virialized halos at a later time. However, to make its predictions, the excursion set approach explicitly averages over all positions in the initial field, rather than the special ones around which halos form, so it is not clear that the barrier has physical motivation or meaning. In this Letter we show that, once the statistical assumptions which underlie the excursion set approach are considered, a drifting diffusing barrier model does provide a good self-consistent description both of halo abundance and of the initial overdensities of the protohalo patches. PMID:24476252

  14. THE HALO MASS FUNCTION FROM EXCURSION SET THEORY. II. THE DIFFUSING BARRIER

    SciTech Connect

    Maggiore, Michele; Riotto, Antonio

    2010-07-01

    In excursion set theory, the computation of the halo mass function is mapped into a first-passage time process in the presence of a barrier, which in the spherical collapse model is a constant and in the ellipsoidal collapse model is a fixed function of the variance of the smoothed density field. However, N-body simulations show that dark matter halos grow through a mixture of smooth accretion, violent encounters, and fragmentations, and modeling halo collapse as spherical, or even as ellipsoidal, is a significant oversimplification. In addition, the very definition of what is a dark matter halo, both in N-body simulations and observationally, is a difficult problem. We propose that some of the physical complications inherent to a realistic description of halo formation can be included in the excursion set theory framework, at least at an effective level, by taking into account that the critical value for collapse is not a fixed constant δc, as in the spherical collapse model, nor a fixed function of the variance σ of the smoothed density field, as in the ellipsoidal collapse model, but rather is itself a stochastic variable, whose scatter reflects a number of complicated aspects of the underlying dynamics. Solving the first-passage time problem in the presence of a diffusing barrier we find that the exponential factor in the Press-Schechter mass function changes from exp(-δc²/2σ²) to exp(-a δc²/2σ²), where a = 1/(1 + D_B) and D_B is the diffusion coefficient of the barrier. The numerical value of D_B, and therefore the corresponding value of a, depends among other things on the algorithm used for identifying halos. We discuss the physical origin of the stochasticity of the barrier and, from recent N-body simulations that studied the properties of the collapse barrier, we deduce a value D_B ≈ 0.25. Our model then predicts a
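
    Written out, the modification described in this abstract amounts to replacing the Press-Schechter exponential as follows (a restatement of the result quoted above, in standard notation):

      \exp\left(-\frac{\delta_c^{2}}{2\sigma^{2}}\right) \;\longrightarrow\; \exp\left(-\frac{a\,\delta_c^{2}}{2\sigma^{2}}\right), \qquad a = \frac{1}{1 + D_B},

    so the quoted diffusion coefficient D_B ≈ 0.25 corresponds to a ≈ 0.8.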

  15. Statistics of dark matter halos in the excursion set peak framework

    SciTech Connect

    Lapi, A.; Danese, L.

    2014-07-01

    We derive approximated, yet very accurate analytical expressions for the abundance and clustering properties of dark matter halos in the excursion set peak framework; the latter relies on the standard excursion set approach, but also includes the effects of a realistic filtering of the density field, a mass-dependent threshold for collapse, and the prescription from peak theory that halos tend to form around density maxima. We find that our approximations work excellently for diverse power spectra, collapse thresholds and density filters. Moreover, when adopting a cold dark matter power spectrum, a top-hat filtering and a mass-dependent collapse threshold (supplemented with conceivable scatter), our approximated halo mass function and halo bias represent very well the outcomes of cosmological N-body simulations.

  16. Large field excursions from a few site relaxion model

    NASA Astrophysics Data System (ADS)

    Fonseca, N.; de Lima, L.; Machado, C. S.; Matheus, R. D.

    2016-07-01

    Relaxion models are an interesting new avenue to explain the radiative stability of the Standard Model scalar sector. They require very large field excursions, which are difficult to generate in a consistent UV completion and to reconcile with the compact field space of the relaxion. We propose an N-site model which naturally generates the large decay constant needed to address these issues. Our model offers distinct advantages with respect to previous proposals: the construction involves non-Abelian fields, allowing for controlled high-energy behavior and more model building possibilities, both in particle physics and inflationary models, and also admits a continuum limit when the number of sites is large, which may be interpreted as a warped extra dimension.

  17. Dark-matter halo assembly bias: Environmental dependence in the non-Markovian excursion-set theory

    SciTech Connect

    Zhang, Jun; Ma, Chung-Pei; Riotto, Antonio

    2014-02-10

    In the standard excursion-set model for the growth of structure, the statistical properties of halos are governed by the halo mass and are independent of the larger-scale environment in which the halos reside. Numerical simulations, however, have found the spatial distributions of halos to depend not only on their mass but also on the details of their assembly history and environment. Here we present a theoretical framework for incorporating this 'assembly bias' into the excursion-set model. Our derivations are based on modifications of the path-integral approach of Maggiore and Riotto that models halo formation as a non-Markovian random-walk process. The perturbed density field is assumed to evolve stochastically with the smoothing scale and exhibits correlated walks in the presence of a density barrier. We write down conditional probabilities for multiple barrier crossings and derive from them analytic expressions for descendant and progenitor halo mass functions and halo merger rates as a function of both halo mass and the linear overdensity δe of the larger-scale environment of the halo. Our results predict a higher halo merger rate and higher progenitor halo mass function in regions of higher overdensity, consistent with the behavior seen in N-body simulations.

  18. Constrained simulations and excursion sets: understanding the risks and benefits of `genetically modified' haloes

    NASA Astrophysics Data System (ADS)

    Porciani, Cristiano

    2016-09-01

    Constrained realisations of Gaussian random fields are used in cosmology to design special initial conditions for numerical simulations. We review this approach and its application to density peaks providing several worked-out examples. We then critically discuss the recent proposal to use constrained realisations to modify the linear density field within and around the Lagrangian patches that form dark-matter haloes. The ambitious concept is to forge `genetically modified' haloes with some desired properties after the non-linear evolution. We demonstrate that the original implementation of this method is not exact but approximate because it tacitly assumes that protohaloes sample a set of random points with a fixed mean overdensity. We show that carrying out a full genetic modification is a formidable and daunting task requiring a mathematical understanding of what determines the biased locations of protohaloes in the linear density field. We discuss approximate solutions based on educated guesses regarding the nature of protohaloes. We illustrate how the excursion-set method can be adapted to predict the non-linear evolution of the modified patches and thus fine tune the constraints that are necessary to obtain preselected halo properties. This technique allows us to explore the freedom around the original algorithm for genetic modification. We find that the quantity which is most sensitive to changes is the halo mass-accretion rate at the mass scale on which the constraints are set. Finally we discuss constraints based on the protohalo angular momenta.
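
    A minimal sketch of how constrained Gaussian realisations of this kind are typically generated (the Hoffman-Ribak construction; the field size, covariance and the single mean-overdensity constraint below are illustrative assumptions, not the genetic-modification algorithm discussed in the paper):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 64
      x = np.arange(n)
      # Illustrative stationary covariance for a 1-D Gaussian field
      C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)

      # One linear constraint: fix the mean overdensity of a 10-cell patch to 1.5
      H = np.zeros((1, n))
      H[0, 20:30] = 1.0 / 10.0
      c = np.array([1.5])

      # Unconstrained realisation, then the Hoffman-Ribak correction
      f = np.linalg.cholesky(C + 1e-6 * np.eye(n)) @ rng.normal(size=n)
      K = C @ H.T @ np.linalg.inv(H @ C @ H.T)
      f_constrained = f + K @ (c - H @ f)

      print("patch mean before:", (H @ f)[0], "after:", (H @ f_constrained)[0])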

  19. A simple model for geomagnetic field excursions and inferences for palaeomagnetic observations

    NASA Astrophysics Data System (ADS)

    Brown, M. C.; Korte, M.

    2016-05-01

    We explore simple excursion scenarios by imposing changes on the axial dipole component of the Holocene geomagnetic field model CALS10k.2 and investigate implications for our understanding of palaeomagnetic observations of excursions. Our findings indicate that globally observed directions of fully opposing polarity are only possible when the axial dipole reverses: linearly decaying the axial dipole to zero and then reestablishing it with the same sign produces a global intensity minimum, but does not produce fully reversed directions globally. Reversing the axial dipole term increases the intensity of the geomagnetic field observed at Earth's surface across the mid-point of the excursion, which results in a double-dip intensity structure during the excursion. Only a limited number of palaeomagnetic records of excursions contain such a double-dip intensity structure. Rather, the maximum directional change is coeval with a geomagnetic field intensity minimum.

  20. Non-Gaussian halo abundances in the excursion set approach with correlated steps

    NASA Astrophysics Data System (ADS)

    Musso, Marcello; Paranjape, Aseem

    2012-02-01

    We study the effects of primordial non-Gaussianity on the large-scale structure in the excursion set approach, accounting for correlations between steps of the random walks in the smoothed initial density field. These correlations are induced by realistic smoothing filters (as opposed to a filter that is sharp in k-space), but have been ignored by many analyses to date. We present analytical arguments - building on existing results for Gaussian initial conditions - which suggest that the effect of the filter at large smoothing scales is remarkably simple, and is in fact identical to what happens in the Gaussian case: the non-Gaussian walks behave as if they were smooth and deterministic, or 'completely correlated'. As a result, the first crossing distribution (which determines e.g. halo abundances) follows from the single-scale statistics of the non-Gaussian density field - the so-called 'cloud-in-cloud' problem does not exist for completely correlated walks. Also, the answer from single-scale statistics is simply one half that for sharp-k walks. We explicitly test these arguments using Monte Carlo simulations of non-Gaussian walks, showing that the resulting first crossing distributions, and in particular the factor 1/2 argument, are remarkably insensitive to variations in the power spectrum and the defining non-Gaussian process. We also use our Monte Carlo walks to test some of the existing prescriptions for the non-Gaussian first crossing distribution. Since the factor 1/2 holds for both Gaussian and non-Gaussian initial conditions, it provides a theoretical motivation (the first, to our knowledge) for the common practice of analytically prescribing a ratio of non-Gaussian to Gaussian halo abundances.
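
    The factor of 1/2 for completely correlated walks can be seen with a one-line calculation (our illustration, for a constant barrier δc and variance S): a completely correlated walk is δ(S) = x√S with a single Gaussian variate x, so it crosses the barrier only if x > 0, at S = (δc/x)², and changing variables gives

      f_{\rm corr}(S) = \frac{\delta_c}{2\sqrt{2\pi}\,S^{3/2}} \exp\left(-\frac{\delta_c^{2}}{2S}\right) = \frac{1}{2}\, f_{\text{sharp-}k}(S).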

  1. Halo abundances and counts-in-cells: the excursion set approach with correlated steps

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Lam, Tsz Yan; Sheth, Ravi K.

    2012-02-01

    The excursion set approach has been used to make predictions for a number of interesting quantities in studies of non-linear hierarchical clustering. These include the halo mass function, halo merger rates, halo formation times and masses, halo clustering, analogous quantities for voids and the distribution of dark matter counts in randomly placed cells. The approach assumes that all these quantities can be mapped to problems involving the first-crossing distribution of a suitably chosen barrier by random walks. Most analytic expressions for these distributions ignore the fact that, although different k-modes in the initial Gaussian field are uncorrelated, this is not true in real space: the values of the density field at a given spatial position, when smoothed on different real-space scales, are correlated in a non-trivial way. As a result, the problem is to estimate the first-crossing distribution of random walks having correlated rather than uncorrelated steps. In 1990, Peacock & Heavens presented a simple approximation for the first crossing distribution of a single barrier of constant height by walks with correlated steps. We show that their approximation can be thought of as a correction to the distribution associated with what we call smooth completely correlated walks. We then use this insight to extend their approach to treat moving barriers, as well as walks that are constrained to pass through a certain point before crossing the barrier. For the latter, we show that a simple rescaling, inspired by bivariate Gaussian statistics, of the unconditional first crossing distribution, accurately describes the conditional distribution, independent of the choice of analytical prescription for the former. In all cases, comparison with Monte Carlo solutions of the problem shows reasonably good agreement. This represents the first explicit demonstration of the accuracy of an analytic treatment of all these aspects of the correlated steps problem. While our main focus is

  2. Model-based control rescues boiler from steam-temperature excursions

    SciTech Connect

    Hanson, K.; Werre, J.; Chloupek, J.; Richerson, J.

    1995-05-01

    This article describes how, after operators of a lignite-fired boiler wrestled for years to control its main steam temperature, a switch to model-based control resolved the problem. Decoupling of control loops was essential. Montana Dakota Utilities (MDU) is the operator of the Coyote station, a 450-MW unit located at Beulah, ND, in the heart of lignite country. Owners of the plant are MDU, Northern Municipal Power Agency, Northwestern Public Service Co., and Otter Tail Power Co. The unit, a Babcock and Wilcox Co. (Barberton, Ohio) drum-boiler design, came on line in 1981. It burns lignite with a heating value of 6,900 Btu/lb using 12 cyclones. Because of unique boiler characteristics and controls implementation using several different control systems, the Coyote station had experienced significant steam-temperature excursions over the years.

  3. Development of a Bayesian Belief Network Runway Incursion and Excursion Model

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2014-01-01

    In a previous work, a statistical analysis of runway incursion (RI) event data was conducted to ascertain the relevance of this data to the top ten Technical Challenges (TC) of the National Aeronautics and Space Administration (NASA) Aviation Safety Program (AvSP). The study revealed connections to several of the AvSP top ten TC and identified numerous primary causes and contributing factors of RI events. The statistical analysis served as the basis for developing a system-level Bayesian Belief Network (BBN) model for RI events, also previously reported. Through literature searches and data analysis, this RI event network has now been extended to also model runway excursion (RE) events. These RI and RE event networks have been further modified and vetted by a Subject Matter Expert (SME) panel. The combined system-level BBN model will allow NASA to generically model the causes of RI and RE events and to assess the effectiveness of technology products being developed under NASA funding. These products are intended to reduce the frequency of runway safety incidents/accidents, and to improve runway safety in general. The development and structure of the BBN for both RI and RE events are documented in this paper.
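
    As a schematic of how a BBN combines causal factors, the toy network below (plain Python, with entirely hypothetical probabilities that are not taken from the NASA model) marginalizes a conditional probability table over two binary parent factors to obtain the probability of a runway incursion:

      # Toy two-parent Bayesian Belief Network. All numbers are hypothetical
      # placeholders, not values from the NASA RI/RE model.
      p_comm_error = 0.05                  # P(communication error)
      p_bad_surface = 0.10                 # P(degraded surface conditions)
      p_ri_given = {                       # conditional probability table P(RI | C, S)
          (True, True): 0.020,
          (True, False): 0.008,
          (False, True): 0.004,
          (False, False): 0.001,
      }

      p_ri = sum(
          p_ri_given[(c, s)]
          * (p_comm_error if c else 1.0 - p_comm_error)
          * (p_bad_surface if s else 1.0 - p_bad_surface)
          for c in (True, False)
          for s in (True, False)
      )
      print("marginal P(runway incursion) =", round(p_ri, 5))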

  4. Impact of Atmosphere-sea Exchange on the Isotopic Expression of Carbon Excursions: Observations and Modeling of OAE-1a

    NASA Astrophysics Data System (ADS)

    Finkelstein, D. B.; Pratt, L. M.; Brassell, S. C.; Montañez, I. P.

    2005-12-01

    Negative carbon isotope excursions are a recurring phenomenon in earth history (e.g., Permo-Triassic boundary, Jurassic and Cretaceous oceanic anoxic events, and the Paleocene-Eocene Thermal Maximum) variously attributed to destabilization of methane clathrates, a decrease in primary productivity, intensified volcanism, and more recently to widespread peat fires. Each forcing mechanism invoked accounts for both the magnitude of the negative isotopic shift and the reservoir required to drive the shift as observed at one to several locales. Studies rarely consider the effect of latitudinal temperature changes on the excursion. Here, we explore the early Aptian oceanic anoxic event as an example of a negative isotopic shift whose magnitude varies with paleolatitude in terrestrial settings. It increases (from -2.0 to -8.2 ‰) with paleolatitude (5° to 33°N) and is greater than that expected for changes in plant C isotope discrimination driven by environmental stresses (~3 ‰). Conceptually, an isotopic shift of terrestrial vegetation across paleolatitudes represents a response to its forcing mechanism and temperature. A closed system carbon cycle model constructed of five reservoirs (atmosphere, vegetation, soil, and shallow and deep oceans), and five fluxes (productivity, respiration, litter fall, atmosphere-ocean exchange, and surface-deep ocean exchange) was employed in the assessment of a negative isotopic shift at 2x pre-industrial atmospheric levels (P.A.L.) for pCO2, keeping all variables constant with the exception of temperature. The model was run at 5°C increments from 5° to 40°C to simulate the effect of temperature gradients on isotopic shifts at variable latitudes, with the appropriate temperature-dependent fractionations for atmosphere-sea exchange. The magnitude of the negative isotopic shift at each temperature was calculated for both terrestrial and marine organic matter. In terrestrial vegetation it changed from -4 to -5.8 ‰ with decreasing
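
    A minimal sketch of the kind of isotope mass-balance box model described here, reduced to two reservoirs (atmosphere and surface ocean) exchanging carbon with a temperature-dependent air-sea fractionation; the reservoir sizes, exchange flux and the linear fractionation law are illustrative placeholders, not the parameters of the five-reservoir model used in the study:

      def run_box_model(T_celsius, n_years=5000, dt=1.0):
          """Two-box (atmosphere / surface ocean) carbon-isotope exchange.

          Reservoir masses (GtC), the gross exchange flux (GtC/yr) and the
          fractionation law eps(T) are placeholder values chosen only to show
          how temperature modulates the expressed delta-13C offset.
          """
          M_atm, M_ocn = 600.0, 1000.0      # reservoir sizes (illustrative)
          F_exch = 60.0                     # gross air-sea exchange flux
          d_atm, d_ocn = -6.5, 1.5          # initial delta-13C values (permil)
          eps = 10.0 - 0.1 * T_celsius      # placeholder T-dependent fractionation (permil)

          for _ in range(int(n_years / dt)):
              # sea-to-air transfer is depleted by eps; air-to-sea is unfractionated here
              d_atm += dt * F_exch * ((d_ocn - eps) - d_atm) / M_atm
              d_ocn += dt * F_exch * (d_atm - (d_ocn - eps)) / M_ocn
          return d_atm, d_ocn

      # A warmer ocean (smaller eps in this toy law) shrinks the air-sea delta-13C offset
      for T in (5, 20, 35):
          print(T, "degC ->", tuple(round(v, 2) for v in run_box_model(T)))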

  5. Flow excursion time scales in the advanced neutron source reactor

    SciTech Connect

    Sulfredge, C.D.

    1995-04-01

    Flow excursion transients give rise to a key thermal limit for the proposed Advanced Neutron Source (ANS) reactor because its core involves many parallel flow channels with a common pressure drop. Since one can envision certain accident scenarios in which the thermal limits set by flow excursion correlations might be exceeded for brief intervals, a key objective is to determine how long a flow excursion would take to bring about a system failure that could lead to fuel damage. The anticipated time scale for flow excursions has been examined by subdividing the process into its component phenomena: bubble nucleation and growth, deceleration of the resulting two-phase flow, and finally overcoming thermal inertia to heat up the reactor fuel plates. Models were developed to estimate the time required for each individual stage. Accident scenarios involving sudden reduction in core flow or core exit pressure have been examined, and the models compared with RELAP5 output for the ANS geometry. For a high-performance reactor like the ANS, flow excursion time scales were predicted to be in the millisecond range, so that even very brief transients might lead to fuel damage. These results should prove useful whenever one must determine the time involved in any portion of a flow excursion transient.

  6. Design of a full scale model fuel assembly for full power production reactor flow excursion experiments

    SciTech Connect

    Nash, C.A.; Blake, J.E.; Rush, G.C.

    1990-01-01

    A novel full scale production reactor fuel assembly model was designed and built to study thermal-hydraulic effects of postulated Savannah River Site (SRS) nuclear reactor accidents. The electrically heated model was constructed to simulate the unique annular concentric tube geometry of fuel assemblies in SRS nuclear production reactors. Several major design challenges were overcome in order to produce the prototypic geometry and thermal-hydraulic conditions. The two concentric heater tubes (total power over 6 MW and maximum heat flux of 3.5 MW/m², or 1.1×10^6 BTU/(ft²·hr)) were designed to closely simulate the thermal characteristics of SRS uranium-aluminum nuclear fuel. The paper discusses the design of the model fuel assembly, which met requirements of maintaining prototypic geometric and hydraulic characteristics, and approximate thermal similarity. The model had a cosine axial power profile and the electrical resistance was compatible with the existing power supply. The model fuel assembly was equipped with a set of instruments useful for code analysis, and durable enough to survive a number of LOCA transients. These instruments were sufficiently responsive to record the response of the fuel assembly to the imposed transient.

  7. Design of a full scale model fuel assembly for full power production reactor flow excursion experiments

    SciTech Connect

    Nash, C.A.; Blake, J.E.; Rush, G.C.

    1990-12-31

    A novel full scale production reactor fuel assembly model was designed and built to study thermal-hydraulic effects of postulated Savannah River Site (SRS) nuclear reactor accidents. The electrically heated model was constructed to simulate the unique annular concentric tube geometry of fuel assemblies in SRS nuclear production reactors. Several major design challenges were overcome in order to produce the prototypic geometry and thermal-hydraulic conditions. The two concentric heater tubes (total power over 6 MW and maximum heat flux of 3.5 MW/m², or 1.1×10^6 BTU/(ft²·hr)) were designed to closely simulate the thermal characteristics of SRS uranium-aluminum nuclear fuel. The paper discusses the design of the model fuel assembly, which met requirements of maintaining prototypic geometric and hydraulic characteristics, and approximate thermal similarity. The model had a cosine axial power profile and the electrical resistance was compatible with the existing power supply. The model fuel assembly was equipped with a set of instruments useful for code analysis, and durable enough to survive a number of LOCA transients. These instruments were sufficiently responsive to record the response of the fuel assembly to the imposed transient.

  8. Mono Lake Excursion Reviewed

    NASA Astrophysics Data System (ADS)

    Liddicoat, J. C.; Coe, R. S.

    2007-05-01

    The Mono Lake Excursion as recorded in the Mono Basin, CA, has an older part that is about negative 30 degrees inclination and about 300 degrees declination during low relative field intensity. Those paleomagnetic directions are closely followed by greater than 80 degrees positive inclination and east declination of about 100 degrees during higher relative field intensity. A path of the Virtual Geomagnetic Poles (VGPs) for the older part followed from old to young forms a large clockwise loop that reaches 35 degrees N latitude and is centered at about 35 degrees E longitude. That loop is followed by a smaller one that is counterclockwise and centered at about 70 degrees N latitude and 270 degrees E longitude (Denham & Cox, 1971; Denham, 1974; Liddicoat & Coe, 1979). The Mono Lake Excursion outside the Mono Basin in western North America is recorded as nearly the full excursion at Summer Lake, OR (Negrini et al., 1984), and as the younger portion of steep positive inclination/east declination in the Lahontan Basin, NV. The overall relative field intensity during the Mono Lake Excursion in the Lahontan Basin mirrors very closely the relative field intensity in the Mono Basin (Liddicoat, 1992, 1996; Coe & Liddicoat, 1994). Using 14C and 40Ar/39Ar dates (Kent et al., 2002) and paleoclimate and relative paleointensity records (Zimmerman et al., 2006) for the Mono Lake Excursion in the Mono Basin, it has been proposed that the Mono Lake Excursion might be older than originally believed and instead be the Laschamp Excursion at about 40,000 yrs B.P. (Guillou et al., 2004). On the contrary, we favor a younger age for the Mono Lake Excursion, about 32,000 yrs B.P., using the relative paleointensity in the Mono Basin and Lahontan Basin and 14C dates from the Lahontan Basin (Benson et al., 2002). The age of about 32,000 yrs B.P. is also in accord with the age (32,000- 34,000 yrs B.P.) reported by Channell (2006) for the Mono Lake Excursion at ODP Site 919 in the Irminger Basin
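
    For orientation, a virtual geomagnetic pole is obtained from a site's declination D and inclination I through the standard geocentric-dipole conversion. The sketch below implements that textbook conversion; the site coordinates and direction in the example are placeholders, not the Mono Basin data:

      import numpy as np

      def vgp(site_lat, site_lon, dec, inc):
          """Virtual geomagnetic pole (lat, lon in degrees) from a site's
          declination and inclination, assuming a geocentric dipole
          (the standard paleomagnetic conversion). All angles in degrees."""
          lam, D, I = np.radians([site_lat, dec, inc])
          p = np.arctan2(2.0, np.tan(I))                 # magnetic colatitude
          plat = np.arcsin(np.sin(lam) * np.cos(p) +
                           np.cos(lam) * np.sin(p) * np.cos(D))
          beta = np.arcsin(np.clip(np.sin(p) * np.sin(D) / np.cos(plat), -1.0, 1.0))
          if np.cos(p) >= np.sin(lam) * np.sin(plat):
              plon = site_lon + np.degrees(beta)
          else:
              plon = site_lon + 180.0 - np.degrees(beta)
          return np.degrees(plat), plon % 360.0

      # Placeholder excursion-like direction at a mid-latitude site
      print(vgp(38.0, 241.0, 300.0, -30.0))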

  9. A diagenetic control on the Early Triassic Smithian-Spathian carbon isotopic excursions recorded in the marine settings of the Thaynes Group (Utah, USA).

    PubMed

    Thomazo, C; Vennin, E; Brayard, A; Bour, I; Mathieu, O; Elmeknassi, S; Olivier, N; Escarguel, G; Bylund, K G; Jenks, J; Stephen, D A; Fara, E

    2016-05-01

    In the aftermath of the end-Permian mass extinction, Early Triassic sediments record some of the largest Phanerozoic carbon isotopic excursions. Among them, a global Smithian negative carbonate carbon isotope excursion has been identified, followed by an abrupt increase across the Smithian-Spathian boundary (SSB; ~250.8 Myr ago). This chemostratigraphic evolution is associated with palaeontological evidence that indicates a major collapse of terrestrial and marine ecosystems during the Late Smithian. It is commonly assumed that Smithian and Spathian isotopic variations are intimately linked to major perturbations in the exogenic carbon reservoir. We present paired carbon isotope measurements from the Thaynes Group (Utah, USA) to evaluate the extent to which the Early Triassic isotopic perturbations reflect changes in the exogenic carbon cycle. The δ¹³Ccarb variations obtained here reproduce the known Smithian δ¹³Ccarb negative excursion. However, the δ¹³C signal of the bulk organic matter is invariant across the SSB and variations in the δ³⁴S signal of sedimentary sulphides are interpreted here to reflect the intensity of sediment remobilization. We argue that the Middle to Late Smithian δ¹³Ccarb signal in the shallow marine environments of the Thaynes Group does not reflect secular evolution of the exogenic carbon cycle but rather physicochemical conditions at the sediment-water interface leading to authigenic carbonate formation during early diagenetic processes. PMID:26842810

  10. Penrose tilings as model sets

    NASA Astrophysics Data System (ADS)

    Shutov, A. V.; Maleev, A. V.

    2015-11-01

    The Baake construction, based on generating a set of vertices of Penrose tilings as a model set, is refined. An algorithm and a corresponding computer program for constructing an uncountable set of locally indistinguishable Penrose tilings are developed proceeding from this refined construction. Based on an analysis of the parameters of tiling vertices, 62 versions of rhomb combinations at the tiling center are determined. The combinatorial structure of Penrose tiling worms is established. A concept of flip transformations of tilings is introduced that makes it possible to construct Penrose tilings that cannot be implemented in the Baake construction.

  11. The Iceland Basin excursion: Age, duration, and excursion field geometry

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.

    2014-12-01

    The Iceland Basin geomagnetic excursion coincided with the marine isotope stage (MIS) 6/7 boundary. The age and duration of the excursion, at seven North Atlantic sites with sufficient isotope data, are estimated by matching MIS 7a-7c to a calibrated template. Two criteria for defining the excursion, virtual geomagnetic pole (VGP) latitudes <0° and <40°N, yield excursion durations of 1-4 and 2-5 kyr, respectively. The midpoints of the excursion are in the 189-192 ka range, with a mean of ˜190.2 ka. Although component magnetization directions are generally well defined, rapid changes in field direction during a time of low field intensity are not adequately recorded. During the excursion, VGPs transit southward over Africa and the South Atlantic, reach high southern latitudes at the culmination of the excursion, with partial recovery in relative paleointensity (RPI), and then track northward through the western Pacific. The high southern latitude VGPs, and the recovery in RPI, imply that the Earth's main axial dipole reversed polarity during the excursion, if only for ˜1 kyr; implying that excursions can be manifested globally and are important in millennial-scale stratigraphy. VGP clustering in the South Atlantic and NW Pacific roughly coincides with maxima in the vertical-downward component of the modern nondipole (ND) field determined at the Earth's surface, which implies that the ND field became dominant as the geocentric dipole field weakened during the excursion, and also that the ND field configuration is long-lived on multimillennial timescales.

  12. Bounded excursion stable gravastars and black holes

    SciTech Connect

    Rocha, P; Da Silva, M F; Wang, Anzhong; Santos, N O

    2008-06-15

    Dynamical models of prototype gravastars were constructed in order to study their stability. The models are the Visser-Wiltshire three-layer gravastars, in which an infinitely thin spherical shell of stiff fluid divides the whole spacetime into two regions, where the internal region is de Sitter, and the external one is Schwarzschild. It is found that in some cases the models represent the 'bounded excursion' stable gravastars, where the thin shell is oscillating between two finite radii, while in other cases they collapse until the formation of black holes occurs. In the phase space, the region for the 'bounded excursion' gravastars is very small in comparison to that of black holes, but not empty. Therefore, although the possibility of the existence of gravastars cannot be excluded from such dynamical models, our results indicate that, even if gravastars do indeed exist, that does not exclude the possibility of the existence of black holes.

  13. An Excursion in Applied Mathematics.

    ERIC Educational Resources Information Center

    von Kaenel, Pierre A.

    1981-01-01

    An excursion in applied mathematics is detailed in a lesson deemed well-suited for the high school student or undergraduate. The problem focuses on an experimental missile guidance system simulated in the laboratory. (MP)

  14. Compliant bipedal model with the center of pressure excursion associated with oscillatory behavior of the center of mass reproduces the human gait dynamics.

    PubMed

    Jung, Chang Keun; Park, Sukyung

    2014-01-01

    Although the compliant bipedal model could reproduce qualitative ground reaction force (GRF) of human walking, the model with a fixed pivot showed overestimations in stance leg rotation and the ratio of horizontal to vertical GRF. The human walking data showed a continuous forward progression of the center of pressure (CoP) during the stance phase and the suspension of the CoP near the forefoot before the onset of step transition. To better describe human gait dynamics with a minimal expense of model complexity, we proposed a compliant bipedal model with the accelerated pivot which associated the CoP excursion with the oscillatory behavior of the center of mass (CoM) with the existing simulation parameter and leg stiffness. Owing to the pivot acceleration defined to emulate human CoP profile, the arrival of the CoP at the limit of the stance foot over the single stance duration initiated the step-to-step transition. The proposed model showed an improved match of walking data. As the forward motion of CoM during single stance was partly accounted by forward pivot translation, the previously overestimated rotation of the stance leg was reduced and the corresponding horizontal GRF became closer to human data. The walking solutions of the model ranged over higher speed ranges (~1.7 m/s) than those of the fixed pivoted compliant bipedal model (~1.5 m/s) and exhibited other gait parameters, such as touchdown angle, step length and step frequency, comparable to the experimental observations. The good matches between the model and experimental GRF data imply that the continuous pivot acceleration associated with CoM oscillatory behavior could serve as a useful framework of bipedal model. PMID:24161797
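
    For context, the fixed-pivot compliant (spring-loaded inverted pendulum) model that this work extends can be integrated through a single stance phase with a few lines of code; the mass, stiffness, touchdown state and time step below are illustrative assumptions, and the paper's accelerated-pivot modification is not implemented here:

      import numpy as np

      m, g, k, L0 = 70.0, 9.81, 15000.0, 1.0       # mass, gravity, leg stiffness, rest length
      foot = np.array([0.0, 0.0])                  # fixed pivot (centre of pressure)
      state = np.array([-0.15, 0.97, 1.2, 0.0])    # x, y, vx, vy at touchdown

      dt, t = 1e-4, 0.0
      while t < 2.0:                               # guard against non-physical loops
          x, y, vx, vy = state
          leg = np.array([x, y]) - foot
          L = np.linalg.norm(leg)
          if L >= L0 and t > 0.0:                  # leg back to rest length: take-off
              break
          F = k * (L0 - L) * leg / L               # spring force directed along the leg
          ax, ay = F[0] / m, F[1] / m - g
          state = state + dt * np.array([vx, vy, ax, ay])
          t += dt

      print("stance duration [s]:", round(t, 3), "take-off velocity [m/s]:", state[2:])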

  15. Dynamics of the Laschamp geomagnetic excursion from Black Sea sediments

    NASA Astrophysics Data System (ADS)

    Nowaczyk, N. R.; Arz, H. W.; Frank, U.; Kind, J.; Plessen, B.

    2012-10-01

    Investigated sediment cores from the southeastern Black Sea provide a high-resolution record from mid latitudes of the Laschamp geomagnetic polarity excursion. Age constraints are provided by 16 AMS 14C ages, identification of the Campanian Ignimbrite tephra (39.28±0.11 ka), and by detailed tuning of sedimentologic parameters of the Black Sea sediments to the oxygen isotope record from the Greenland NGRIP ice core. According to the derived age model, virtual geomagnetic pole (VGP) positions during the Laschamp excursion persisted in Antarctica for an estimated 440 yr, making the Laschamp excursion a short-lived event with fully reversed polarity directions. The reversed phase, centred at 41.0 ka, is associated with a significant field intensity recovery to 20% of the preceding strong field maximum at ˜50 ka. Recorded field reversals of the Laschamp excursion, lasting only an estimated ˜250 yr, are characterized by low relative paleointensities (5% relative to 50 ka). The central, fully reversed phase of the Laschamp excursion is bracketed by VGP excursions to the Sargasso Sea (˜41.9 ka) and to the Labrador Sea (˜39.6 ka). Paleomagnetic results from the Black Sea are in excellent agreement with VGP data from the French type locality which facilitates the chronological ordering of the non-superposed lavas that crop out at Laschamp-Olby. In addition, VGPs between 34 and 35 ka reach low northerly to equatorial latitudes during a clockwise loop, inferred to be the Mono lake excursion.

  16. Identification and dating of the Mono Lake excursion in lava flows from the Canary islands.

    NASA Astrophysics Data System (ADS)

    Guillou, Hervé; Laj, Carlo; Carracedo, Juan-Carlos; Kissel, Catherine; Nomade, Sebastien; Perez-Torrado, Francisco; Wandres, Camille

    2010-05-01

    The Mono Lake geomagnetic excursion was defined from the study of lacustrine sections from Western North America [Denham, 1974; Liddicoat et al., 1979]. The proposed age for this excursion reported in the literature has changed over time since the first observation, and a debate was very recently opened about the reliability of the dating at the original section at Wilson Creek. In ice cores, a peak in the production of cosmogenic isotopes is clearly observed about 7 ka after the peak associated with the Laschamp excursion. This younger peak, attributed to the Mono Lake excursion, occurs between the millennial climatic cycles 7 and 6 (Dansgaard-Oeschger cycles), around 34 kyr in the most recent Greenland ice-core age model. In addition, in other places, this excursion is described by an intensity low with only very rarely an associated directional shift, questioning the global character of this excursion. We present a coupled paleomagnetic and dating investigation conducted on four different lavas from the island of Tenerife (Spain) on the basis of preliminary K/Ar dating. From a paleomagnetic point of view, one of these sites is characterized by a direction largely deviated from the one calculated on the basis of an axial geocentric dipole field. The paleointensity value, determined using the Thellier and Thellier method and the PICRIT03 set of criteria, is very low, about 8 µT. Two other sites are slightly deviated from the GAD value, in particular with lower inclinations. Paleointensity determinations from these lavas do not yet have a statistical significance and need to be completed, but the first results indicate a value around 20 µT. Finally, the last site has a direction consistent with the GAD values and no reliable paleointensity determinations could be obtained so far. The preliminary K/Ar dating is now complemented by Ar/Ar dating, and their combination yields an average age of about 32 ka ± 2 ka for the four outcrops, not statistically distinguishable one from another. This

  17. Geomagnetic excursions and climate change

    NASA Technical Reports Server (NTRS)

    Rampino, M. R.

    1983-01-01

    Rampino argues that although Kent (1982) demonstrated that the intensity of natural remanent magnetism (NRM) in deep-sea sediments is sensitive to changes in sediment type, and hence is not an accurate indicator of the true strength of the geomagnetic field, it does not offer an alternative explanation for the proposed connections between excursions, climate, and orbital parameters. Kent replies by illustrating some of the problems associated with geomagnetic excursions by considering the record of proposed excursions in a single critical core. The large departure from an axial dipole field direction seen in a part of the sample is probably due to a distorted record; the drawing and storage of the sample, which is described, could easily have led to disturbance and distortion of the record.

  18. Sampling plan optimization for detection of lithography and etch CD process excursions

    NASA Astrophysics Data System (ADS)

    Elliott, Richard C.; Nurani, Raman K.; Lee, Sung Jin; Ortiz, Luis G.; Preil, Moshe E.; Shanthikumar, J. G.; Riley, Trina; Goodwin, Greg A.

    2000-06-01

    Effective sample planning requires a careful combination of statistical analysis and lithography engineering. In this paper, we present a complete sample planning methodology including baseline process characterization, determination of the dominant excursion mechanisms, and selection of sampling plans and control procedures to effectively detect the yield-limiting excursions with a minimum of added cost. We discuss the results of our novel method in identifying critical dimension (CD) process excursions and present several examples of poly gate Photo and Etch CD excursion signatures. Using these results in a Sample Planning model, we determine the optimal sample plan and statistical process control (SPC) chart metrics and limits for detecting these excursions. The key observations are that there are many different yield-limiting excursion signatures in photo and etch, and that a given photo excursion signature turns into a different excursion signature at etch with different yield and performance impact. In particular, field-to-field variance excursions are shown to have a significant impact on yield. We show how current sampling plan and monitoring schemes miss these excursions and suggest an improved procedure for effective detection of CD process excursions.
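
    To make the sampling trade-off concrete, a small sketch of the detection power of a standard x-bar SPC chart as a function of sample size (the shift size, control limits and sample sizes are illustrative assumptions, not the optimized plans discussed in the paper):

      from scipy.stats import norm

      def detection_probability(shift_sigmas, n, limit=3.0):
          """Probability that an x-bar chart point falls outside +/- limit
          control limits when the process mean shifts by shift_sigmas
          standard deviations; n is the number of sites measured per lot."""
          d = shift_sigmas * n ** 0.5
          return norm.cdf(-limit - d) + 1.0 - norm.cdf(limit - d)

      # Larger samples catch a 1-sigma CD excursion with far higher probability per lot
      for n in (3, 5, 9, 15):
          print(n, round(detection_probability(1.0, n), 3))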

  19. Excursions in technology policy

    NASA Technical Reports Server (NTRS)

    Archibald, Robert B.

    1995-01-01

    This technical report presents a summary of three distinct projects: (1) Measuring economic benefits; (2) Evaluating the SBIR program; and (3) A model for evaluating changes in support for science and technology. The first project deals with the Technology Applications Group (TAG) at NASA Langley Research Center. The mission of TAG is to assist firms interested in commercializing technologies. TAG is a relatively new group, as is the emphasis on technology commercialization for NASA. One problem faced by TAG and similar groups at other centers is measuring their effectiveness. The first project this summer, a paper entitled 'Measuring the Economic Benefits of Technology Transfer from a National Laboratory: A Primer,' focused on this measurement problem. We found that the existing studies of the impact of technology transfer on the economy were conceptually flawed. The 'primer' outlines the appropriate theoretical framework for measuring the economic benefits of technology transfer. The second project discusses one of TAG's programs, the Small Business Innovation Research (SBIR) program. This program has led to over 400 contracts with small businesses since its inception in 1985. The program has never been evaluated. Crucial questions, such as those about the extent of commercial successes from the contracts, need to be answered. This summer we designed and implemented a performance evaluation survey instrument. The analysis of the data will take place in the fall. The discussion of the third project focuses on a model for evaluating changes in support for science and technology. At present several powerful forces are combining to change the environment for science and technology policy. The end of the cold war eliminated the rationale for federal support for many projects. The new-found Congressional conviction to balance the budget without tax increases combined with demographic changes which automatically increase spending for some politically popular programs

  20. Excursion detection using leveling data

    NASA Astrophysics Data System (ADS)

    Kim, MinGyu; Ju, Jaewuk; Habets, Boris; Erley, Georg; Bellmann, Enrico; Kim, Seop

    2016-03-01

    Wafer leveling data are usually used inside the exposure tool for ensuring good focus, then discarded. This paper describes the implementation of a monitoring and analysis solution to download these data automatically, together with the correction profiles applied by the scanner. The resulting height maps and focus residuals form the basis for monitoring metrics tailored to catching tool and process drifts and excursions in a high-volume manufacturing (HVM) environment. In this paper, we present four cases to highlight the potential of the method: wafer edge monitoring, chuck drift monitoring, correlations between focus residuals and overlay errors, and pre-process monitoring by chuck fingerprint removal.

  1. Buoyancy-driven flow excursions in fuel assemblies

    SciTech Connect

    Laurinat, J.E.; Paul, P.K.; Menna, J.D.

    1995-09-01

    A power limit criterion was developed for a postulated Loss of Pumping Accident (LOPA) in one of the recently shut down heavy water production reactors at the Savannah River Site. These reactors were cooled by recirculating heavy water moderator downward through channels in cylindrical fuel tubes. Powers were limited to safeguard against a flow excursion in one or more of these parallel channels. During full-power operation, limits safeguarded against a boiling flow excursion. At low flow rates, during the addition of emergency cooling water, buoyant forces reverse the flow in one of the coolant channels before boiling occurs. As power increases beyond the point of flow reversal, the maximum wall temperature approaches the fluid saturation temperature, and a thermal excursion occurs. The power limit criterion for low flow rates was the onset of flow reversal. To determine conditions for flow reversal, tests were performed in a mock-up of a fuel assembly that contained two electrically heated concentric tubes surrounded by three flow channels. These tests were modeled using a finite difference thermal-hydraulic code. According to code calculations, flow reversed in the outer flow channel before the maximum wall temperature reached the local fluid saturation temperature. Thermal excursions occurred when the maximum wall temperature approximately equaled the saturation temperature. For a postulated LOPA, the flow reversal criterion for emergency cooling water addition was more limiting than the boiling excursion criterion for full power operation. This criterion limited powers to 37% of the limiting power for previous long-term reactor operations.

  2. Buoyancy-driven flow excursions in fuel assemblies. Revision 1

    SciTech Connect

    Laurinat, J.E.; Paul, P.K.; Menna, J.D.

    1995-07-01

    A power limit criterion was developed for a postulated Loss of Pumping Accident (LOPA) in one of the recently shut down heavy water production reactors at the Savannah River Site. These reactors were cooled by recirculating heavy water moderator downward through channels in cylindrical fuel tubes. Powers were limited to safeguard against a flow excursion in one or more of these parallel channels. During full-power operation, limits safeguarded against a boiling flow excursion. At low flow rates, during the addition of emergency cooling water, buoyant forces reverse the flow in one of the coolant channels before boiling occurs. As power increases beyond the point of flow reversal, the maximum wall temperature approaches the fluid saturation temperature, and a thermal excursion occurs. The power limit criterion for low flow rates was the onset of flow reversal. To determine conditions for flow reversal, tests were performed in a mock-up of a fuel assembly that contained two electrically heated concentric tubes surrounded by three flow channels. These tests were modeled using a finite difference thermal-hydraulic code. According to code calculations, flow reversed in the outer flow channel before the maximum wall temperature reached the local fluid saturation temperature. Thermal excursions occurred when the maximum wall temperature approximately equaled the saturation temperature. For a postulated LOPA, the flow reversal criterion for emergency cooling water addition was more limiting than the boiling excursion criterion for full power operation. This criterion limited powers to 37% of the limiting power for previous long-term reactor operations.

  3. Rough set models of Physarum machines

    NASA Astrophysics Data System (ADS)

    Pancerz, Krzysztof; Schumann, Andrew

    2015-04-01

    In this paper, we consider transition system models of the behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that hinders the exact anticipation of machine states over time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
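
    As a toy illustration of the rough set machinery invoked here (not the authors' Physarum formalism), the sketch below brackets a target set of machine states by its lower and upper approximations under an indiscernibility relation; the states and attributes are hypothetical.

```python
from collections import defaultdict

# Hypothetical Physarum-machine states described by observable attributes;
# states with identical attribute tuples are indiscernible.
attributes = {
    "s1": ("active", "branching"),
    "s2": ("active", "branching"),
    "s3": ("active", "fused"),
    "s4": ("dormant", "fused"),
}
# Target concept, e.g. "states from which the plasmodium expands next step".
target = {"s1", "s3"}

# Partition the universe into indiscernibility classes.
classes = defaultdict(set)
for state, attrs in attributes.items():
    classes[attrs].add(state)

lower, upper = set(), set()
for cls in classes.values():
    if cls <= target:     # class certainly inside the concept
        lower |= cls
    if cls & target:      # class possibly inside the concept
        upper |= cls

print("lower approximation:", lower)        # {'s3'}
print("upper approximation:", upper)        # {'s1', 's2', 's3'}
print("boundary (ambiguous):", upper - lower)
```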

  4. Global geomagnetic field mapping - from secular variation to geomagnetic excursions

    NASA Astrophysics Data System (ADS)

    Panovska, Sanja; Constable, Catherine

    2015-04-01

    The main source of the geomagnetic field is a self-sustaining dynamo produced by fluid motions in Earth's liquid outer core. We study the spatial and temporal changes in the internal magnetic field by mapping the time-varying geomagnetic field over the past 100 thousand years. This is accomplished using a new global data set of paleomagnetic records drawn from high accumulation rate sediments and from volcanic rocks spanning the past 100 thousand years (Late Pleistocene). The sediment data comprise 105 declination, 117 inclination and 150 relative paleointensity (RPI) records, mainly concentrated in northern mid-latitudes, although some are available in the southern hemisphere. The North Atlantic and the western Pacific are regions with high concentrations of data. The number of available volcanic/archeomagnetic data is comparatively small on the global scale, especially in the southern hemisphere. Temporal distributions show that the number of data increases toward more recent times, with good coverage for the past 50 ka. The Laschamp excursion (41 ka BP) is well represented by both directional and intensity data. The significant increase in data compared to previous compilations results in an improvement over current geomagnetic field models covering these timescales. Robust aspects of individual sediment records are successfully captured by smoothing spline modeling, allowing an estimate of the random uncertainties present in the records. This reveals a wide range of fidelities across the sediment magnetic records. Median uncertainties are: 17° for declination (range, 1° to 113°), 6° for inclination (1° to 50°) and 0.4 for standardized relative paleointensity (0.02 to 1.4). The median temporal resolution of the records, defined by the smoothing time, is 400 years (range, 50 years to about 14 kyr). Using these data, a global, time-varying, geomagnetic field model is constructed covering the past 100 thousand years. The modeling directly uses relative forms of sediment
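
    The smoothing spline step can be sketched as follows with SciPy on a synthetic inclination record; the noise level and smoothing factor are illustrative assumptions, not values from the compilation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Synthetic sediment inclination record: smooth secular variation plus noise.
age_ka = np.linspace(0, 100, 400)                        # age in ka
true_inc = 55 + 10 * np.sin(2 * np.pi * age_ka / 20.0)   # degrees
observed = true_inc + rng.normal(0, 6, size=age_ka.size)

# Smoothing spline: the smoothing factor s controls the effective
# temporal resolution (larger s -> smoother curve, coarser resolution).
spline = UnivariateSpline(age_ka, observed, k=3, s=age_ka.size * 6.0**2)
fitted = spline(age_ka)

# Residual scatter as a crude estimate of the record's random uncertainty.
residual_sd = np.std(observed - fitted, ddof=1)
print(f"estimated inclination uncertainty ~ {residual_sd:.1f} deg")
```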

  5. An investigation of the dynamic relationship between navicular drop and first metatarsophalangeal joint dorsal excursion

    PubMed Central

    Griffin, Nicole L; Miller, Charlotte; Schmitt, Daniel; D'Août, Kristiaan

    2013-01-01

    The modern human foot is a complex biomechanical structure that must act both as a shock absorber and as a propulsive strut during the stance phase of gait. Understanding the ways in which foot segments interact can illuminate the mechanics of foot function in healthy and pathological humans. It has been proposed that increased values of medial longitudinal arch deformation can limit metatarsophalangeal joint excursion via tension in the plantar aponeurosis. However, this model has not been tested directly in a dynamic setting. In this study, we tested the hypothesis that during the stance phase, subtalar pronation (stretching of the plantar aponeurosis and subsequent lowering of the medial longitudinal arch) will negatively affect the amount of first metatarsophalangeal joint excursion occurring at push-off. Vertical descent of the navicular (a proxy for subtalar pronation) and first metatarsophalangeal joint dorsal excursion were measured during steady locomotion over a flat substrate on a novel sample consisting of asymptomatic adult males and females, many of whom are habitually unshod. Least-squares regression analyses indicated that, contrary to the hypothesis, navicular drop did not explain a significant amount of variation in first metatarsophalangeal joint dorsal excursion. These results suggest that, in an asymptomatic subject, the plantar aponeurosis and the associated foot bones can function effectively within the normal range of subtalar pronation that takes place during walking gait. From a clinical standpoint, this study highlights the need for investigating the in vivo kinematic relationship between subtalar pronation and metatarsophalangeal joint dorsiflexion in symptomatic populations, and also the need to explore other factors that may affect the kinematics of asymptomatic feet. PMID:23600634
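
    A minimal sketch of the least-squares regression test used in the study, run on hypothetical (randomly generated) navicular drop and joint excursion values rather than the published measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical measurements: navicular drop (mm) and first metatarsophalangeal
# joint dorsal excursion (degrees); values are illustrative, not the study data.
navicular_drop = rng.uniform(2.0, 12.0, size=40)
mtp_excursion = 38.0 + rng.normal(0, 6.0, size=40)   # no built-in dependence

res = stats.linregress(navicular_drop, mtp_excursion)
print(f"slope={res.slope:.2f} deg/mm, r^2={res.rvalue**2:.3f}, p={res.pvalue:.3f}")
# A non-significant slope (large p, small r^2), as reported in the study, would
# indicate that navicular drop explains little variation in MTP dorsal excursion.
```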

  6. Spin foam models as energetic causal sets

    NASA Astrophysics Data System (ADS)

    Cortês, Marina; Smolin, Lee

    2016-04-01

    Energetic causal sets are causal sets endowed with a flow of energy-momentum between causally related events. These incorporate a novel mechanism for the emergence of space-time from causal relations [M. Cortês and L. Smolin, Phys. Rev. D 90, 084007 (2014); Phys. Rev. D 90, 044035 (2014)]. Here we construct a spin foam model which is also an energetic causal set model. This model is closely related to the model introduced in parallel by Wolfgang Wieland in [Classical Quantum Gravity 32, 015016 (2015)]. What makes a spin foam model also an energetic causal set is Wieland's identification of new degrees of freedom analogous to momenta, conserved at events (or four-simplices), whose norms are not mass, but the volume of tetrahedra. This realizes the torsion constraints, which are missing in previous spin foam models, and are needed to relate the connection dynamics to those of the metric, as in general relativity. This identification makes it possible to apply the new mechanism for the emergence of space-time to a spin foam model. Our formulation also makes use of Markopoulou's causal formulation of spin foams [arXiv:gr-qc/9704013]. These are generated by evolving spin networks with dual Pachner moves. This endows the spin foam history with causal structure given by a partial ordering of the events which are dual to four-simplices.

  7. Framing Learning Conditions in Geography Excursions

    ERIC Educational Resources Information Center

    Jonasson, Mikael

    2011-01-01

    The aim of this paper is to investigate and frame some learning conditions involved in the practice of geographical excursions. The empirical material from this study comes from several excursions made by students in human geography and an ethnomethodological approach through participant observation is used. The study is informed by theories from…

  8. Does the Shuram δ13C excursion record Ediacaran oxygenation?

    NASA Astrophysics Data System (ADS)

    Husson, J. M.; Maloof, A. C.; Schoene, B.; Higgins, J. A.

    2013-12-01

    The most negative carbon isotope excursion in Earth history is found in carbonate rocks of the Ediacaran Period (635-542 Ma). Known colloquially as the 'Shuram' excursion, workers have long noted its tantalizing, broad concordance with the rise of abundant macro-scale fossils in the rock record, variously interpreted as animals, giant protists, macro-algae and lichen, and known as the 'Ediacaran Biota.' Thus, the Shuram excursion has been interpreted by many in the context of a dramatically changing redox state of the Ediacaran oceans - e.g., a result of methane cycling in a low O2 atmosphere, the final destruction of a large pool of recalcitrant dissolved organic carbon (DOC), and the step-wise oxidation of the Ediacaran oceans. More recently, diagenetic interpretations of the Shuram excursion - e.g. sedimentary in-growth of very δ13C-depleted authigenic carbonates, meteoric alteration of Ediacaran carbonates, late-stage burial diagenesis - have challenged the various Ediacaran redox models. A rigorous geologic context is required to discriminate between these explanatory models and determine whether the Shuram excursion can be used to evaluate terminal Neoproterozoic oxygenation. Here, we present chemo-stratigraphic data (δ13C, δ18O, δ44/42Ca and redox-sensitive trace element abundances) from 12 measured sections of the Ediacaran-aged Wonoka Formation (Fm.) of South Australia that require a syn-depositional age for the extraordinary range of δ13C values (-12 to +4‰) observed in the formation. In some locations, the Wonoka Fm. is ~700 meters (m) of mixed shelf limestones and siliciclastics that record the full 16‰ δ13C excursion in a remarkably consistent fashion across 100s of square kilometers of basin area. Fabric-altering diagenesis, where present, occurs at the sub-meter vertical scale, only results in sub-permil offsets in δ13C and cannot be used to explain the full δ13C excursion. In other places, the Wonoka Fm. is host to deep (1 km

  9. Soybean canopy reflectance modeling data sets

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Biehl, L. L.; Daughtry, C. S. T.

    1984-01-01

    Numerous mathematical models of the interaction of radiation with vegetation canopies have been developed over the last two decades. However, data with which to exercise and validate these models are scarce. During three days in the summer of 1980, experiments were conducted with the objective of gaining insight into the effects of solar illumination and view angles on soybean canopy reflectance. In concert with these experiments, extensive measurements of the soybean canopies were obtained. This document is a compilation of the bidirectional reflectance factors, agronomic characteristics, canopy geometry, and leaf, stem, and pod optical properties of the soybean canopies. These data sets should be suitable for use with most vegetation canopy reflectance models.

  10. Human risk factors associated with pilots in runway excursions.

    PubMed

    Chang, Yu-Hern; Yang, Hui-Hua; Hsiao, Yu-Jung

    2016-09-01

    A breakdown analysis of civil aviation accidents worldwide indicates that runway excursions represent the largest portion of all aviation occurrence categories. This study examines the human risk factors associated with pilots in runway excursions by applying a SHELLO model to categorize the human risk factors and to evaluate their importance based on the opinions of 145 airline pilots. This study integrates aviation management level expert opinions on relative weighting and improvement-achievability in order to develop four kinds of priority risk management strategies for airline pilots to reduce runway excursions. The empirical study based on experts' evaluation suggests that the most important dimension is the liveware/pilot's core ability. From the perspective of front-line pilots, the most important risk factors are the environment, wet/contaminated runways, and weather issues like rain/thunderstorms. Finally, this study develops practical strategies for helping management authorities to improve major operational and managerial weaknesses so as to reduce the human risks related to runway excursions. PMID:27344128

  11. Compensation for large thorax excursions in EIT imaging.

    PubMed

    Schullcke, B; Krueger-Ziolek, S; Gong, B; Mueller-Lisse, U; Moeller, K

    2016-09-01

    Besides its application in the intensive care unit, EIT has recently also been used in spontaneously breathing patients suffering from asthma bronchiale, cystic fibrosis (CF) or chronic obstructive pulmonary disease (COPD). In these cases large thorax excursions during deep inspiration, e.g. during lung function testing, lead to artifacts in the reconstructed images. In this paper we introduce a new approach to compensate for image artifacts resulting from excursion-induced changes in boundary voltages. It is shown in a simulation study that the boundary voltage change due to thorax excursion on a homogeneous model can be used to modify the measured voltages and thus reduce the impact of thorax excursion on the reconstructed images. The applicability of the method to human subjects is demonstrated utilizing a motion-tracking system. The proposed technique leads to fewer artifacts in the reconstructed images and improves image quality without a substantial increase in computational effort, making the approach suitable for real-time imaging of lung ventilation. This might help to establish EIT as a supplemental tool for lung function tests in spontaneously breathing patients to support clinicians in diagnosis and monitoring of disease progression. PMID:27531053

  12. Deep Reconstruction Models for Image Set Classification.

    PubMed

    Hayat, Munawar; Bennamoun, Mohammed; An, Senjian

    2015-04-01

    Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single-image-based classification, it offers more promise and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state-of-the-art methods. PMID:26353289
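
    The minimum-reconstruction-error decision rule can be sketched with a far simpler class-specific model (per-class PCA in place of the GRBM-initialized deep networks); the data, dimensions and class structure below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_pca(X, n_components=5):
    """Return (mean, components) of a simple PCA model for one class."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def reconstruction_error(model, X):
    mu, comps = model
    Z = (X - mu) @ comps.T          # project onto the class subspace
    Xrec = Z @ comps + mu           # reconstruct
    return np.sum((X - Xrec) ** 2)

# Hypothetical data: two classes of 64-dim "images" clustered around class means.
d = 64
class_means = {0: rng.normal(0, 1, d), 1: rng.normal(0, 1, d)}
train = {c: class_means[c] + 0.3 * rng.normal(size=(100, d)) for c in (0, 1)}
models = {c: fit_pca(X) for c, X in train.items()}

# An unseen image *set* (e.g. frames of one subject) drawn from class 1.
test_set = class_means[1] + 0.3 * rng.normal(size=(20, d))

# Assign the whole set to the class with minimum total reconstruction error.
pred = min(models, key=lambda c: reconstruction_error(models[c], test_set))
print("predicted class:", pred)
```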

  13. Bayesian nonparametric models for ranked set sampling.

    PubMed

    Gemayel, Nader; Stasny, Elizabeth A; Wolfe, Douglas A

    2015-04-01

    Ranked set sampling (RSS) is a data collection technique that combines measurement with judgment ranking for statistical inference. This paper lays out a formal and natural Bayesian framework for RSS that is analogous to its frequentist justification, and that does not require the assumption of perfect ranking or use of any imperfect ranking models. Prior beliefs about the judgment order statistic distributions and their interdependence are embodied by a nonparametric prior distribution. Posterior inference is carried out by means of Markov chain Monte Carlo techniques, and yields estimators of the judgment order statistic distributions (and of functionals of those distributions). PMID:25326663
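
    The ranked set sampling mechanics underlying the paper (though not its Bayesian nonparametric inference) can be sketched as follows, assuming perfect judgment ranking and a hypothetical normal population:

```python
import numpy as np

rng = np.random.default_rng(4)

def ranked_set_sample(population_draw, set_size=3, cycles=10):
    """Collect an RSS sample: in each cycle, for r = 1..m draw m units,
    rank them (here: perfect ranking by the true value) and measure only
    the r-th ranked unit."""
    measured = []
    for _ in range(cycles):
        for r in range(set_size):
            units = population_draw(set_size)
            measured.append(np.sort(units)[r])   # keep the r-th order statistic
    return np.array(measured)

draw = lambda n: rng.normal(10.0, 2.0, size=n)   # hypothetical population
rss = ranked_set_sample(draw, set_size=3, cycles=200)
srs = draw(rss.size)                             # simple random sample of equal size

# The RSS mean is unbiased and typically has smaller variance than the SRS mean.
print(f"RSS mean {rss.mean():.3f}  vs  SRS mean {srs.mean():.3f}")
```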

  14. Scaling properties of excursions in heartbeat dynamics

    NASA Astrophysics Data System (ADS)

    Reyes-Ramírez, I.; Guzmán-Vargas, L.

    2010-02-01

    In this work we study the excursions, defined as the number of beats to return to a local mean value, in heartbeat interval time series from healthy subjects and patients with congestive heart failure (CHF). First, we apply the segmentation procedure proposed by Bernaola-Galván et al. (Phys. Rev. Lett., 87 (2001) 168105) to nonstationary heartbeat time series to identify stationary segments with a local mean value. Next, we identify local excursions around the local mean value and construct the distributions to analyze the time organization and memory in the excursion sequences from the whole time series. We find that the cumulative distributions of excursions are consistent with a stretched exponential function given by g(τ) ~ exp(-aτ^b), with a = 1.09 ± 0.15 (mean value ± SD) and b = 0.91 ± 0.11 for healthy subjects and a = 1.31 ± 0.23 and b = 0.77 ± 0.13 for CHF patients. The cumulative conditional probability G(τ|τ0) is considered to evaluate whether τ depends on a given interval τ0, that is, to evaluate the memory effect in excursion sequences. We find that the memory in excursion sequences under healthy conditions is characterized by the presence of clusters, related to the fact that large excursions are more likely to be followed by large ones, whereas for CHF data we do not observe this behavior. The presence of correlations in healthy data is confirmed by means of the detrended fluctuation analysis (DFA), while for CHF records the scaling exponent is characterized by a crossover, indicating that for short scales the sequences resemble uncorrelated noise.
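
    A simplified sketch of the excursion statistics described above, using a global rather than segment-wise local mean and synthetic RR intervals; the fitted a and b are therefore only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Synthetic RR-interval series (seconds); stands in for a stationary segment.
rr = 0.8 + 0.05 * rng.standard_normal(5000)

# Excursion lengths: numbers of consecutive beats on one side of the mean.
above = rr > rr.mean()
lengths, run = [], 1
for prev, cur in zip(above[:-1], above[1:]):
    if cur == prev:
        run += 1
    else:
        lengths.append(run)
        run = 1
lengths.append(run)
tau = np.sort(np.array(lengths, dtype=float))

# Empirical complementary CDF and a stretched-exponential fit g(tau)=exp(-a*tau**b).
ccdf = 1.0 - np.arange(1, tau.size + 1) / tau.size
g = lambda t, a, b: np.exp(-a * t**b)
mask = ccdf > 0                                   # drop the final zero point
(a, b), _ = curve_fit(g, tau[mask], ccdf[mask], p0=(1.0, 1.0))
print(f"a = {a:.2f}, b = {b:.2f}")
```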

  15. Gravitational Lens Modeling with Basis Sets

    NASA Astrophysics Data System (ADS)

    Birrer, Simon; Amara, Adam; Refregier, Alexandre

    2015-11-01

    We present a strong lensing modeling technique based on versatile basis sets for the lens and source planes. Our method uses high performance Monte Carlo algorithms, allows for an adaptive build up of complexity, and bridges the gap between parametric and pixel based reconstruction methods. We apply our method to a Hubble Space Telescope image of the strong lens system RX J1131-1231 and show that our method finds a reliable solution and is able to detect substructure in the lens and source planes simultaneously. Using mock data, we show that our method is sensitive to sub-clumps with masses four orders of magnitude smaller than the main lens, which corresponds to about 10^8 M⊙, without prior knowledge of the position and mass of the sub-clump. The modeling approach is flexible and maximizes automation to facilitate the analysis of the large number of strong lensing systems expected in upcoming wide field surveys. The resulting search for dark sub-clumps in these systems, without mass-to-light priors, offers promise for probing physics beyond the standard model in the dark matter sector.

  16. On the bound of first excursion probability

    NASA Technical Reports Server (NTRS)

    Yang, J. N.

    1969-01-01

    A method has been developed to improve the lower bound of the first excursion probability; it can be applied to problems with either constant or time-dependent barriers. The method requires knowledge of the joint density function of the random process at two arbitrary instants.
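
    The report itself gives no formulas here, but the quantity being bounded can be illustrated with a hedged Monte Carlo sketch: the probability that a discretized stationary Gauss-Markov process crosses a constant or time-dependent barrier within a time window. The AR(1) process and barrier shapes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)

def first_excursion_probability(barrier, n_steps=200, n_paths=20000, rho=0.9):
    """Monte Carlo estimate of P(max_t X_t > barrier(t)) for a discretized
    stationary AR(1) process with unit marginal variance."""
    x = rng.standard_normal(n_paths)               # stationary start
    alive = np.ones(n_paths, dtype=bool)           # paths that have not crossed
    sigma = np.sqrt(1.0 - rho**2)
    for t in range(n_steps):
        alive &= ~(x > barrier(t))                 # check crossing at step t
        x = rho * x + sigma * rng.standard_normal(n_paths)
    return 1.0 - alive.mean()

const_barrier = lambda t: 3.0                      # constant barrier
decaying_barrier = lambda t: 3.0 - 0.005 * t       # time-dependent barrier
print("constant barrier :", first_excursion_probability(const_barrier))
print("decaying barrier :", first_excursion_probability(decaying_barrier))
```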

  17. Multiple Palaeoproterozoic carbon burial episodes and excursions

    NASA Astrophysics Data System (ADS)

    Martin, A. P.; Prave, A. R.; Condon, D. J.; Lepland, A.; Fallick, A. E.; Romashkin, A. E.; Medvedev, P. V.; Rychanchik, D. V.

    2015-08-01

    Organic-rich rocks (averaging 2-5% total organic carbon) and positive carbonate-carbon isotope excursions (δ13C > +5‰ and locally much higher, i.e. the Lomagundi-Jatuli Event) are hallmark features of Palaeoproterozoic successions and are assumed to archive a global event of unique environmental conditions following the c. 2.3 Ga Great Oxidation Event. Here we combine new and published geochronology that shows that the main Palaeoproterozoic carbon burial episodes (CBEs) preserved in Russia, Gabon and Australia were temporally discrete depositional events between c. 2.10 and 1.85 Ga. In northwest Russia we can also show that timing of the termination of the Lomagundi-Jatuli Event may have differed by up to 50 Ma between localities, and that Ni mineralisation occurred at c. 1920 Ma. Further, CBEs have traits in common with Mesozoic Oceanic Anoxic Events (OAEs); both are exceptionally organic-rich relative to encasing strata, associated with contemporaneous igneous activity and marked by organic carbon isotope profiles that exhibit a stepped decrease followed by a stabilisation period and recovery. Although CBE strata are thicker and of greater duration than OAEs (100s of metres versus metres, ∼10^6 years versus ∼10^5 years), their shared characteristics hint at a commonality of cause(s) and feedbacks. This suggests that CBEs represent processes that can be either basin-specific or global in nature and a combination of circumstances that are not unique to the Palaeoproterozoic. Our findings urge circumspection and re-consideration of models that assume CBEs are a Deep Time singularity.

  18. High-resolution palaeomagnetic records of the Laschamp geomagnetic excursion from the Blake Ridge

    NASA Astrophysics Data System (ADS)

    Mac Niocaill, C.; Bourne, M. D.; Thomas, A. L.; Henderson, G. M.

    2013-05-01

    Geomagnetic excursions are brief (1000s of years) deviations in geomagnetic field behaviour from that expected during 'normal' secular variation. The Laschamp excursion (~41 ka) was a global deviation in geomagnetic field behaviour. Previously published records suggest rapid changes in field direction and a concurrent substantial decrease in field intensity. Accurate dating of excursions and determination of their durations from multiple locations is vital to our understanding of global field behaviour during these deviations. We present here high-resolution palaeomagnetic records of the Laschamp excursion obtained from two Ocean Drilling Program (ODP) sites, 1061 and 1062, on the Blake-Bahama Outer Ridge (ODP Leg 172). Relatively high sedimentation rates (~30-40 cm kyr-1) at these locations allow the determination of transitional field behaviour during the excursion. Despite their advantages, sedimentary records can be limited by the potential for unrecognized variations in sedimentation rates between widely spaced age-constrained boundaries. Rather than assuming a constant sedimentation rate between assigned age tie-points, we employ measurements of the concentration of 230Thxs in the sediment. 230Thxs is a constant flux proxy and may be used to assess variations in the sedimentation rates through the core sections of interest. Following this approach, we present a new age model for Site 1061 that allows us to better determine the temporal behaviour of the Laschamp excursion with greater accuracy and known uncertainty. Palaeomagnetic measurements of discrete samples from four cores reveal a single excursional feature, across an interval of 30 cm, associated with a broader palaeointensity low. The excursion is characterised by rapid transitions (less than 200 years) between a stable normal polarity and a partially-reversed polarity. Peaks in inclination on either side of the directional excursion indicate periods of time when the local field is dominated by vertical

  19. ODP Site 1063 (Bermuda Rise) revisited: Oxygen isotopes, excursions and paleointensity in the Brunhes Chron

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.; Hodell, D. A.; Curtis, J. H.

    2012-02-01

    An age model for the Brunhes Chron of Ocean Drilling Program (ODP) Site 1063 (Bermuda Rise) is constructed by tandem correlation of oxygen isotope and relative paleointensity data to calibrated reference templates. Four intervals in the Brunhes Chron where paleomagnetic inclinations are negative for both u-channel samples and discrete samples are correlated to the following magnetic excursions with Site 1063 ages in brackets: Laschamp (41 ka), Blake (116 ka), Iceland Basin (190 ka), Pringle Falls (239 ka). These ages are consistent with current age estimates for three of these excursions, but not for "Pringle Falls" which has an apparent age older than a recently published estimate by ˜28 kyr. For each of these excursions (termed Category 1 excursions), virtual geomagnetic poles (VGPs) reach high southerly latitudes implying paired polarity reversals of the Earth's main dipole field, that apparently occurred in a brief time span (<2 kyr in each case), several times shorter than the apparent duration of regular polarity transitions. In addition, several intervals of low paleomagnetic inclination (low and negative in one case) are observed both in u-channel and discrete samples at ˜318 ka (MIS 9), ˜412 ka (MIS 11) and in the 500-600 ka interval (MIS 14-15). These "Category 2" excursions may constitute inadequately recorded (Category 1) excursions, or high amplitude secular variation.

  20. Systematic Behavior of the Non-dipole Magnetic Field during the 32 ka Mono Lake Excursion

    NASA Astrophysics Data System (ADS)

    Negrini, R. M.; McCuan, D.; Cassata, W. S.; Channell, J. E.; Verosub, K. L.; Liddicoat, J. C.; Knott, J. R.; Coe, R. S.; Benson, L. V.; Sarna-Wojcicki, A.; Lund, S.; Horton, R.; Lopez, J.

    2012-12-01

    Paleomagnetic excursions are enigmatic phenomena that reveal geodynamo behavior in its transitional state and provide important refinements in age control for the late Pleistocene, a critical time period for the study of paleoclimate and human evolution. We report here on two widely separated, unusually detailed records of the Mono Lake excursion (MLE) from sedimentary sequences dated at 32 ka. One of the records is from Summer Lake, Oregon. The vector components of this new record faithfully reproduce the principal features of the MLE as recorded at the type localities around Mono Lake, CA, though with greater detail and higher amplitude. Radiocarbon dates on bulk organics in the Summer Lake record confirm the 32 ka age of the MLE. The other record is from the marine Irminger Basin off eastern Greenland and is based on the measurement of discrete samples rather than u-channels. The associated VGP paths of the two records strongly suggest systematic field behavior that includes three loci of nondipole flux whose relative dominance oscillates through time. The staggered sequence followed by the two paths through each flux locus further suggests that both the demise and return of the main field flood zonally during the excursion. The composite path is also compatible with the VGPs of a 32 ka set of lavas from New Zealand and, notably, it does not include VGPs associated with the 40 ka Laschamp excursion. This confirms that these two excursions are distinct events and, more specifically, shows that it is the 32 ka Mono Lake excursion that is recorded in the sediments surrounding Mono Lake rather than the ~40 ka Laschamp excursion.

  1. Isukasia area: Regional geological setting (includes excursion guide)

    NASA Technical Reports Server (NTRS)

    Nutman, A. P.; Rosing, M.

    1986-01-01

    A brief account of the geology of the Isukasia area is given and is biased toward the main theme of the itinerary for the area: what has been established about the protoliths of the early Archean rocks of the area - the Isua supracrustal belt and the Amitsoq gneisses? The area's long and complex tectonometamorphic history can be divided into episodes using a combination of dike chronology, isotopic, and petrological studies. The earliest dikes, the ca 3700 Ma Inaluk dikes, intrude the earliest (tonalitic) components of the Amitsoq gneisses but are themselves cut by the injection of the younger (granitic and pegmatitic) phases of the Amitsoq gneisses of the area. In areas of low late Archean deformation, strongly deformed early Archean mafic rocks have coarse-grained metamorphic segregations and are cut by virtually undeformed mid-Archean Tarssartoq (Ameralik) dikes devoid of metamorphic segregations. This shows that the area was affected by regional amphibolite facies metamorphism in the early Archean. Late Archean and Proterozoic metamorphic imprints are marked to very strong in the area. Much of the early Archean gneiss complex was already highly deformed when the mid-Archean Tarssartoq dikes were intruded.

  2. Geomagnetic excursions in the past 60 ka: Ephemeral secular variation features

    NASA Astrophysics Data System (ADS)

    Thouveny, N.; Creer, K. M.

    1992-05-01

    Geomagnetic excursions have been reported for the past 25 years in both sedimentary and igneous rocks of Brunhes age and from widespread geographic localities. They comprise sequences of paleomagnetic directions that are anomalous in that they depart widely from the range of geomagnetic north directions recorded through historic time; they have sometimes been interpreted as records of aborted reversals of polarity of the main geomagnetic dipole. The search for "excursions" sought to provide a set of stratigraphic markers. The case of the Laschamp "excursion," described in lava flows from the Chaîne des Puys (Massif Central, France), is analyzed here through a new sequential record of paleosecular variation recovered from sedimentary cores collected in Lac du Bouchet, a maar lake about 100 km from the Laschamp site. The absence of anomalous directions indicates that this excursion lasted for only a few centuries. This constitutes a warning to stratigraphers who attempt to use excursions as marker events, and it gives insight into the behavior of Earth's geodynamo on the scale of 10^2 to 10^3 yr.

  3. Scaling fixed-field alternating gradient accelerators with a small orbit excursion.

    PubMed

    Machida, Shinji

    2009-10-16

    A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%. PMID:19905700
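
    The claim that a larger field index k shrinks the orbit excursion follows from the scaling-FFAG field law B ∝ r^k, which implies r ∝ p^(1/(k+1)); a short numeric sketch with purely illustrative injection radius and momentum ratio (not the paper's lattice parameters):

```python
def orbit_excursion(r_inj_m, p_ratio, k):
    """Orbit excursion Delta r for a scaling FFAG: B ~ r**k implies
    r ~ p**(1/(k+1)), so r_ext = r_inj * p_ratio**(1/(k+1))."""
    return r_inj_m * (p_ratio ** (1.0 / (k + 1)) - 1.0)

# Illustrative numbers: 4 m injection radius, factor-of-three momentum gain,
# a conventional field index versus a much larger one.
for k in (5, 40):
    print(f"k = {k:2d}: excursion = {orbit_excursion(4.0, 3.0, k):.3f} m")
```

    The excursion shrinks roughly as 1/(k+1) for modest momentum ratios, which is the scaling behind the factor-of-several reduction quoted in the abstract.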

  4. Scaling Fixed-Field Alternating Gradient Accelerators with a Small Orbit Excursion

    SciTech Connect

    Machida, Shinji

    2009-10-16

    A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.

  5. Spinodal instabilities and super-Planckian excursions in natural inflation.

    PubMed

    Albrecht, Andreas; Holman, R; Richard, Benoit J

    2015-05-01

    Models such as Natural Inflation that use pseudo-Nambu-Goldstone bosons as the inflaton are attractive for many reasons. However, they typically require trans-Planckian field excursions ΔΦ > M_Pl, due to the need for an axion decay constant f > M_Pl to have both a sufficient number of e-folds and values of n_s and r consistent with data. Such excursions would in general require the addition of all other higher dimension operators consistent with symmetries, thus disrupting the required flatness of the potential and rendering the theory nonpredictive. We show that in the case of Natural Inflation, the existence of spinodal instabilities (modes with tachyonic masses) can modify the inflaton equations of motion to the point that versions of the model with f

  6. Undulator Changes Due To Temperature Excursions

    SciTech Connect

    Wolf, Zachary; Levashov, Yurii; Reese, Ed; /SLAC

    2010-11-17

    The temperature of the LCLS undulators has not been controlled during storage. The effects of the temperature excursions are documented in this note. After a number of LCLS undulators were tuned, fiducialized, and placed in storage anticipating their use, a test was made to ensure that their properties had not changed. The test revealed, however, that indeed the undulators had changed. Detailed study of this problem followed. We now believe that the gap of the undulators changes permanently when the undulators go through temperature excursions. We have tested the other possible cause, transportation, and do not see gap changes. In this note, we document how the undulators have changed since they were originally tuned. The undulators were tuned and fiducialized in the Magnetic Measurement Facility (MMF). Afterward, many of them (approximately 18) were taken to building 750 for storage during summer and fall 2007. Building 750 had no temperature control. The undulator temperatures went from 20 C, used for tuning, down to approximately 11 C during the winter. In January 2008, three of the undulators were brought back to the MMF for a check. All three undulators showed similar changes. Trajectories, phases, and most undulator properties stayed the same, but the fiducialization (beam axis position relative to tooling balls on the undulator) had changed. Further investigation showed that the undulator gap was altered in a periodic way along the magnetic axis with a net average gap change causing the fiducialization change. A new storage location in building 33 was found and future undulators were placed there. A failure in the temperature control, however, caused the undulators to get too hot. Again the gap changed, but with a different periodic pattern. This note documents the measured changes in the undulators. In particular, it shows the detailed history of undulator 39 which went through both negative and positive temperature excursions.

  7. Data Mining Using Extensions of the Rough Set Model.

    ERIC Educational Resources Information Center

    Lingras, P. J.; Yao, Y. Y.

    1998-01-01

    Examines basic issues of data mining using the theory of rough sets, a recent proposal for generalizing classical set theory. Demonstrates that a generalized rough set model can be used for generating rules from incomplete databases. Discusses the importance of rule extraction from incomplete databases in data mining. (AEF)

  8. A Comparison of two Brunhes Chron Geomagnetic Excursions Recorded by Neighbouring North Atlantic Sites (ODP Sites 1062 and 1063)

    NASA Astrophysics Data System (ADS)

    Bourne, M.; Mac Niocaill, C.; Knudsen, M. F.; Thomas, A. L.; Henderson, G. M.

    2012-04-01

    A full picture of geomagnetic field behaviour during the Blake excursion is currently limited by a paucity of robust, high-resolution records of this ambiguous event. Some records seem to point towards a 'double-excursion' character whilst others fail to record the Blake excursion at all. We present here a high-resolution record of the Blake excursion obtained from Ocean Drilling Program (ODP) Site 1062 on the Blake Outer Ridge (ODP Leg 172). Palaeomagnetic measurements in three cores reveal a single excursional feature associated with a broad palaeointensity low, characterised by rapid transitions (less than 500 years) between a stable normal polarity and a fully-reversed, pseudo-stable polarity. A relatively high sedimentation rate (~10 cm kyr-1) allows the determination of transitional field behaviour during the excursion. Rather than assuming a constant sedimentation rate between assigned age tie-points, we employ measurements of 230Thxs concentrations in the sediment to assess variations in the sedimentation rates through the core sections of interest. This allows us to determine an age and duration for the two excursions with greater accuracy and known uncertainty. Our new age model gives an age of 127 ka for the midpoint of the Blake event at Site 1062. The age model also gives a duration for the directional excursion of 7.1±1.6 kyr. This duration is similar to that previously reported for the Iceland Basin Excursion (~185 ka) from the nearby Bermuda Rise (ODP Site 1063), which recorded a ~7-8 kyr event. Similarly, a high sedimentation rate (10-15 cm kyr-1) at this site allows a high-resolution reconstruction of the geomagnetic field behaviour during the Iceland Basin Excursion. The Site 1063 palaeomagnetic record suggests more complicated behaviour than that of the Blake excursion at Site 1062. Instead, transitional VGP paths are characterised by stop-and-go behaviour between VGP clusters that may be related to long-standing thermo-dynamic features of the

  9. The IIASA set of energy models: Its design and application

    NASA Astrophysics Data System (ADS)

    Basile, P. S.; Agnew, M.; Holzl, A.; Kononov, Y.; Papin, A.; Rogner, H. H.; Schrattenholzer, L.

    1980-12-01

    The models studied include an accounting framework type energy demand model, a dynamic linear programming energy supply and conversion system model, an input-output model, a macroeconomic model, and an oil trade gaming model. They are incorporated in an integrated set for long-term, global analyses. This set makes use of a highly iterative process for energy scenario projections and analyses. Each model is quite simple and straightforward in structure; a great deal of human judgement is necessary in applying the set. The models are applied to study two alternative energy scenarios for a coming fifty year period. Examples are presented revealing the wealth of information that can be obtained from multimodel techniques. Details are given for several models (equations employed, assumptions made, data used).

  10. Carbon Isotopic Excursions Associated with the Mid-Pleistocene Transition and the Mid-Brunhes Transition

    NASA Astrophysics Data System (ADS)

    Barth, A. M.; Bill, N. S.; Clark, P. U.; Pisias, N. G.

    2015-12-01

    During the last 2 Myr, the climate system experienced two major transitions in variability: the mid-Pleistocene Transition (MPT), which represents a shift from dominant low-amplitude 41-kyr frequencies to dominant high-amplitude 100-kyr frequencies, and the mid-Brunhes Transition (MBT), which represents an increase in the amplitude of the 100-kyr frequency. While the MPT and MBT are typically identified in the benthic marine δ18O stack, their expression in other components of the climate system is less clear. Pleistocene δ13C records have been used to characterize climate and ocean circulation changes in response to orbital forcing, but these studies have used either a limited number of records or stacked data sets, which have the potential to bias the variability from the large number of young records. Here we present those existing δ13C data sets (n=18) that completely span these transitions. We use empirical orthogonal functions (EOFs) on these continuous data sets rather than stacking, allowing the determination of the dominant modes of variability and characterization of the time-frequency variation during the last 2 Myr. Our results identify two substantial carbon isotopic excursions. The first is a pronounced negative excursion during the MPT (~900 ka, MIS 23) that stands out as the strongest minimum in the last 2 Myr (previously identified from five records by Raymo et al., 1997). Corresponding ɛNd data from the South Atlantic suggest a strong weakening of the Atlantic meridional overturning circulation through the MIS 23 interglacial associated with this excursion. The second is a robust positive excursion ~530 ka (MIS 13), prior to the MBT (MIS 11), which stands out as the strongest maximum in the last 2 Myr. Possible causes of these excursions will be discussed.
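
    The EOF step can be sketched as an SVD of the record-anomaly matrix on a common age axis; the 18 "records" below are synthetic stand-ins, not the compiled δ13C data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-ins for 18 benthic d13C records on a common age axis (kyr).
ages = np.arange(0, 2000, 5.0)                            # 0-2 Ma at 5 kyr spacing
shared = 0.3 * np.sin(2 * np.pi * ages / 100.0)           # common 100-kyr signal
records = np.stack([shared + 0.1 * rng.standard_normal(ages.size)
                    for _ in range(18)], axis=1)          # shape (time, record)

# EOF analysis: remove each record's mean, then SVD of the anomaly matrix.
anom = records - records.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)
pc1 = u[:, 0] * s[0]          # leading principal component (time series)
eof1 = vt[0]                  # its loading on each record

print(f"EOF1 explains {100 * explained[0]:.1f}% of the variance")
# Excursions such as the MIS 23 minimum would appear as extrema in pc1.
```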

  11. A New High-Resolution Record of the Blake Geomagnetic Excursion from ODP Site 1062

    NASA Astrophysics Data System (ADS)

    Bourne, M. D.; Mac Niocaill, C.; Henderson, G. M.; Thomas, A. L.; Faurschou Knudsen, M.

    2010-12-01

    We present a high resolution record of the Blake geomagnetic excursion from Ocean Drilling Program (ODP) Site 1062 on the Blake-Bahama Outer Ridge. The excursion is recorded in three separate cores, with the high sedimentation rate (~10 cm/ka) at this location allowing the determination of transitional field behaviour during the excursion. A complex geometry is observed for the excursional geomagnetic field at the site. The directional records show an initial deviation from the expected directions across an interval of 1 m that achieves a completely reversed state, and then returns to normal polarity. A second, although less well-defined, short-lived phase of anomalous directions is observed immediately following the first event in two of the three cores. Measurements of the magnetic susceptibility show little variation through the core indicating that the concentration and grain size of the remanence carriers remain relatively constant during the studied interval. Measurements of the S-Ratio and remanence coercivity also remain constant through the sections of interest, and indicate magnetite to be the primary remanence carrier. The relatively homogeneous sediment enables the determination of two relative palaeointensity proxies by normalizing natural remanent magnetization measurements using artificially induced magnetizations (anhysteretic remanence, ARM and isothermal remanence, IRM). These records are consistent between all three cores. The relative palaeointensity proxies suggest that the Earth's magnetic field decreased substantially in intensity several tens of kyr prior to the initial event, before reaching an intensity minimum coinciding with the directional excursion maximum. A second palaeointensity minimum is also observed after the excursional event with no associated directional change. These features are consistent with global palaeointensity stacks. Our age model uses a new oxygen isotope stratigraphy. However, rather than assuming a constant
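
    The normalization used for the relative paleointensity proxies amounts to dividing NRM by ARM (or IRM) measured at a common demagnetization level; a sketch on synthetic down-core values (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical down-core measurements at a common AF demagnetization step.
depth_m = np.arange(0.0, 5.0, 0.05)
arm = 50 + 5 * rng.standard_normal(depth_m.size)              # normalizer (mA/m)
field = 1.0 - 0.6 * np.exp(-((depth_m - 2.5) ** 2) / 0.1)     # intensity low mid-core
nrm = field * arm * 0.4 + rng.normal(0, 0.5, depth_m.size)    # recorded NRM (mA/m)

# Relative paleointensity proxy: NRM normalized by ARM (IRM works the same way).
rpi = nrm / arm
print("minimum RPI at depth", depth_m[np.argmin(rpi)], "m")
```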

  12. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  13. Gaussian predictive process models for large spatial data sets

    PubMed Central

    Banerjee, Sudipto; Gelfand, Alan E.; Finley, Andrew O.; Sang, Huiyan

    2009-01-01

    Summary With scientific data available at geocoded locations, investigators are increasingly turning to spatial process models for carrying out statistical inference. Over the last decade, hierarchical models implemented through Markov chain Monte Carlo methods have become especially popular for spatial modelling, given their flexibility and power to fit models that would be infeasible with classical methods as well as their avoidance of possibly inappropriate asymptotics. However, fitting hierarchical spatial models often involves expensive matrix decompositions whose computational complexity increases in cubic order with the number of spatial locations, rendering such models infeasible for large spatial data sets. This computational burden is exacerbated in multivariate settings with several spatially dependent response variables. It is also aggravated when data are collected at frequent time points and spatiotemporal process models are used. With regard to this challenge, our contribution is to work with what we call predictive process models for spatial and spatiotemporal data. Every spatial (or spatiotemporal) process induces a predictive process model (in fact, arbitrarily many of them). The latter models project process realizations of the former to a lower dimensional subspace, thereby reducing the computational burden. Hence, we achieve the flexibility to accommodate non-stationary, non-Gaussian, possibly multivariate, possibly spatiotemporal processes in the context of large data sets. We discuss attractive theoretical properties of these predictive processes. We also provide a computational template encompassing these diverse settings. Finally, we illustrate the approach with simulated and real data sets. PMID:19750209
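
    The knot-based idea can be sketched directly: the full covariance C(s, s') is replaced by the low-rank projection C(s, S*) C(S*, S*)^{-1} C(S*, s') over a small knot set S*. The exponential kernel, knot count and jitter below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(9)

def exp_cov(a, b, sigma2=1.0, phi=5.0):
    """Exponential covariance between location sets a (n,2) and b (m,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return sigma2 * np.exp(-phi * d)

# n observed locations, m << n knots.
n, m = 2000, 64
locs = rng.uniform(0, 1, size=(n, 2))
knots = rng.uniform(0, 1, size=(m, 2))

C_sk = exp_cov(locs, knots)                         # n x m cross-covariance
C_kk = exp_cov(knots, knots)                        # m x m knot covariance
C_kk_inv = np.linalg.inv(C_kk + 1e-8 * np.eye(m))   # jitter for stability

# Predictive-process (low-rank) covariance: C_sk C_kk^{-1} C_ks has rank <= m,
# so likelihood evaluations need O(n m^2) work instead of O(n^3).
C_pp = C_sk @ C_kk_inv @ C_sk.T

# The predictive process underestimates the marginal variance (diag < sigma^2);
# that deficit is what the "modified" predictive process adds back.
print("mean variance deficit:", np.mean(1.0 - np.diag(C_pp)))
```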

  14. Fuzzy Partition Models for Fitting a Set of Partitions.

    ERIC Educational Resources Information Center

    Gordon, A. D.; Vichi, M.

    2001-01-01

    Describes methods for fitting a fuzzy consensus partition to a set of partitions of the same set of objects. Describes and illustrates three models defining median partitions and compares these methods to an alternative approach to obtaining a consensus fuzzy partition. Discusses interesting differences in the results. (SLD)

  15. Fully Characterizing Axially Symmetric Szekeres Models with Three Data Sets

    NASA Astrophysics Data System (ADS)

    Célérier, Marie-Noëlle; Mishra, Priti; Singh, Tejinder P.

    2015-01-01

    Inhomogeneous exact solutions of General Relativity with zero cosmological constant have been used in the literature to challenge the ΛCDM model. From one patch Lemaître-Tolman-Bondi (LTB) models to axially symmetric quasi-spherical Szekeres (QSS) Swiss-cheese models, some of them are able to reproduce to a good accuracy the cosmological data. It has been shown in the literature that a zero Λ LTB model with a central observer can be fully determined by two data sets. We demonstrate that an axially symmetric zero Λ QSS model with an observer located at the origin can be fully reconstructed from three data sets, number counts, luminosity distance and redshift drift. This is a first step towards a future demonstration involving five data sets and the most general Szekeres model.

  16. A Model Evaluation Data Set for the Tropical ARM Sites

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    This data set has been derived from various ARM and external data sources with the main aim of providing modelers easy access to quality controlled data for model evaluation. The data set contains highly aggregated (in time) data from a number of sources at the tropical ARM sites at Manus and Nauru. It spans the years of 1999 and 2000. The data set contains information on downward surface radiation; surface meteorology, including precipitation; atmospheric water vapor and cloud liquid water content; hydrometeor cover as a function of height; and cloud cover, cloud optical thickness and cloud top pressure information provided by the International Satellite Cloud Climatology Project (ISCCP).

  17. AVO for one- and two-fracture set models

    USGS Publications Warehouse

    Chen, H.; Brown, R.L.; Castagna, J.P.

    2005-01-01

    A theoretical comparison is made of PP and PS angle-dependent reflection coefficients at the top of two fractured-reservoir models using exact, general, anisotropic reflection coefficients. The two vertical-fracture models are taken to have the same total crack density. The primary issue investigated is determination of the fracture orientation using azimuthal AVO analysis. The first model represents a single-fracture set and the second model has an additional fracture set oblique to the first set at an angle of 60°. As expected, the PP-wave anisotropy is reduced when multiple fracture sets are present, making the determination of orientation more difficult than for the case of a single-fracture set. Long offsets are required for identification of dominant fracture orientations using PP-wave AVO. PS-wave AVO, however, is quite sensitive to fracture orientations, even at short offsets. For multiple-fracture sets, PS signals can potentially be used to determine orientations of the individual sets. © 2005 Society of Exploration Geophysicists. All rights reserved.

  18. An intelligent diagnosis model based on rough set theory

    NASA Astrophysics Data System (ADS)

    Li, Ze; Huang, Hong-Xing; Zheng, Ye-Lu; Wang, Zhou-Yuan

    2013-03-01

    Along with the popularity of computers and the rapid development of information technology, increasing the accuracy of agricultural diagnosis has become a difficult problem in popularizing agricultural expert systems. Analyzing existing research, and building on the knowledge acquisition techniques of rough set theory for large sample data, we put forward an intelligent diagnosis model. A rough set decision table is extracted from the sample attributes, the decision table is used to categorize the inference relations, attribute rules related to the diagnosis are acquired, and a rough set knowledge reasoning algorithm is applied to realize intelligent diagnosis. Finally, we validate this diagnosis model by experiments. Introducing rough set theory in this way provides an effective diagnosis model for agricultural expert systems dealing with large sample data.

  19. Computerized reduction of elementary reaction sets for combustion modeling

    NASA Technical Reports Server (NTRS)

    Wikstrom, Carl V.

    1991-01-01

    If the entire set of elementary reactions is to be solved in the modeling of chemistry in computational fluid dynamics, a set of stiff ordinary differential equations must be integrated. Some of the reactions take place at very high rates, requiring short time steps, while others take place more slowly and make little progress in the short time step integration. The goal is to develop a procedure to automatically obtain sets of finite rate equations, consistent with partial equilibrium assumptions, from an elementary set appropriate to local conditions. The possibility of computerized reaction reduction was demonstrated. However, the ability to use the reduced reaction set depends on the ability of the CFD approach to incorporate partial equilibrium calculations into the computer code. Therefore, the results should be tested on a code with partial equilibrium capability.
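
    The stiffness issue can be illustrated with the classic Robertson kinetics problem (a stand-in for a full elementary reaction set, not the report's chemistry) integrated with an implicit solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Robertson kinetics: a standard stiff test problem whose rate constants
# span many orders of magnitude, like a full elementary reaction set.
def robertson(t, y):
    y1, y2, y3 = y
    k1, k2, k3 = 0.04, 3.0e7, 1.0e4
    dy1 = -k1 * y1 + k3 * y2 * y3
    dy2 = k1 * y1 - k3 * y2 * y3 - k2 * y2**2
    dy3 = k2 * y2**2
    return [dy1, dy2, dy3]

y0 = [1.0, 0.0, 0.0]
# An implicit, stiff-capable method (BDF) takes large steps where the fast
# reactions sit in partial equilibrium; an explicit solver would need tiny steps.
sol = solve_ivp(robertson, (0.0, 1.0e5), y0, method="BDF", rtol=1e-6, atol=1e-10)
print("steps taken:", sol.t.size, " final state:", sol.y[:, -1])
```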

  20. Controllable set analysis for planetary landing under model uncertainties

    NASA Astrophysics Data System (ADS)

    Long, Jiateng; Gao, Ai; Cui, Pingyuan

    2015-07-01

    Controllable set analysis is a beneficial method in planetary landing mission design: by selecting feasible entry states it helps achieve landing accuracy while satisfying entry path constraints. In view of the severe impact of model uncertainties on planetary landing safety and accuracy, the purpose of this paper is to investigate the controllable set under uncertainties between the on-board model and the real situation. Controllable set analysis under model uncertainties is composed of controllable union set (CUS) analysis and controllable intersection set (CIS) analysis. Definitions of CUS and CIS are given, and a computational method for them based on the Gauss pseudospectral method is presented. Their application to entry state distribution analysis under uncertainties, and to the robustness of nominal entry state selection to uncertainties, is illustrated for Mars entry cases with ballistic coefficient, lift-to-drag ratio and atmospheric uncertainty. With analysis of CUS and CIS, the robustness of entry state selection and entry trajectory to model uncertainties can be guaranteed, thus enhancing the safety, reliability and accuracy of planetary entry and landing under model uncertainties.

  1. ODP Site 1063 (Bermuda Rise) revisited: Oxygen isotopes, excursions and paleointensity in the Brunhes Chron

    NASA Astrophysics Data System (ADS)

    Channell, J. E.; Hodell, D. A.

    2011-12-01

    An age model for the Brunhes Chron for Ocean Drilling Program (ODP) Site 1063 (Bermuda Rise) is based on the tandem correlation of oxygen isotope and relative paleointensity data to calibrated reference templates. Four intervals in the Brunhes Chron where component inclinations are negative, for both u-channel samples and discrete samples, are correlated to the following magnetic excursions with Site 1063 ages in brackets: Laschamp (41 ka), Blake (116 ka), Iceland Basin (190 ka), Pringle Falls (239 ka). These ages are consistent with current age estimates for these excursions, other than for "Pringle Falls", which has an apparent age older than current estimates by ~20-30 kyr. For each of these excursions, virtual geomagnetic poles (VGPs) reach high southerly latitudes, implying paired polarity reversals in a brief time span (<2 kyr in each case) that is several times shorter than the observed duration of long-lived polarity transitions at mid-latitudes. Several intervals of low component inclination, negative in one case, are observed both in u-channel and discrete samples at ~318 ka (MIS 9), ~413 ka (MIS 11) and in the 500-600 ka interval (MIS 14-15). These may constitute inadequately recorded excursions, or high-amplitude secular variation.

  2. An experimental methodology for a fuzzy set preference model

    NASA Technical Reports Server (NTRS)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    A flexible fuzzy set preference model first requires approximate methodologies for implementation. Fuzzy sets must be defined for each individual consumer using computer software, requiring a minimum of time and expertise on the part of the consumer. The amount of information needed in defining sets must also be established. The model itself must adapt fully to the subject's choice of attributes (vague or precise), attribute levels, and importance weights. The resulting individual-level model should be fully adapted to each consumer. The methodologies needed to develop this model will be equally useful in a new generation of intelligent systems which interact with ordinary consumers, controlling electronic devices through fuzzy expert systems or making recommendations based on a variety of inputs. The power of personal computers and their acceptance by consumers have yet to be fully utilized to create interactive knowledge systems that fully adapt their function to the user. Understanding individual consumer preferences is critical to the design of new products and the estimation of demand (market share) for existing products, which in turn is an input to management systems concerned with production and distribution. The question of what to make, for whom to make it and how much to make requires an understanding of the customer's preferences and the trade-offs that exist between alternatives. Conjoint analysis is a widely used methodology which decomposes an overall preference for an object into a combination of preferences for its constituent parts (attributes such as taste and price), which are combined using an appropriate combination function. Preferences are often expressed using linguistic terms which cannot be represented in conjoint models. Current models are also not implemented at an individual level, making it difficult to reach meaningful conclusions about the cause of an individual's behavior from an aggregate model. The combination of complex aggregate

  3. Setting up virgin stress conditions in discrete element models

    PubMed Central

    Rojek, J.; Karlis, G.F.; Malinowski, L.J.; Beer, G.

    2013-01-01

    In the present work, a methodology for setting up virgin stress conditions in discrete element models is proposed. The developed algorithm is applicable to discrete or coupled discrete/continuum modeling of underground excavation employing the discrete element method (DEM). Since the DEM works with contact forces rather than stresses there is a need for the conversion of pre-excavation stresses to contact forces for the DEM model. Different possibilities of setting up virgin stress conditions in the DEM model are reviewed and critically assessed. Finally, a new method to obtain a discrete element model with contact forces equivalent to given macroscopic virgin stresses is proposed. The test examples presented show that good results may be obtained regardless of the shape of the DEM domain. PMID:27087731

  4. Setting conservation management thresholds using a novel participatory modeling approach.

    PubMed

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. PMID:26040608
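    The weighted additive aggregation of consequence estimates used in the workshop can be sketched in a few lines; the objective weights, management alternatives and scores below are invented for illustration and are not the participants' elicited values.

```python
# Minimal sketch of a weighted additive model for scoring management
# alternatives (illustrative weights and consequence estimates only).
import numpy as np

objectives = ["ecological condition", "social acceptability", "cost"]
weights = np.array([0.5, 0.3, 0.2])          # must sum to 1

# consequence estimates (0-100) for each alternative under one scenario;
# rows = management alternatives, columns = objectives
consequences = np.array([
    [40, 80, 90],   # no intervention
    [60, 70, 70],   # signage and education
    [80, 50, 40],   # restrict visitor access
])

decision_scores = consequences @ weights
for name, score in zip(["no action", "education", "restrict access"],
                       decision_scores):
    print(f"{name}: {score:.1f}")
```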

  5. The Mono Lake Excursion Recorded in Phonolitic Lavas From Tenerife (Canary Islands): Paleomagnetic Analyses and Coupled K/Ar and Ar/Ar Dating.

    NASA Astrophysics Data System (ADS)

    Kissel, C.; Guillou, H.; Laj, C. E.; Carracedo, J. C.; Nomade, S.; Perez-Torrado, F.; Wandres, C.

    2011-12-01

    Studies of geomagnetic excursions are important for the knowledge of the geodynamo, and also because they may be used as precise time markers in various geological records (sediments, lavas and ice via their impact on the production of cosmogenic isotopes). In volcanic rocks, the identification of excursions is very challenging given the sporadic nature of volcanic eruptions. However, it is a critical step because it allows absolute paleointensity determinations to be obtained, coupled with absolute dating methods. We present here a coupled paleomagnetic/dating investigation conducted on three different lava flows from the island of Tenerife (Canary Islands; Spain) erupted during the Mono Lake excursion (MLE). Paleomagnetic analyses consist of zero-field demagnetizations (AF and/or thermal) and of Thellier and Thellier experiments using the PICRIT-03 set of criteria to select reliable intensity determinations. For dating, the unspiked K-Ar and the 40Ar/39Ar methods were coupled at LSCE for two of the flows, and the third flow, with lower content in radiogenic 40Ar (40Ar*), was dated only using the unspiked K-Ar method. One of the flows is characterized by a direction largely deviated from the one expected from an axial geocentric dipole (GAD) field. Its paleointensity value is very low (7.8 μT). The two other sites are characterized by inclinations slightly shallower than the GAD value and by low intensity values (about 12 and 21 μT; present value: 38 μT). The three K/Ar ages combined with two 40Ar/39Ar ages range from 32.0 to 33.2 ka and they are not statistically distinguishable from one another. It therefore appears that these lavas have recorded the MLE (the only excursion in this time interval) confirming its brief duration (shorter than the minimum age uncertainties available). The mean age is younger but, within the uncertainties and depending on the age of the standard we use, consistent with the age of the 10Be peak and of the marine intensity low when

  6. The Record of Geomagnetic Excursions from a ~150 m Sediment Core: Clear Lake, Northern California

    NASA Astrophysics Data System (ADS)

    Levin, E.; Byrne, R.; Looy, C. V.; Wahl, D.; Noren, A. J.; Verosub, K. L.

    2015-12-01

    We are studying the paleomagnetic properties of a new ~150 meter drill core from Clear Lake, CA. Step-wise demagnetization of the natural remanent magnetism (NRM) yields stable directions after 20 mT, implying that the sediments are reliable recorders of geomagnetic field behavior. Several intervals of low relative paleointensity (RPI) from the core appear to be correlated with known geomagnetic excursions. At about 46 m depth, and ~33 ka according to an age model based on radiocarbon dates obtained from pollen and the Olema ash bed, a low RPI zone seems to agree with the age and duration of the Mono Lake Excursion, previously identified between 32 and 35 ka. Slightly lower in the core, at about 50 m depth and ~40 ka, noticeably low RPI values seem to be coeval with the Laschamp excursion, which has been dated at ~41 ka. A volcanic ash near the bottom of the core (141 mblf) is near the same depth as an ash identified in 1988 by Andrei Sarna-Wojcicki and others as the Loleta ash bed in a previous Clear Lake core. If the basal ash in the new core is indeed the Loleta ash bed, then the core may date back to about 270-300 ka. Depending on the age of the lowest ash, a sequence of low RPI intervals could correlate with the Blake (120 ka), Iceland Basin (188 ka), Jamaica/Pringle Falls (211 ka), and CR0 (260 ka) excursions. Correlation of the low RPI intervals to these geomagnetic excursions will help in the development of a higher resolution chronostratigraphy for the core, resolve a long-standing controversy about a possible hiatus in the Clear Lake record, and provide information about climatically-driven changes in sedimentation.

  7. DIFFEOMORPHIC POINT SET REGISTRATION USING NON-STATIONARY MIXTURE MODELS

    PubMed Central

    Wassermann, D.; Ross, J.; Washko, G.; Westin, C-F; Estépar, R. San José

    2013-01-01

    This paper investigates a diffeomorphic point-set registration based on non-stationary mixture models. The goal is to improve the non-linear registration of anatomical structures by representing each point as a general non-stationary kernel that provides information about the shape of that point. Our framework generalizes work done by others that use stationary models. We achieve this by integrating the shape at each point when calculating the point-set similarity and transforming it according to the calculated deformation. We also restrict the non-rigid transform to the space of symmetric diffeomorphisms. Our algorithm is validated in synthetic and human datasets in two different applications: fiber bundle and lung airways registration. Our results show that non-stationary mixture models are superior to Gaussian mixture models and methods that do not take into account the shape of each point. PMID:24419463

  8. Geomagnetic excursions date early hominid migration to China

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-09-01

    Global-scale geomagnetic reversals, which are periods when the direction of Earth's magnetic field flips, leave imprints in magnetic minerals present in sediments. But so do smaller-scale, even local, changes in Earth's magnetic field direction. Paleomagnetists believe that the smaller-scale events represent “failed reversals” and refer to them as “geomagnetic excursions.” Scientists use geomagnetic excursions in sedimentary basins as markers to tie together events of Earth's history across the globe.

  9. Using Set Model for Learning Addition of Integers

    ERIC Educational Resources Information Center

    Lestari, Umi Puji; Putri, Ratu Ilma Indra; Hartono, Yusuf

    2015-01-01

    This study aims to investigate how the set model can help students' understanding of addition of integers in fourth grade. The study was carried out with 23 students and a teacher of IVC SD Iba Palembang in January 2015. This study is a design research that also promotes PMRI as the underlying design context and activity. Results showed that the…

  10. Instruction manual model 600F, data transmission test set

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Information necessary for the operation and maintenance of the Model 600F Data Transmission Test Set is presented. A description is contained of the physical and functional characteristics; pertinent installation data; instructions for operating the equipment; general and detailed principles of operation; preventive and corrective maintenance procedures; and block, logic, and component layout diagrams of the equipment and its major component assemblies.

  11. A Novel Multipurpose Model Set for Teaching General Chemistry.

    ERIC Educational Resources Information Center

    Gupta, H. O.; Parkash, Brahm

    1999-01-01

    Reports on a low-cost and unique molecular model set capable of generating a large number of structures for teaching and learning general chemistry. An important component of the kit is an 11-hole ball that gives tetrahedral, octahedral, trigonal, trigonal bipyramidal, and square planar symmetries. (WRM)

  12. The Blake geomagnetic excursion recorded in a radiometrically dated speleothem

    NASA Astrophysics Data System (ADS)

    Osete, María-Luisa; Martín-Chivelet, Javier; Rossi, Carlos; Edwards, R. Lawrence; Egli, Ramon; Muñoz-García, M. Belén; Wang, Xianfeng; Pavón-Carrasco, F. Javier; Heller, Friedrich

    2012-11-01

    One of the most important developments in geomagnetism has been the recognition of polarity excursions of the Earth's magnetic field. Accurate timing of the excursions is a key point for understanding the geodynamo process and for magnetostratigraphic correlation. One of the best-known excursions is the Blake geomagnetic episode, which occurred during marine isotope stage MIS 5, but its morphology and age remain controversial. Here we show, for the first time, the Blake excursion recorded in a stalagmite which was dated using the uranium-series disequilibrium techniques. The characteristic remanent magnetisation is carried by fine-grained magnetite. The event is documented by two reversed intervals (B1 and B2). The age of the event is estimated to be between 116.5±0.7 kyr BP and 112.0±1.9 kyr BP, slightly younger (∼3-4 kyr) than recent estimations from sedimentary records dated by astronomical tuning. Low values of relative palaeointensity during the Blake episode are estimated, but a relative maximum in the palaeofield intensity coeval with the complete reversal during the B2 interval was observed. Duration of the Blake geomagnetic excursion is 4.5 kyr, two times lower than single excursions and slightly higher than the estimated diffusion time for the inner core (∼3 kyr).

  13. A fuzzy set preference model for market share analysis

    NASA Technical Reports Server (NTRS)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share
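    A minimal sketch of the individual-level fuzzy preference idea follows, assuming triangular membership functions for the linguistic terms and a simple linear combination rule; the attributes, membership shapes and weights are illustrative, not the article's estimated model.

```python
# Minimal sketch of a fuzzy individual-level preference model: linguistic
# attribute ratings are mapped to [0, 1] memberships and combined linearly.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# degree to which "price" and "taste" (rated on a 0-10 ordinal scale) satisfy
# the consumer's linguistic terms "cheap" and "tasty"
def preference(price_rating, taste_rating, w_price=0.6, w_taste=0.4):
    mu_cheap = triangular(price_rating, 0, 2, 6)
    mu_tasty = triangular(taste_rating, 4, 8, 10)
    return w_price * mu_cheap + w_taste * mu_tasty   # linear combination rule

print(preference(price_rating=3, taste_rating=7))
```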

  14. A New High-Resolution Record of the Blake Geomagnetic Excursion from ODP Site 1062

    NASA Astrophysics Data System (ADS)

    Bourne, Mark; Mac Niocaill, Conall; Henderson, Gideon; Thomas, Alex; Knudsen, Mads

    2010-05-01

    We present a high resolution record of the Blake geomagnetic excursion from Ocean Drilling Program (ODP) Site 1062 on the Blake-Bahama Outer Ridge. The excursion is recorded in three separate cores, with the high average sedimentation rate (10 cm ka-1) at this location allowing the determination of transitional field behaviour during the excursion. A complex geometry is observed for the excursional geomagnetic field. The directional records show an initial deviation from the expected directions across an interval of 1 m that achieves a completely reversed state, and then returns to normal polarity. A second, although less well-defined, short-lived phase of anomalous directions is observed immediately following the first event in two of the three cores. Measurements of the magnetic susceptibility show little variation through the core indicating that the concentration and grain size of the remanence carriers remains relatively constant throughout the studied interval. Measurements of the S-Ratio and remanence coercivity also remain constant through the core sections of interest, and indicate magnetite to be the primary remanence carrier. The relatively homogeneous sediment enables the determination of two relative palaeointensity proxies by normalizing natural remanent magnetization measurements using artificially induced magnetizations (anhysteretic remanence, ARM and isothermal remanence, IRM). These records are consistent between all three cores. The relative palaeointensity proxies suggest that the Earth's magnetic field decreased substantially in intensity up to 70 ka prior to the initial event, before reaching an intensity minimum coinciding with the directional excursion maximum. A second palaeointensity minimum is also observed after the excursional event with no associated directional change. These features are consistent with global palaeointensity stacks. A preliminary age model based on an oxygen isotope stratigraphy, and an average sedimentation rate

  15. A compressed marine data set for geomagnetic field modeling

    NASA Technical Reports Server (NTRS)

    Langel, R. A.; Baldwin, R. T.; Ridgway, J. R.; Davis, W. Minor

    1990-01-01

    Some 13 million scalar magnetic field data points that have been collected from the world's ocean areas reside in the collection of the National Geophysical Data Center. In order to derive a suitable data set for modeling the geomagnetic field of the earth, each ship track is divided into 220 km segments. The distribution of the reduced data in position, time and local time is discussed. The along-track filtering process described has proved to be an effective method of condensing large numbers of shipborne magnetic data into a manageable and meaningful data set for field modeling. This process also provides the benefits of smoothing short-wavelength crustal anomalies, discarding data recorded during magnetically noisy periods, and assigning reasonable error estimates to be utilized in the least squares modeling.
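    The along-track reduction described above amounts to averaging each track's measurements over consecutive 220 km segments; a minimal sketch with an invented synthetic track follows.

```python
# Minimal sketch of along-track averaging into 220 km segments (synthetic
# track geometry and field values, not the NGDC data).
import numpy as np

segment_km = 220.0
# cumulative along-track distance (km) and total-field measurements (nT)
distance = np.cumsum(np.full(5000, 0.5))          # one sample every 0.5 km
field = 48000 + 50 * np.sin(distance / 300.0)     # synthetic anomaly signal

segment_index = (distance // segment_km).astype(int)
means = np.bincount(segment_index, weights=field) / np.bincount(segment_index)
print(f"{len(means)} segment means from {len(field)} raw samples")
```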

  16. Cardiac rehabilitation delivery model for low-resource settings

    PubMed Central

    Grace, Sherry L; Turk-Adawi, Karam I; Contractor, Aashish; Atrey, Alison; Campbell, Norm; Derman, Wayne; Melo Ghisi, Gabriela L; Oldridge, Neil; Sarkar, Bidyut K; Yeo, Tee Joo; Lopez-Jimenez, Francisco; Mendis, Shanthi; Oh, Paul; Hu, Dayi; Sarrafzadegan, Nizal

    2016-01-01

    Objective Cardiovascular disease is a global epidemic, which is largely preventable. Cardiac rehabilitation (CR) is demonstrated to be cost-effective and efficacious in high-income countries. CR could represent an important approach to mitigate the epidemic of cardiovascular disease in lower-resource settings. The purpose of this consensus statement was to review low-cost approaches to delivering the core components of CR, to propose a testable model of CR which could feasibly be delivered in middle-income countries. Methods A literature review regarding delivery of each core CR component, namely: (1) lifestyle risk factor management (ie, physical activity, diet, tobacco and mental health), (2) medical risk factor management (eg, lipid control, blood pressure control), (3) education for self-management and (4) return to work, in low-resource settings was undertaken. Recommendations were developed based on identified articles, using a modified GRADE approach where evidence in a low-resource setting was available, or consensus where evidence was not. Results Available data on cost of CR delivery in low-resource settings suggests it is not feasible to deliver CR in low-resource settings as is delivered in high-resource ones. Strategies which can be implemented to deliver all of the core CR components in low-resource settings were summarised in practice recommendations, and approaches to patient assessment proffered. It is suggested that CR be adapted by delivery by non-physician healthcare workers, in non-clinical settings. Conclusions Advocacy to achieve political commitment for broad delivery of adapted CR services in low-resource settings is needed. PMID:27181874

  17. A mesoscopic network model for permanent set in crosslinked elastomers

    SciTech Connect

    Weisgraber, T H; Gee, R H; Maiti, A; Clague, D S; Chinn, S; Maxwell, R S

    2009-01-29

    A mesoscopic computational model for polymer networks and composites is developed as a coarse-grained representation of the composite microstructure. Unlike more complex molecular dynamics simulations, the model only considers the effects of crosslinks on mechanical behavior. The elastic modulus, which depends only on the crosslink density and parameters in the bond potential, is consistent with rubber elasticity theory, and the network response satisfies the independent network hypothesis of Tobolsky. The model, when applied to a commercial filled silicone elastomer, quantitatively reproduces the experimental permanent set and stress-strain response due to changes in the crosslinked network from irradiation.
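    The stated consistency with rubber elasticity theory can be illustrated with the classical affine-network estimate G ≈ n k_B T, where n is the number density of network strands; the numerical value below is an assumed crosslink density, not the silicone elastomer studied above.

```python
# Minimal sketch of the rubber-elasticity shear modulus estimate G = n*kB*T
# (illustrative crosslink density only).
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 298.0                   # temperature, K
n = 2.0e26                  # network strands per m^3 (assumed)
G = n * k_B * T             # shear modulus, Pa
print(f"shear modulus ~ {G / 1e6:.2f} MPa")
```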

  18. Late Brunhes polarity excursions (Mono Lake, Laschamp, Iceland Basin and Pringle Falls) recorded at ODP Site 919 (Irminger Basin)

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.

    2006-04-01

    Component natural remanent magnetizations derived from u-channel and 1-cm³ discrete samples from ODP Site 919 (Irminger Basin) indicate the existence of four intervals of negative inclinations in the upper Brunhes Chronozone. According to the age model based on planktic oxygen isotope data, these "excursional" intervals occur in sediments deposited during the following time intervals: 32-34 ka, 39-41 ka, 180-188 ka and 205-225 ka. These time intervals correspond to polarity excursions detected elsewhere, known as Mono Lake, Laschamp, Iceland Basin and Pringle Falls. The isotope-based age model is supported by the normalized remanence (paleointensity) record that can be correlated to other calibrated paleointensity records for the 0-500 ka interval, such as that from ODP Site 983. For the intervals associated with the Mono Lake and Laschamp excursions, virtual geomagnetic poles (VGPs) reach equatorial latitudes and mid-southerly latitudes, respectively. For intervals associated with the Iceland Basin and Pringle Falls excursions, repeated excursions of VGPs to high southerly latitudes indicate rapid directional swings rather than a single short-lived polarity reversal. The directional instability associated with polarity excursions is not often recorded, probably due to smoothing of the sedimentary record by the process of detrital remanence (DRM) acquisition.

  19. Experiment of nitrox saturation diving with trimix excursion.

    PubMed

    Shi, Z Y

    1998-11-01

    Depth limitations to diving operations with air as the breathing gas are well known: air density, oxygen toxicity, nitrogen narcosis and the requirement for decompression. The main objectives of our experiment were to assess the decompression, counterdiffusion and performance aspects of helium-nitrogen-oxygen excursions from nitrox saturation. The experiment was carried out in a wet diving simulator with an "igloo" attached to a 2-lock living chamber. Four subjects, in two teams of 2 divers, were saturated at 25 msw simulated depth in a nitrogen-oxygen chamber environment for 8 days, during which period they performed 32 diver excursions to 60 or 80 msw pressure. The excursion gas mix was a trimix of 14.6% oxygen, 50% helium and 35.4% nitrogen, which gave a bottom oxygen partial pressure of 1.0 bar at 60 msw and 1.3 at 80 msw. Excursions were for 70 min at 60 msw with three 10-min work periods and 40 min at 80 msw with two 10-min work periods. Work was on a bicycle ergometer at a moderate level. We calculated the excursion decompression with M-values based on the methods of Hamilton (Hamilton et al., 1990). Staged decompression took 70 min for the 60 msw excursion and 98 min for 80 msw, with stops beginning at 34 or 43 msw respectively. After the second dive day bubbles were heard mainly in one diver but in three divers overall, at times reaching Spencer Grade III. No symptoms were reported. Saturation decompression using the Repex procedures began at 40 msw and was uneventful; Grade II and sometimes III bubbles persisted in 2 of the four divers until 24 hr after surfacing. We conclude that excursions with mixtures rich in helium can be performed effectively to as deep as 80 msw using these procedures. PMID:10052222

  20. Outpatient Assessment of Determinants of Glucose Excursions in Adolescents with Type 1 Diabetes: Proof of Concept

    PubMed Central

    Mayer-Davis, Elizabeth; Bishop, Franziska K.; Wang, Lily; Mangan, Meg; McMurray, Robert G.

    2012-01-01

    Abstract Objective Controlled inpatient studies on the effects of food, physical activity (PA), and insulin dosing on glucose excursions exist, but such outpatient data are limited. We report here outpatient data on glucose excursions and its key determinants over 5 days in 30 adolescents with type 1 diabetes (T1D) as a proof-of-principle pilot study. Subjects and Methods Subjects (20 on insulin pumps, 10 receiving multiple daily injections; 15±2 years old; diabetes duration, 8±4 years; hemoglobin A1c, 8.1±1.0%) wore a continuous glucose monitor (CGM) and an accelerometer for 5 days. Subjects continued their existing insulin regimens, and time-stamped insulin dosing data were obtained from insulin pump downloads or insulin pen digital logs. Time-stamped cell phone photographs of food pre- and post-consumption and food logs were used to augment 24-h dietary recalls for Days 1 and 3. These variables were incorporated into regression models to predict glucose excursions at 1–4 h post-breakfast. Results CGM data on both Days 1 and 3 were obtained in 57 of the possible 60 subject-days with an average of 125 daily CGM readings (out of a possible 144). PA and dietary recall data were obtained in 100% and 93% of subjects on Day 1 and 90% and 100% of subjects on Day 3, respectively. All of these variables influenced glucose excursions at 1–4 h after waking, and 56 of the 60 subject-days contributed to the modeling analysis. Conclusions Outpatient high-resolution time-stamped data on the main inputs of glucose variability in adolescents with T1D are feasible and can be modeled. Future applications include using these data for in silico modeling and for monitoring outpatient iterations of closed-loop studies, as well as to improve clinical advice regarding insulin dosing to match diet and PA behaviors. PMID:22853720

  1. Modeling uncertainty in reservoir loss functions using fuzzy sets

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Simonovic, Slobodan P.

    1999-09-01

    Imprecision involved in the definition of reservoir loss functions is addressed using fuzzy set theory concepts. A reservoir operation problem is solved using the concepts of fuzzy mathematical programming. Membership functions from fuzzy set theory are used to represent the decision maker's preferences in the definition of shape of loss curves. These functions are assumed to be known and are used to model the uncertainties. Linear and nonlinear optimization models are developed under fuzzy environment. A new approach is presented that involves development of compromise reservoir operating policies based on the rules from the traditional optimization models and their fuzzy equivalents while considering the preferences of the decision maker. The imprecision associated with the definition of penalty and storage zones and uncertainty in the penalty coefficients are the main issues addressed through this study. The models developed are applied to the Green Reservoir, Kentucky. Simulations are performed to evaluate the operating rules generated by the models considering the uncertainties in the loss functions. Results indicate that the reservoir operating policies are sensitive to change in the shapes of loss functions.
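    A minimal sketch of the fuzzy mathematical programming idea follows, using a max-min (satisfaction level) formulation for a single-period release decision with fuzzy release and storage targets; the tolerances and reservoir numbers are illustrative, not the Green Reservoir data.

```python
# Minimal sketch of a fuzzy (max-min) reservoir release problem: maximize the
# satisfaction level lambda subject to fuzzy targets on release and on
# end-of-period storage.  All numbers are illustrative.
from scipy.optimize import linprog

demand, p_release = 50.0, 20.0      # target release and its tolerance
s0, inflow = 80.0, 30.0             # initial storage and inflow
s_target, p_storage = 70.0, 20.0    # storage target and its tolerance

# variables x = [release, lambda]; maximize lambda  ->  minimize -lambda
c = [0.0, -1.0]
A_ub = [
    [-1.0, p_release],   # membership of the release target  >= lambda
    [ 1.0, p_storage],   # membership of the storage target  >= lambda
]
b_ub = [p_release - demand, s0 + inflow - s_target + p_storage]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, 1)])
release, satisfaction = res.x
print(f"release = {release:.1f}, satisfaction = {satisfaction:.2f}")
```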

  2. HDU Pressurized Excursion Module (PEM) Prototype Systems Integration

    NASA Technical Reports Server (NTRS)

    Gill, Tracy R.; Kennedy, Kriss; Tri, Terry; Toups, Larry; Howe, A. Scott

    2010-01-01

    The Habitat Demonstration Unit (HDU) project team constructed an analog prototype lunar surface laboratory called the Pressurized Excursion Module (PEM). The prototype unit subsystems were integrated in a short amount of time, utilizing a skunk-works approach that brought together over 20 habitation-related technologies from a variety of NASA centers. This paper describes the system integration strategies and lessons learned that allowed the PEM to be brought from paper design to working field prototype using a multi-center team. The system integration process included establishment of design standards, negotiation of interfaces between subsystems, and scheduling of fit checks and installation activities. A major tool used in integration was a coordinated effort to accurately model all the subsystems using CAD, so that conflicts were identified before physical components came together. A major conclusion was that modularity which emerged as an artifact of construction, such as the eight 45-degree "pie slices" making up the module, whose steel rib edges defined structural mounting and loading points, dictated much of the configurational interface between the major subsystems and workstations. Therefore, one of the lessons learned was the need to use modularity as an organizing tool in advance, and to work harder to prevent non-critical aspects of the platform from dictating the modularity that may eventually inform the flight system.

  3. Maximizing Social Model Principles in Residential Recovery Settings

    PubMed Central

    Polcin, Douglas; Mericle, Amy; Howell, Jason; Sheridan, Dave; Christensen, Jeff

    2014-01-01

    Abstract Peer support is integral to a variety of approaches to alcohol and drug problems. However, there is limited information about the best ways to facilitate it. The “social model” approach developed in California offers useful suggestions for facilitating peer support in residential recovery settings. Key principles include using 12-step or other mutual-help group strategies to create and facilitate a recovery environment, involving program participants in decision making and facility governance, using personal recovery experience as a way to help others, and emphasizing recovery as an interaction between the individual and their environment. Although limited in number, studies have shown favorable outcomes for social model programs. Knowledge about social model recovery and how to use it to facilitate peer support in residential recovery homes varies among providers. This article presents specific, practical suggestions for enhancing social model principles in ways that facilitate peer support in a range of recovery residences. PMID:25364996

  4. The Mono Lake excursion recorded in phonolitic lavas from Tenerife (Canary Islands): Paleomagnetic analyses and coupled K/Ar and Ar/Ar dating

    NASA Astrophysics Data System (ADS)

    Kissel, C.; Guillou, H.; Laj, C.; Carracedo, J. C.; Nomade, S.; Perez-Torrado, F.; Wandres, C.

    2011-08-01

    We present a coupled paleomagnetic/dating investigation conducted on three different lava flows from the island of Tenerife (Canary Islands; Spain) erupted during the Mono Lake excursion (MLE). Paleomagnetic analyses consist of zero-field demagnetizations (AF and/or thermal) and of Thellier and Thellier experiments using the PICRIT-03 set of criteria to select reliable intensity determinations. One of the flows is characterized by a direction largely deviated from the one expected from an axial geocentric dipole (GAD) field. Its paleointensity value is very low (7.8 μT). The two other sites are characterized by inclinations slightly shallower than the GAD value and by low intensity values (about 12 and 21 μT; present value: 38 μT). The three K/Ar ages combined with two 40Ar/39Ar ages range from 32.0 to 33.2 ka and they are not statistically distinguishable from one another. It therefore appears that these lavas have recorded the MLE (the only excursion in this time interval) confirming its brief duration (shorter than the minimum age uncertainties available). The mean age is younger but, within the uncertainties, consistent with the age of the 10Be peak and of the marine intensity low when reported in the most recent ice age model. These new results are the first ones with radiometric dating produced from the northern hemisphere. Combined with existing cosmogenic, marine and volcanic paleomagnetic data, these results are discussed in terms of dating, and geometry of the earth magnetic field during the excursion.

  5. Level Set Segmentation of Lumbar Vertebrae Using Appearance Models

    NASA Astrophysics Data System (ADS)

    Fritscher, Karl; Leber, Stefan; Schmölz, Werner; Schubert, Rainer

    For the planning of surgical interventions of the spine, exact knowledge of the 3D shape and local bone quality of the vertebrae is of great importance in order to estimate the anchorage strength of screws or implants. As a prerequisite for quantitative analysis, a method for objective and therefore automated segmentation of vertebrae is needed. In this paper a framework for the automatic segmentation of vertebrae using 3D appearance models in a level set framework is presented. In this framework, model information as well as gradient information and probabilities of pixel intensities at object edges in the unseen image are used. The method is tested on 29 lumbar vertebrae, leading to accurate results, which can be useful for surgical planning and further analysis of the local bone quality.
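    The level-set machinery underlying such a framework can be sketched with a basic curvature-driven evolution of an implicit contour; this omits the appearance-model and edge-probability terms of the method above and uses an invented initial shape.

```python
# Minimal sketch of one level-set ingredient: explicit mean-curvature motion
# of the zero level set of phi (no image or appearance terms).
import numpy as np

def curvature_flow_step(phi, dt=0.1, eps=1e-8):
    """One explicit step of mean-curvature motion of the zero level set."""
    phi_y, phi_x = np.gradient(phi)
    phi_yy, phi_yx = np.gradient(phi_y)
    phi_xy, phi_xx = np.gradient(phi_x)
    num = phi_xx * phi_y**2 - 2 * phi_x * phi_y * phi_xy + phi_yy * phi_x**2
    kappa = num / (phi_x**2 + phi_y**2 + eps) ** 1.5
    return phi + dt * kappa * np.sqrt(phi_x**2 + phi_y**2 + eps)

# initial contour: a square, embedded as a signed-distance-like function
y, x = np.mgrid[0:64, 0:64]
phi = np.maximum(np.abs(x - 32), np.abs(y - 32)) - 15.0
for _ in range(50):
    phi = curvature_flow_step(phi)
print("contour area after smoothing:", int((phi < 0).sum()))
```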

  6. Ocean sunfish rewarm at the surface after deep excursions to forage for siphonophores.

    PubMed

    Nakamura, Itsumi; Goto, Yusuke; Sato, Katsufumi

    2015-05-01

    Ocean sunfish (Mola mola) were believed to be inactive jellyfish feeders because they are often observed lying motionless at the sea surface. Recent tracking studies revealed that they are actually deep divers, but there has been no evidence of foraging in deep water. Furthermore, the surfacing behaviour of ocean sunfish was thought to be related to behavioural thermoregulation, but there was no record of sunfish body temperature. Evidence of ocean sunfish feeding in deep water was obtained using a combination of an animal-borne accelerometer and camera with a light source. Siphonophores were the most abundant prey items captured by ocean sunfish and were typically located at a depth of 50-200 m where the water temperature was <12 °C. Ocean sunfish were diurnally active, made frequent deep excursions and foraged mainly at 100-200 m depths during the day. Ocean sunfish body temperatures were measured under natural conditions. The body temperatures decreased during deep excursions and recovered during subsequent surfacing periods. Heat-budget models indicated that the whole-body heat-transfer coefficient between sunfish and the surrounding water during warming was 3-7 times greater than that during cooling. These results suggest that the main function of surfacing is the recovery of body temperature, and the fish might be able to increase heat gain from the warm surface water by physiological regulation. The water temperature at ocean sunfish foraging depths was lower than their thermal preference (c. 16-17 °C). The behavioural and physiological thermoregulation enables the fish to increase foraging time in deep, cold water. Feeding rate during deep excursions was not related to duration or depth of the deep excursions. Cycles of deep foraging and surface warming are explained by a foraging strategy that maximizes foraging time while maintaining body temperature within a vertically structured thermal environment. PMID:25643743
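    The heat-budget behaviour described above can be sketched with a body temperature that relaxes toward the ambient water temperature, using a larger heat-transfer coefficient during surface warming than during cooling at depth; the coefficients, temperatures and durations are illustrative, not the measured values.

```python
# Minimal sketch of an asymmetric heat-budget model: Newton-type relaxation
# toward water temperature with different warming/cooling coefficients.
import numpy as np

k_cool, k_warm = 0.002, 0.010      # 1/min; warming ~5x faster than cooling
dt = 1.0                           # time step (min)

def simulate(minutes_deep=120, minutes_surface=60, t_deep=10.0, t_surf=20.0,
             t_body0=17.0):
    t_body, trace = t_body0, []
    for minute in range(minutes_deep + minutes_surface):
        t_water = t_deep if minute < minutes_deep else t_surf
        k = k_warm if t_water > t_body else k_cool
        t_body += k * (t_water - t_body) * dt
        trace.append(t_body)
    return np.array(trace)

trace = simulate()
print(f"after dive: {trace[119]:.1f} C, after surfacing: {trace[-1]:.1f} C")
```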

  7. Geomagnetic excursions in the Brunhes and Matuyama Chrons: Do they come in bunches?

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.

    2012-04-01

    Geomagnetic excursions, defined here as brief directional aberrations of the main dipole field outside the range of expected secular variation, remain controversial. Poorly-correlated records of apparent excursions from lavas and sediments can often be assigned to sampling artifacts, sedimentological phenomena, volcanic terrane effects, or local secular variation, rather than behavior of the main dipole field. Although records of magnetic excursions date from the 1960s, the number of Brunhes excursions in recent reviews of the subject has reached the 12-17 range, of which only ~7 are adequately and/or consistently recorded. For the Matuyama Chron, the current inventory of excursions stands at about 10. The better quality excursion records, with reasonable age control, imply millennial-scale or even sub-millennial-scale durations. When "adequately" recorded, excursions are manifest as paired polarity reversals flanking virtual geomagnetic poles (VGPs) that reach high latitudes in the opposite hemisphere. At the young end of the excursion record, the Mono Lake (~33 ka) and Laschamp (~41 ka) excursions are well documented, although records of the former are not widely distributed. Several excursions younger than the Mono Lake excursion (at 17 ka and 25 ka) have recently been recorded in lavas and sediments, respectively. Is the 17-41 ka interval characterized by multiple excursions? Similarly, multiple excursions have been recorded in the 188-238 ka interval that encompasses records of the Iceland Basin excursion (~188 ka) and the Pringle Falls (PF) excursion. The PF excursion has been assigned ages in the 211-238 ka range. Does this mean that this interval is also characterized by several discrete excursions? The 500-600 ka interval incorporates not only the Big Lost excursion at ~565 ka, but also anomalous magnetization directions from lava flows, particularly in the West Eifel volcanics that yield mid-latitude northern-hemisphere VGPs with a range of Ar

  8. A Unified Sea Ice Thickness Data Set for Model Validation

    NASA Astrophysics Data System (ADS)

    Lindsay, R.; Wensnahan, M.

    2007-12-01

    Can we, as a community, do better at using existing ice thickness measurements to more effectively evaluate the changing nature of the Arctic ice pack and to better evaluate the performance of our models? We think we can if we work together. We are trying to create a unified ice thickness data set by combining observations from various ice thickness measurement systems. It is designed to facilitate the intercomparison of different measurements, the evaluation of the state of the ice pack, and the validation of sea ice models. Datasets that might be included are ice draft estimates from various submarine and moored upward looking sonar instruments, ice thickness estimates from airborne electromagnetic instruments, and satellite altimeter freeboard measurements. Three principles for the proposed data set are: 1) Full documentation of data sources and characteristics, 2) Spatial and temporal averaging to approximately common scales, and 3) Common data formats. We would not mix data types and we would not interpolate to locations or times not represented in the observations. The target spatial and temporal scale for the measurements would be 50 lineal km of ice and/or one month. Point measurements are not so useful in this context. Data from both hemispheres and any body of ocean water would be included. Documentation would include locations, times, measurement methods, processing, snow depth assumptions, averaging distance and time, error characteristics, data provider, and more. The cooperation and collaboration of the various data providers is essential to the success of this project and so far we have had a very gratifying response to our overtures. We would like to hear from any who have not heard from us and who have collected sea ice thickness data at the approximate target scales. With potentially thousands of individual samples, much could be learned about the measurement systems, about the changing state of the ice cover, and about ice model performance and

  9. Super_Prompt Crit excursions in Sph Geometry

    Energy Science and Technology Software Center (ESTSC)

    2000-03-17

    AX-TNT solves (a) the coupled hydrodynamic, thermodynamic and neutronic equations which describe a spherical, super-prompt-critical reactor system during an excursion, and (b) the coupled equations of motion and ideal gas equation of state for the detonation of a spherical charge in a gas.

  10. Nongeocentric axial dipole field behavior during the Mono Lake excursion

    NASA Astrophysics Data System (ADS)

    Negrini, Robert M.; McCuan, Daniel T.; Horton, Robert A.; Lopez, James D.; Cassata, William S.; Channell, James E. T.; Verosub, Kenneth L.; Knott, Jeffrey R.; Coe, Robert S.; Liddicoat, Joseph C.; Lund, Steven P.; Benson, Larry V.; Sarna-Wojcicki, Andrei M.

    2014-04-01

    A new record of the Mono Lake excursion (MLE) is reported from the Summer Lake Basin of Oregon, USA. Sediment magnetic properties indicate magnetite as the magnetization carrier and imply suitability of the sediments as accurate recorders of the magnetic field including relative paleointensity (RPI) variations. The magnitudes and phases of the declination, inclination, and RPI components of the new record correlate well with other coeval but lower resolution records from western North America including records from the Wilson Creek Formation exposed around Mono Lake. The virtual geomagnetic pole (VGP) path of the new record is similar to that from another high-resolution record of the MLE from Ocean Drilling Program (ODP) Site 919 in the Irminger Basin between Iceland and Greenland but different from the VGP path for the Laschamp excursion (LE), including that found lower in the ODP-919 core. Thus, the prominent excursion recorded at Mono Lake, California, is not the LE but rather one that is several thousands of years younger. The MLE VGP path contains clusters, the locations of which coincide with nonaxial dipole features found in the Holocene geomagnetic field. The clusters are occupied in the same time progression by VGPs from Summer Lake and the Irminger Basin, but the phase of occupation is offset, a behavior that suggests time-transgressive decay and return of the principal field components at the beginning and end of the MLE, respectively, leaving the nonaxial dipole features associated with the clusters dominant during the excursion.

  11. 40 CFR 63.1334 - Parameter monitoring levels and excursions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Parameter monitoring levels and... Parameter monitoring levels and excursions. (a) Establishment of parameter monitoring levels. The owner or operator of a control or recovery device that has one or more parameter monitoring level...

  12. 40 CFR 63.1438 - Parameter monitoring levels and excursions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... caused by an activity that violates other applicable provisions of 40 CFR part 63, subparts A, F, G, or H... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Parameter monitoring levels and....1438 Parameter monitoring levels and excursions. (a) Establishment of parameter monitoring levels....

  13. Potential Cislunar and Interplanetary Proving Ground Excursion Trajectory Concepts

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Strange, Nathan J.; Burke, Laura M.; MacDonald, Mark A.; McElrath, Timothy P.; Landau, Damon F.; Lantoine, Gregory; Hack, Kurt J.; Lopez, Pedro

    2016-01-01

    NASA has been investigating potential translunar excursion concepts to take place in the 2020s that would be used to test and demonstrate long duration life support and other systems needed for eventual Mars missions in the 2030s. These potential trajectory concepts could be conducted in the proving ground, a region of cislunar and near-Earth interplanetary space where international space agencies could cooperate to develop the technologies needed for interplanetary spaceflight. Enabled by high power Solar Electric Propulsion (SEP) technologies, the excursion trajectory concepts studied are grouped into three classes of increasing distance from the Earth and increasing technical difficulty: the first class of excursion trajectory concepts would represent a 90-120 day round trip trajectory with abort to Earth options throughout the entire length, the second class would be a 180-210 day round trip trajectory with periods in which aborts would not be available, and the third would be a 300-400 day round trip trajectory without aborts for most of the length of the trip. This paper provides a top-level summary of the trajectory and mission design of representative example missions of these three classes of excursion trajectory concepts.

  14. 5. Photocopy of photograph. JANE MOSELEY (VESSEL 53) DURING EXCURSION, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. Photocopy of photograph. JANE MOSELEY (VESSEL 53) DURING EXCURSION, DETAIL OF FORE DECK AND CABIN AREA. Date and photographer unknown. (Original in Mariners Museum, Newport News, Virginia, negative #PB 30017) - Shooters Island, Ships Graveyard, Vessel No. 53, Newark Bay, Staten Island (subdivision), Richmond County, NY

  15. Stratigraphic Correlation of an Excursion at 22 kyr in the U.S. Great Basin - the Hilina Pali Excursion?

    NASA Astrophysics Data System (ADS)

    Liddicoat, J. C.; Coe, R. S.

    2013-12-01

    An unusually large secular variation of the geomagnetic field recorded in outcrops of pluvial Lake Russell sediment about 22 kyr old in the Mono Basin, CA, can be used for precise correlation to other lacustrine sections in the western U.S., and perhaps beyond. We present new AF and thermal demagnetization results for paired samples at 2-cm intervals between ash layers 7 and 8 of Lajoie (1968) in the bank of Wilson Creek that document an excursion having an inclination as low as 16 degrees and as high as 73 degrees, while the declination swings from 15 degrees west during the low inclination to 30 degrees east when the inclination is high, and back to average northerly declination and expected inclination. The corresponding VGPs form a narrow clockwise-trending loop centered at about 50 N, 30 E. The Mono Lake Excursion (MLE; Liddicoat and Coe, 1979) occurs 1.7 m lower in the same section. The best estimates for the ages of the two excursions are about 22 and 32 kyr, based on 14-C dates (Cassata et al., 2010). About 150 km to the north, sediments of about the same age exposed along the Truckee River that were deposited in pluvial Lake Lahontan record a similar geomagnetic signature. Moreover, both the MLE and this excursion are exhibited at the appropriate levels in a sediment core taken from Pyramid Lake, the remnant of Lake Lahontan (Benson et al., 2008). Thus, this excursion is a valuable marker for high-resolution correlation of Quaternary sediments in the western U.S., especially when paired with the MLE. It is tempting to try to identify this geomagnetic feature with others of about the same age further away. On the island of Hawaii, Coe et al. (1978) discovered a lava flow on the Hilina Pali with a calibrated 14-C age of 21 +/-1 kyr that has an inclination about 30 degrees shallower and a paleointensity 60 percent lower than today. Later Teanby et al. (2002) documented an excursion with inclinations as low as -35 degrees, recorded by around 40 successive flows with

  16. Groundwater Quality Modeling with a Small Data Set.

    PubMed

    Sakizadeh, Mohamad; Malian, Abbass; Ahmadpour, Eisa

    2016-01-01

    Seventeen groundwater quality variables collected during an 8-year period (2006 to 2013) in Andimeshk, Iran, were used to implement an artificial neural network (NN) with the purpose of constructing a water quality index (WQI). The method leading to the WQI avoids instabilities and overparameterization, two problems common when working with relatively small data sets. The groundwater quality variables used to construct the WQI were selected based on principal component analysis (PCA), by which the number of variables was decreased to six. To fulfill the goals of this study, the performance of three methods was compared: (1) bootstrap aggregation with early stopping; (2) noise injection; and (3) ensemble averaging with early stopping. The criteria used for performance analysis were the mean squared error (MSE) and coefficient of determination (R²) of the test data set and the correlation coefficients between WQI targets and NN predictions. This study confirmed the importance of PCA for variable selection and dimensionality reduction to reduce the risk of overfitting. Ensemble averaging with early stopping proved to be the best-performing method. Owing to its high coefficient of determination (R² = 0.80) and correlation coefficient (r = 0.91), we recommend ensemble averaging with early stopping as an accurate NN modeling procedure for water quality prediction in similar studies. PMID:25572437
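    A minimal sketch of ensemble averaging with early stopping on a small data set, the approach recommended above; the synthetic data, network size and ensemble size are assumptions, not the study's WQI model.

```python
# Minimal sketch: train several small networks with early stopping and
# average their predictions (synthetic data, illustrative settings).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(96, 6))                       # six PCA-selected predictors
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=96)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = [
    MLPRegressor(hidden_layer_sizes=(8,), early_stopping=True,
                 validation_fraction=0.2, max_iter=2000, random_state=seed)
    for seed in range(10)
]
for net in ensemble:
    net.fit(X_tr, y_tr)

y_pred = np.mean([net.predict(X_te) for net in ensemble], axis=0)
r2 = 1 - np.sum((y_te - y_pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"ensemble R^2 on held-out data: {r2:.2f}")
```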

  17. Spatial and temporal patterns of subtidal and intertidal crabs excursions

    NASA Astrophysics Data System (ADS)

    Silva, A. C. F.; Boaventura, D. M.; Thompson, R. C.; Hawkins, S. J.

    2014-01-01

    Highly mobile predators such as fish and crabs are known to migrate from the subtidal zone to forage in the intertidal zone at high-tide. The extent and variation of these habitat-linking movements along the vertical shore gradient have not been examined before for several species simultaneously, hence not accounting for species interactions. Here, the foraging excursions of Carcinus maenas (L.), Necora puber (Linnaeus, 1767) and Cancer pagurus (Linnaeus, 1758) were assessed in a one-year mark-recapture study on two replicated rocky shores in southwest U.K. A comparison between the abundance of individuals present on the shore at high-tide with those present in refuges exposed at low-tide indicated considerable intertidal migration by all species, showing strong linkage between subtidal and intertidal habitats. Estimates of population size based on recapture of marked individuals indicated that an average of ~4000 individuals, combined across the three crab species, can be present on the shore during one tidal cycle. There was also a high fidelity of individuals and species to particular shore levels. Underlying mechanisms for these spatial patterns such as prey availability and agonistic interactions are discussed. Survival rates were estimated using the Cormack-Jolly-Seber model from multi-recapture analysis and found to be considerably high, with a minimum of 30% for all species. Growth rates were found to vary intraspecifically with size and between seasons. Understanding the temporal and spatial variations in predation pressure by crabs on rocky shores depends on knowing which of these commercially important crab species depend on intertidal foraging, when, and in what numbers. Previous studies have shown that the diet of these species is strongly based on intertidal prey including key species such as limpets; hence intertidal crab migration could be associated with considerable impacts on intertidal assemblages.

  18. The Latitudinal Excursion of Coronal Magnetic Field Lines in Response to Differential Rotation: MHD Simulations

    NASA Technical Reports Server (NTRS)

    Lionello, Roberto; Linker, Jon A.; Mikic, Zoran; Riley, Pete

    2006-01-01

    Solar energetic particles, which are believed to originate from corotating interacting regions (CIRs) at low heliographic latitude, were observed by the Ulysses spacecraft even as it passed over the Sun's poles. One interpretation of this result is that high-latitude field lines intercepted by Ulysses connect to low-latitude CIRs at much larger heliocentric distances. The Fisk model explains the latitudinal excursion of magnetic field lines in the solar corona and heliosphere as the inevitable consequence of the interaction of a tilted dipole in a differentially rotating photosphere with rigidly rotating coronal holes. We use a time-dependent three-dimensional magnetohydrodynamic (MHD) algorithm to follow the evolution of a simple model of the solar corona in response to the differential rotation of the photospheric magnetic flux. We examine the changes of the coronal-hole boundaries, the redistribution of the line-of-sight magnetic field, and the precession of field lines in the corona. Our results confirm the basic idea of the Fisk model, that differential rotation leads to changes in the heliographic latitude of magnetic field lines. However, the latitudinal excursion of magnetic field lines in this simple "tilted dipole" model is too small to explain the Ulysses observations. Although coronal holes in our model rotate more rigidly than do photospheric features (in general agreement with observations), they do not rotate strictly rigidly as assumed by Fisk. This basic difference between our model and Fisk's will be explored in the future by considering more realistic magnetic flux distributions, as observed during Ulysses polar excursions.

  19. Hierarchical set of models to estimate soil thermal diffusivity

    NASA Astrophysics Data System (ADS)

    Arkhangelskaya, Tatiana; Lukyashchenko, Ksenia

    2016-04-01

    Soil thermal properties significantly affect the land-atmosphere heat exchange rates. Intra-soil heat fluxes depend both on temperature gradients and soil thermal conductivity. Soil temperature changes due to energy fluxes are determined by soil specific heat. Thermal diffusivity is equal to thermal conductivity divided by volumetric specific heat and reflects both the soil's ability to transfer heat and its ability to change temperature when heat is supplied or withdrawn. The higher the soil thermal diffusivity, the thicker the soil/ground layer in which diurnal and seasonal temperature fluctuations are registered and the smaller the temperature fluctuations at the soil surface. Thermal diffusivity vs. moisture dependencies for loams, sands and clays of the East European Plain were obtained using the unsteady-state method. Thermal diffusivity differed greatly between soils, and for a given soil it could vary by a factor of 2, 3 or even 5 depending on soil moisture. The shapes of the thermal diffusivity vs. moisture dependencies also differed: peak curves were typical for sandy soils, while sigmoid curves were typical for loamy and especially for compacted soils. The lowest thermal diffusivities and the smallest range of their variability with soil moisture were obtained for clays with high humus content. A hierarchical set of models will be presented, allowing an estimate of soil thermal diffusivity from available data on soil texture, moisture, bulk density and organic carbon. When developing these models, the first step was to parameterize the experimental thermal diffusivity vs. moisture dependencies with a 4-parameter function; the next step was to obtain regression formulas to estimate the function parameters from available data on basic soil properties; the last step was to evaluate the accuracy of the suggested models using independent data on soil thermal diffusivity. The simplest models were based on soil bulk density and organic carbon data and provided different
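    The first modelling step described above, parameterizing each thermal diffusivity vs. moisture curve with a 4-parameter function, can be sketched as a simple curve fit. The functional form and the data below are assumptions for illustration; the paper's actual 4-parameter function and coefficients are not given in this record.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 4-parameter peak function kappa(theta): baseline k0 plus a
# log-normal-shaped peak of amplitude a, centred at theta0 with width b.
def kappa_model(theta, k0, a, theta0, b):
    return k0 + a*np.exp(-0.5*((np.log(theta) - np.log(theta0))/b)**2)

theta = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35])    # volumetric moisture
kappa = np.array([2.0, 3.1, 4.6, 5.9, 6.4, 6.1, 5.5]) * 1e-7    # m^2/s, made-up values

popt, pcov = curve_fit(kappa_model, theta, kappa, p0=[2e-7, 5e-7, 0.22, 0.5])
print(dict(zip(["k0", "a", "theta0", "b"], popt)))
```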

  20. A Diagnostic Scoring Model for Leptospirosis in Resource Limited Settings

    PubMed Central

    Rajapakse, Senaka; Weeratunga, Praveen; Niloofa, Roshan; Fernando, Narmada; de Silva, Nipun Lakshitha; Rodrigo, Chaturaka; Maduranga, Sachith; Nandasiri, Nuwanthi; Premawansa, Sunil; Karunanayake, Lilani; de Silva, H. Janaka; Handunnetti, Shiroma

    2016-01-01

    Background: Leptospirosis is a zoonotic infection with significant morbidity and mortality. The clinical presentation of leptospirosis is known to mimic the clinical profile of other prevalent tropical fevers. Laboratory confirmation of leptospirosis is based on the reference standard microscopic agglutination test (MAT), direct demonstration of the organism, isolation by culture, and DNA detection by polymerase chain reaction (PCR) amplification. However, these methods of confirmation are not widely available in resource limited settings where the infection is prevalent, and reliance is placed on clinical features for provisional diagnosis. In this prospective study, we attempted to develop a model for diagnosis of leptospirosis, based on clinical features and standard laboratory test results. Methods: The diagnostic score was developed based on data from a prospective multicentre study in two hospitals in the Western Province of Sri Lanka. All patients presenting to these hospitals with a suspected diagnosis of leptospirosis, based on the WHO surveillance criteria, were recruited. Confirmed disease was defined as positive genus specific MAT (Leptospira biflexa). A derivation cohort and a validation cohort were randomly selected from available data. Clinical and laboratory manifestations associated with confirmed leptospirosis in the derivation cohort were selected for construction of a multivariate regression model with correlation matrices, and adjusted odds ratios were extracted for significant variables. The odds ratios thus derived were subsequently utilized in the criteria model, and sensitivity and specificity examined with ROC curves. Results: A total of 592 patients were included in the final analysis with 450 (180 confirmed leptospirosis) in the derivation cohort and 142 (52 confirmed leptospirosis) in the validation cohort. The variables in the final model were: history of exposure to a possible source of leptospirosis (adjusted OR = 2.827; 95% CI = 1
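    The workflow behind such a diagnostic score, multivariable logistic regression, adjusted odds ratios used as weights, and ROC evaluation, can be sketched on synthetic data as follows. The predictors, coefficients, and cohort below are invented and are not the study's variables or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 450                                     # size of a derivation-style cohort
X = rng.integers(0, 2, size=(n, 4))         # hypothetical binary predictors (e.g. exposure history)
logit = -1.5 + X @ np.array([1.0, 0.6, 0.8, 0.4])
y = rng.binomial(1, 1.0/(1.0 + np.exp(-logit)))   # 1 = confirmed leptospirosis

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])        # adjusted ORs, one per predictor

# Simple score: weight each positive feature by its adjusted odds ratio
score = X @ odds_ratios
print("AUC of the OR-weighted score:", roc_auc_score(y, score))
```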

  1. A Whale of an Interest in Sea Creatures: The Learning Potential of Excursions

    ERIC Educational Resources Information Center

    Hedges, Helen

    2004-01-01

    Excursions, or field trips, are a common component of early childhood programs, seen as a means of enriching the curriculum by providing experiences with people, places, and things in the community. Although excursions have been used as a framework for research on children's memory development, research on the efficacy of excursions in terms of…

  2. Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers

    ERIC Educational Resources Information Center

    Dragojlovic, Veljko

    2015-01-01

    Preparation of an inexpensive model set from whiteboard markers and either HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.

  3. Geomagnetic excursion captured by multiple volcanoes in a monogenetic field

    NASA Astrophysics Data System (ADS)

    Cassidy, John

    2006-11-01

    Five monogenetic volcanoes within the Quaternary Auckland volcanic field are shown to have recorded a virtually identical but anomalous paleomagnetic direction (mean inclination and declination of 61.7° and 351.0°, respectively), consistent with the capture of a geomagnetic excursion. Based on documented rates of change of paleomagnetic field direction during excursions this implies that the volcanoes may have all formed within a period of only 50-100 years or less. These temporally linked volcanoes are widespread throughout the field and appear not to be structurally related. However, the general paradigm for the reawakening of monogenetic fields is that only a single new volcano or group of closely spaced vents is created, typically at intervals of several hundred years or more. Therefore, the results presented show that for any monogenetic field the impact of renewed eruptive activity may be significantly under-estimated, especially for potentially affected population centres and the siting of sensitive facilities.

  4. Final report for the flow excursion follow-on testing

    SciTech Connect

    Nash, C.A.; Walters, T.W.

    1992-08-05

    The purpose of the Mark 22 Flow Excursion Follow-On testing was to investigate the theory that approximately 15% of the flow bypassed the primary flow channels in previous testing, whereas the design called for only a 3% bypass. The results of the follow-on tests clearly confirmed this theory. The testing was performed in two phases. During the first phase, characterization tests performed during the earlier test program were repeated.

  5. Age of the Mono Lake excursion and associated tephra

    NASA Astrophysics Data System (ADS)

    Benson, Larry; Liddicoat, Joseph; Smoot, Joseph; Sarna-Wojcicki, Andrei; Negrini, Robert; Lund, Steve

    2003-02-01

    The Mono Lake excursion (MLE) is an important time marker that has been found in lake and marine sediments across much of the Northern Hemisphere. Dating of this event at its type locality, the Mono Basin of California, has yielded controversial results with the most recent effort concluding that the MLE may actually be the Laschamp excursion (Earth Planet. Sci. Lett. 197 (2002) 151). We show that a volcanic tephra (Ash #15) that occurs near the midpoint of the MLE has a date (not corrected for reservoir effect) of 28,620±300 14C yr BP (˜32,400 GISP2 yr BP) in the Pyramid Lake Basin of Nevada. Given the location of Ash #15 and the duration of the MLE in the Mono Basin, the event occurred between 31,500 and 33,300 GISP2 yr BP, an age range consistent with the position and age of the uppermost of two paleointensity minima in the NAPIS-75 stack that has been associated with the MLE (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009). The lower paleointensity minimum in the NAPIS-75 stack is considered to be the Laschamp excursion (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009).

  6. Age of the Mono Lake excursion and associated tephra

    USGS Publications Warehouse

    Benson, L.; Liddicoat, J.; Smoot, J.; Sarna-Wojcicki, A.; Negrini, R.; Lund, S.

    2003-01-01

    The Mono Lake excursion (MLE) is an important time marker that has been found in lake and marine sediments across much of the Northern Hemisphere. Dating of this event at its type locality, the Mono Basin of California, has yielded controversial results with the most recent effort concluding that the MLE may actually be the Laschamp excursion (Earth Planet. Sci. Lett. 197 (2002) 151). We show that a volcanic tephra (Ash #15) that occurs near the midpoint of the MLE has a date (not corrected for reservoir effect) of 28,620 ± 300 14C yr BP (~32,400 GISP2 yr BP) in the Pyramid Lake Basin of Nevada. Given the location of Ash #15 and the duration of the MLE in the Mono Basin, the event occurred between 31,500 and 33,300 GISP2 yr BP, an age range consistent with the position and age of the uppermost of two paleointensity minima in the NAPIS-75 stack that has been associated with the MLE (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009). The lower paleointensity minimum in the NAPIS-75 stack is considered to be the Laschamp excursion (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009).

  7. Models of Music Therapy Intervention in School Settings

    ERIC Educational Resources Information Center

    Wilson, Brian L., Ed.

    2002-01-01

    This completely revised second edition, edited by Brian L. Wilson, addresses both theoretical issues and practical applications of music therapy in educational settings. Its 17 chapters, written by a variety of authors, each deal with a different setting or issue. A valuable resource for demonstrating the efficacy of music therapy to school…

  8. High-resolution record of the Laschamp geomagnetic excursion at the Blake-Bahama Outer Ridge

    NASA Astrophysics Data System (ADS)

    Bourne, Mark D.; Mac Niocaill, Conall; Thomas, Alex L.; Henderson, Gideon M.

    2013-12-01

    Geomagnetic excursions are brief deviations of the geomagnetic field from behaviour expected during `normal secular' variation. The Laschamp excursion at ˜41 ka was one such deviation. Previously published records suggest rapid changes in field direction and a concurrent substantial decrease in field intensity associated with this excursion. Accurate dating of excursions, and determination of their durations from multiple locations, is vital to our understanding of global field behaviour during these deviations. We present here high-resolution palaeomagnetic records of the Laschamp excursion obtained from two Ocean Drilling Program (ODP) Sites, 1061 and 1062 on the Blake-Bahama Outer Ridge (ODP Leg 172). High sedimentation rates (˜30-40 cm kyr-1) at these locations allow determination of transitional field behaviour during the excursion. Palaeomagnetic measurements of discrete samples from four cores reveal a single excursional feature, across an interval of 30 cm, associated with a broader palaeointensity low. We determine the age and duration of the Laschamp excursion using a stratigraphy linked to the δ18O record from the Greenland ice cores. This chronology dates the Laschamp excursion at the Blake Ridge to 41.3 ka. The excursion is characterized by rapid transitions (less than 200 yr) between stable normal polarity and a partially reversed polarity state. The palaeointensity record is in good agreement between the two sites, revealing two prominent minima. The first minimum is associated with the Laschamp excursion at 41 ka and the second corresponds to the Mono Lake excursion at ˜35.5 ka. We determine that the directional excursion during the Laschamp at this location was no longer than ˜400 yr, occurring within a palaeointensity minimum that lasted 2000 yr. The Laschamp excursion at this location is much shorter in duration than the Blake and Iceland Basin excursions.

  9. Influences to System and Superiority of Model Parameters in Dynamic Setting AGC

    NASA Astrophysics Data System (ADS)

    Xu, LI; Hao-yu, Zhang; Jin, Zhang; Jian-fei, Guo; Cui-hong, Liu; Ping-wen, Cheng

    Dynamic setting AGC is based on an improvement of the BISRA-AGC model. Based on the control algorithms and control models of dynamic setting AGC, the influence of the model parameters on system performance was analyzed using the GUI in MATLAB. At the same time, the superiority and limitations of dynamic setting AGC were analyzed.
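    For readers unfamiliar with gauge-control models of this family, the sketch below illustrates the basic gaugemeter idea usually associated with BISRA-type AGC: estimate exit thickness from roll gap S and roll force P as h = S + P/M (M being the mill modulus) and trim the gap toward a target gauge. The numerical values and the simple proportional trim are assumptions for illustration and do not reproduce the paper's dynamic setting algorithm.

```python
# Gaugemeter-style AGC sketch (illustrative values only)
M = 5000.0        # assumed mill modulus [kN/mm]
h_target = 2.00   # target exit thickness [mm]

def agc_trim(S, P, gain=0.8):
    """Estimate exit gauge from gap and force, then trim the gap toward target."""
    h_est = S + P / M                # gaugemeter estimate: gap plus mill stretch
    dS = -gain * (h_est - h_target)  # proportional correction of the roll gap
    return h_est, S + dS

print(agc_trim(S=1.60, P=2100.0))    # -> estimated gauge and corrected gap setting
```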

  10. Judgmental Standard Setting Using a Cognitive Components Model.

    ERIC Educational Resources Information Center

    McGinty, Dixie; Neel, John H.

    A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…
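    Both approaches reduce judge ratings to a cut score by summing item-level minimum pass levels (MPLs) within each judge and then averaging across judges; the toy calculation below (invented ratings) shows that arithmetic.

```python
import numpy as np

# Hypothetical item-level MPLs from 3 judges on 5 items: each entry is the
# probability that a minimally competent examinee answers the item correctly.
mpls = np.array([
    [0.6, 0.7, 0.5, 0.8, 0.4],   # judge 1
    [0.5, 0.6, 0.6, 0.7, 0.5],   # judge 2
    [0.7, 0.8, 0.5, 0.9, 0.4],   # judge 3
])

per_judge_standard = mpls.sum(axis=1)   # sum of item MPLs for each judge
cut_score = per_judge_standard.mean()   # average across judges yields the standard
print(per_judge_standard, cut_score)
```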

  11. Attachment Disorders: A Proposed Model for the School Setting.

    ERIC Educational Resources Information Center

    Parker, Kandis Cooke

    This paper explores the literature on attachment disorders in order to discover if the school setting can be an appropriate treatment option for children with mild attachment disorders, and in order to investigate how counselors can implement this treatment option. The introduction discusses the effects of recent changes in family structure on…

  12. Modeling mania in preclinical settings: A comprehensive review.

    PubMed

    Sharma, Ajaykumar N; Fries, Gabriel R; Galvez, Juan F; Valvassori, Samira S; Soares, Jair C; Carvalho, André F; Quevedo, Joao

    2016-04-01

    The current pathophysiological understanding of mechanisms leading to onset and progression of bipolar manic episodes remains limited. At the same time, available animal models for mania have limited face, construct, and predictive validities. Additionally, these models fail to encompass recent pathophysiological frameworks of bipolar disorder (BD), e.g. neuroprogression. Therefore, there is a need to search for novel preclinical models for mania that could comprehensively address these limitations. Herein we review the history, validity, and caveats of currently available animal models for mania. We also review new genetic models for mania, namely knockout mice for genes involved in neurotransmission, synapse formation, and intracellular signaling pathways. Furthermore, we review recent trends in preclinical models for mania that may aid in the comprehension of mechanisms underlying the neuroprogressive and recurring nature of BD. In conclusion, the validity of animal models for mania remains limited. Nevertheless, novel (e.g. genetic) animal models as well as adaptation of existing paradigms hold promise. PMID:26545487

  13. Linear and Nonlinear Models of Agenda Setting in Television.

    ERIC Educational Resources Information Center

    Brosius, Hans-Bernd; Kepplinger, Hans Mathias

    1992-01-01

    A content analysis of major German television news shows and 53 weekly surveys on 16 issues were used to compare linear and nonlinear models as ways to describe the relationship between media coverage and the public agenda. Results indicate that nonlinear models are in some cases superior to linear models in terms of explained variance. (34…

  14. First-excursion probability in non-stationary random vibration.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1973-01-01

    The first-excursion probability of a non-stationary Gaussian process with zero mean has been studied. Within the framework of the point process approach, a variety of analytical approximations applicable to stationary random processes is extended herein to non-stationary random processes. The extension is possible owing to a recent definition of non-stationary envelope processes proposed by the author. With the aid of numerical examples, merits of each approximation are examined by comparing with the results of simulation. It is found that under non-stationary excitations with short duration, the Markov approximation is the best among all the approximations discussed in this paper.
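    As a point of reference for the analytical approximations discussed above, the first-excursion probability of a non-stationary Gaussian process can also be estimated by brute-force simulation. The modulated process, barrier level, and correlation time below are invented; the sketch illustrates the quantity being approximated, not the paper's envelope or Markov approximations.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt, b = 10.0, 0.01, 2.5          # duration [s], time step [s], barrier level
t = np.arange(0.0, T, dt)
envelope = np.minimum(t/2.0, 1.0)   # deterministic modulation -> non-stationary process
rho = np.exp(-dt/0.5)               # AR(1) step correlation for a 0.5 s correlation time

n_sims, exceed = 2000, 0
for _ in range(n_sims):
    w = rng.standard_normal(t.size)
    x = np.empty_like(w)
    x[0] = w[0]
    for k in range(1, t.size):      # unit-variance stationary Gaussian core
        x[k] = rho*x[k-1] + np.sqrt(1.0 - rho**2)*w[k]
    if np.any(np.abs(envelope*x) > b):
        exceed += 1

print("estimated first-excursion probability:", exceed/n_sims)
```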

  15. Isolating causes of yield excursions with decision trees and commonality

    NASA Astrophysics Data System (ADS)

    Waksman, Peter

    2002-07-01

    The use of Decision Trees to analyze data is discussed as an approach to solving problems of yield excursions in semiconductor manufacturing. The relation to equipment commonality is discussed, along with some of the pitfalls of incautious use of general probability estimates. The paper introduces a Mixing Diagram to help visualize commonality issues, introduces work-around methods for resolving ambiguities in the commonality, reviews Decision Tree algorithms, and ends with a discussion of current limitations of the method along with recommendations for future research.
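    A hedged, concrete version of the decision-tree-plus-commonality idea: fit a shallow classification tree to a hypothetical lot-history table (which tool processed each lot at each step) and read off the split that isolates the offending chamber. The tool names, yield labels, and the scikit-learn calls (version 1.2 or later for the encoder option) are assumptions for illustration.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n_lots = 300
# Hypothetical lot history: which tool processed each lot at three process steps
tools = {"etch": ["E1", "E2", "E3"], "litho": ["L1", "L2"], "implant": ["I1", "I2"]}
X_raw = np.column_stack([rng.choice(ids, n_lots) for ids in tools.values()])
# Inject a yield excursion tied to one etch chamber ("E2")
low_yield = (X_raw[:, 0] == "E2") & (rng.random(n_lots) < 0.8)
y = np.where(low_yield, "excursion", "normal")

enc = OneHotEncoder(sparse_output=False)          # one indicator column per tool ID
X = enc.fit_transform(X_raw)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(enc.get_feature_names_out(list(tools)))))
```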

  16. A geometric level set model for ultrasounds analysis

    SciTech Connect

    Sarti, A.; Malladi, R.

    1999-10-01

    We propose a partial differential equation (PDE) for filtering and segmentation of echocardiographic images based on a geometric-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers that regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map, as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D+time, 3D, and 3D+time echocardiographic images are shown.
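    A much-reduced sketch of this class of geometric level-set PDEs is an edge-weighted curvature flow, phi_t = g(|grad I|) |grad phi| div(grad phi/|grad phi|), stepped explicitly on a toy image; the paper's full filtering and segmentation scheme, and its numerics, are not reproduced here.

```python
import numpy as np

def level_set_step(phi, g, dt=0.1, eps=1e-8):
    """One explicit step of edge-weighted mean-curvature flow of the level sets of phi."""
    p0, p1 = np.gradient(phi)                     # derivatives along axis 0 and axis 1
    mag = np.sqrt(p0**2 + p1**2) + eps
    d0, _ = np.gradient(p0/mag)                   # d/daxis0 of the axis-0 unit component
    _, d1 = np.gradient(p1/mag)                   # d/daxis1 of the axis-1 unit component
    curvature = d0 + d1                           # div(grad phi / |grad phi|)
    return phi + dt * g * mag * curvature

# Toy "image" with a bright square, and an edge-stopping function g = 1/(1+|grad I|^2)
I = np.zeros((64, 64)); I[20:44, 20:44] = 1.0
g0, g1 = np.gradient(I)
g = 1.0/(1.0 + g0**2 + g1**2)

# Initial level-set function: signed distance to a circle of radius 25
yy, xx = np.mgrid[0:64, 0:64]
phi = np.sqrt((xx - 32)**2 + (yy - 32)**2) - 25.0
for _ in range(50):
    phi = level_set_step(phi, g)
print("area enclosed by the zero level set (pixels):", int((phi < 0).sum()))
```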

  17. Modeling a set of heavy oil aqueous pyrolysis experiments

    SciTech Connect

    Thorsness, C.B.; Reynolds, J.G.

    1996-11-01

    Aqueous pyrolysis experiments, aimed at mild upgrading of heavy oil, were analyzed using various computer models. The primary focus of the analysis was the pressure history of the closed autoclave reactors obtained during the heating of the autoclave to the desired reaction temperatures. The models used included a means of estimating nonideal behavior of primary components with regard to vapor-liquid equilibrium. The modeling indicated that to match measured autoclave pressures, which often were well below the vapor pressure of water at a given temperature, it was necessary to incorporate water solubility in the oil phase and an activity model for the water in the oil phase which reduced its fugacity below that of pure water. Analysis also indicated that the mild to moderate upgrading of the oil which occurred in experiments that reached 400 °C or more using Fe(III) 2-ethylhexanoate could be reasonably well characterized by a simple first-order rate constant of 1.7x10^8 exp(-20000/T) s^-1. Both gas production and API gravity increase were characterized by this rate constant. The models were able to match the complete pressure history of the autoclave experiments fairly well with relatively simple equilibrium models. However, a consistently lower-than-measured buildup in pressure at peak temperatures was noted in the model calculations. This phenomenon was tentatively attributed to an increase in the amount of water entering the vapor phase caused by a change in its activity in the oil phase.
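    The quoted rate expression can be evaluated directly. The short script below tabulates the first-order rate constant and the corresponding half-life at a few temperatures; the expression is taken from the abstract, while the temperature grid is arbitrary.

```python
import numpy as np

def k_upgrade(T_kelvin):
    """First-order rate constant k = 1.7e8 * exp(-20000/T) [1/s], T in kelvin."""
    return 1.7e8 * np.exp(-20000.0 / T_kelvin)

for T_c in (350.0, 375.0, 400.0, 425.0):
    T = T_c + 273.15
    k = k_upgrade(T)
    print(f"{T_c:.0f} C: k = {k:.2e} 1/s, half-life ~ {np.log(2)/k/3600:.1f} h")
```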

  18. Flexor tendon excursion and load during passive and active simulated motion: a cadaver study.

    PubMed

    Sapienza, A; Yoon, H K; Karia, R; Lee, S K

    2013-11-01

    The aim of this study was to quantify the amount of tendon excursion and load experienced during simulated active and passive rehabilitation exercises. Six cadaver specimens were utilized to examine tendon excursion and load. Lateral fluoroscopic images were used to measure the excursions of metal markers placed in the flexor digitorum superficialis and profundus tendons of the index, middle, and ring fingers. Measurements were performed during ten different passive and active simulated motions. Mean tendon forces were higher in all active versus passive movements. Blocking movements placed the highest loads on the flexor tendons. Active motion resulted in higher tendon excursion than did passive motion. Simulated hook position resulted in the highest total tendon excursion and the highest inter-tendinous excursion. This knowledge may help optimize the management of the post-operative exercise therapy regimen. PMID:23221181

  19. Writing Models: Strategies for Writing Composition in Inclusive Settings.

    ERIC Educational Resources Information Center

    Staal, Laura A.

    2001-01-01

    Considers how expressive written language is considered one of the most difficult areas of academic achievement for children, especially those with learning disabilities. Discusses two narrative writing models: the story frame and the story pyramid. (SG)

  20. Analysis of energy balance models using the ERBE data set

    NASA Technical Reports Server (NTRS)

    Graves, Charles E.; North, Gerald R.

    1991-01-01

    A review of Energy Balance Models is presented. Results from the Outgoing Longwave Radiation parameterization are discussed. The albedo parameterizations and the consequences of the new parameterizations are examined.

  1. Geolab in NASA's First Generation Pressurized Excursion Module: Operational Concepts

    NASA Technical Reports Server (NTRS)

    Evans, C. A.; Bell, M. S.; Calway, M. J.

    2010-01-01

    We are building a prototype laboratory for preliminary examination of geological samples to be integrated into a first generation Habitat Demonstration Unit-1/Pressurized Excursion Module (HDU1-PEM) in 2010. The laboratory, GeoLab, will be equipped with a glovebox for handling samples and a suite of instruments for collecting preliminary data to help characterize those samples. The GeoLab and the HDU1-PEM will be tested for the first time as part of the 2010 Desert Research and Technology Studies (DRATS), NASA's annual field exercise designed to test analog mission technologies. The HDU1-PEM and GeoLab will participate in joint operations in northern Arizona with two Lunar Electric Rovers (LER) and the DRATS science team. Historically, science participation in DRATS exercises has supported the technology demonstrations with geological traverse activities that are consistent with preliminary concepts for lunar surface science Extravehicular Activities (EVAs). Next year's HDU1-PEM demonstration is a starting point to guide the development of requirements for the Lunar Surface Systems Program and test initial operational concepts for an early lunar excursion habitat that would follow geological traverses along with the LER. For the GeoLab, these objectives are specifically applied to enable future geological surface science activities. The goal of our GeoLab is to enhance geological science returns with the infrastructure that supports preliminary examination, early analytical characterization of key samples, insight into special considerations for curation, and data for prioritization of lunar samples for return to Earth.

  2. Self-reversal and apparent magnetic excursions in Arctic sediments

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.; Xuan, C.

    2009-06-01

    The Arctic oceans have been fertile ground for the recording of apparent excursions of the geomagnetic field, implying that the high latitude field had unusual characteristics at least over the last 1-2 Myrs. Alternating field demagnetization of the natural remanent magnetization (NRM) of Core HLY0503-6JPC from the Mendeleev Ridge (Arctic Ocean) implies the presence of primary magnetizations with negative inclination apparently recording excursions in sediments deposited during the Brunhes Chron. Thermal demagnetization, on the other hand, indicates the presence of multiple (often anti-parallel) magnetization components with negative inclination components having blocking temperatures predominantly, but not entirely, below ~ 350 °C. Thermo-magnetic tests, X-ray diffraction (XRD) and scanning electron microscopy (SEM) indicate that the negative inclination components are carried by titanomaghemite, presumably formed by seafloor oxidation of titanomagnetite. The titanomaghemite apparently carries a chemical remanent magnetization (CRM) that is partially self-reversed relative to the detrital remanent magnetization (DRM) carried by the host titanomagnetite. The partial self-reversal could have been accomplished by ionic ordering during oxidation, thereby changing the balance of the magnetic moments in the ferrimagnetic sublattices.

  3. Comparison of Diaphragmatic Breathing Exercise, Volume and Flow Incentive Spirometry, on Diaphragm Excursion and Pulmonary Function in Patients Undergoing Laparoscopic Surgery: A Randomized Controlled Trial

    PubMed Central

    Anand, R.

    2016-01-01

    Objective. To evaluate the effects of diaphragmatic breathing exercises and flow- and volume-oriented incentive spirometry on pulmonary function and diaphragm excursion in patients undergoing laparoscopic abdominal surgery. Methodology. We selected 260 patients posted for laparoscopic abdominal surgery and block randomized them as follows: 65 patients performed diaphragmatic breathing exercises, 65 patients performed flow incentive spirometry, 65 patients performed volume incentive spirometry, and 65 patients participated as a control group. All of them underwent evaluation of pulmonary function with measurement of Forced Vital Capacity (FVC), Forced Expiratory Volume in the first second (FEV1), and Peak Expiratory Flow Rate (PEFR), and diaphragm excursion measurement by ultrasonography before the operation and on the first and second postoperative days. The level of significance was set at p < 0.05. Results. Pulmonary function and diaphragm excursion showed a significant decrease on the first postoperative day in all four groups (p < 0.001), but this was more evident in the control group than in the experimental groups. On the second postoperative day pulmonary function (Forced Vital Capacity) and diaphragm excursion were found to be better preserved in the volume incentive spirometry and diaphragmatic breathing exercise groups than in the flow incentive spirometry group and the control group. Pulmonary function (Forced Vital Capacity) and diaphragm excursion showed statistically significant differences between the volume incentive spirometry and diaphragmatic breathing exercise groups (p < 0.05) as compared to the flow incentive spirometry group and the control group. Conclusion. Volume incentive spirometry and diaphragmatic breathing exercise can be recommended as an intervention for all patients pre- and postoperatively, over flow-oriented incentive spirometry, for the generation and sustenance of pulmonary function and diaphragm excursion in the management of laparoscopic

  4. Comparison of Diaphragmatic Breathing Exercise, Volume and Flow Incentive Spirometry, on Diaphragm Excursion and Pulmonary Function in Patients Undergoing Laparoscopic Surgery: A Randomized Controlled Trial.

    PubMed

    Alaparthi, Gopala Krishna; Augustine, Alfred Joseph; Anand, R; Mahale, Ajith

    2016-01-01

    Objective. To evaluate the effects of diaphragmatic breathing exercises and flow- and volume-oriented incentive spirometry on pulmonary function and diaphragm excursion in patients undergoing laparoscopic abdominal surgery. Methodology. We selected 260 patients posted for laparoscopic abdominal surgery and block randomized them as follows: 65 patients performed diaphragmatic breathing exercises, 65 patients performed flow incentive spirometry, 65 patients performed volume incentive spirometry, and 65 patients participated as a control group. All of them underwent evaluation of pulmonary function with measurement of Forced Vital Capacity (FVC), Forced Expiratory Volume in the first second (FEV1), and Peak Expiratory Flow Rate (PEFR), and diaphragm excursion measurement by ultrasonography before the operation and on the first and second postoperative days. The level of significance was set at p < 0.05. Results. Pulmonary function and diaphragm excursion showed a significant decrease on the first postoperative day in all four groups (p < 0.001), but this was more evident in the control group than in the experimental groups. On the second postoperative day pulmonary function (Forced Vital Capacity) and diaphragm excursion were found to be better preserved in the volume incentive spirometry and diaphragmatic breathing exercise groups than in the flow incentive spirometry group and the control group. Pulmonary function (Forced Vital Capacity) and diaphragm excursion showed statistically significant differences between the volume incentive spirometry and diaphragmatic breathing exercise groups (p < 0.05) as compared to the flow incentive spirometry group and the control group. Conclusion. Volume incentive spirometry and diaphragmatic breathing exercise can be recommended as an intervention for all patients pre- and postoperatively, over flow-oriented incentive spirometry, for the generation and sustenance of pulmonary function and diaphragm excursion in the management of laparoscopic

  5. Non-deterministic fatigue life analysis using convex set models

    NASA Astrophysics Data System (ADS)

    Sun, WenCai; Yang, ZiChun; Li, KunFeng

    2013-04-01

    The non-probabilistic approach to fatigue life analysis was studied using the convex models—interval, ellipsoidal and multi-convex models. The lower and upper bounds of the fatigue life were obtained by using the second-order Taylor series and Lagrange multiplier method. The solving process for derivatives of the implicit life function was presented. Moreover, a median ellipsoidal model was proposed which can take into account the sample blind zone and almost impossibility of concurrence of some small probability events. The Monte Carlo method for multi-convex model was presented, an important alternative when the analytical method does not work. A project example was given. The feasibility and rationality of the presented approach were verified. It is also revealed that the proposed method is conservative compared to the traditional probabilistic method, but it is a useful complement when it is difficult to obtain the accurate probability densities of parameters.
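    The Monte Carlo alternative mentioned for convex-set models can be sketched by sampling uniformly inside an ellipsoidal parameter set and recording the extreme values of a life function. The two-parameter life function and the ellipsoid below are toy stand-ins, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(3)

x0 = np.array([400.0, 1e-3])            # nominal stress amplitude [MPa], initial flaw size [m]
W = np.diag([1/50.0**2, 1/(2e-4)**2])   # ellipsoid: (x - x0)^T W (x - x0) <= 1

def life(x):
    """Toy Basquin/Paris-style life estimate [cycles]; purely illustrative."""
    S, a0 = x
    return 1e12 * S**-3 * a0**-0.5

# Sample uniformly inside the ellipsoid by mapping the unit disc through L, L L^T = W^-1
n = 20000
u = rng.standard_normal((n, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)
r = rng.random(n)**0.5                  # uniform radius in 2-D
L = np.linalg.cholesky(np.linalg.inv(W))
x = x0 + (r[:, None]*u) @ L.T

lives = np.array([life(xi) for xi in x])
print("fatigue-life bounds [cycles]:", lives.min(), lives.max())
```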

  6. Regional Dimensions of the Triple Helix Model: Setting the Context

    ERIC Educational Resources Information Center

    Todeva, Emanuela; Danson, Mike

    2016-01-01

    This paper introduces the rationale for the special issue and its contributions, which bridge the literature on regional development and the Triple Helix model. The concept of the Triple Helix at the sub-national, and specifically regional, level is established and examined, with special regard to regional economic development founded on…

  7. Leader Succession: A Model and Review for School Settings.

    ERIC Educational Resources Information Center

    Miskel, Cecil; Cosgrove, Dorothy

    Recent research casts doubt on the commonly held notions that administrators affect student learning through instructional leadership and that changing administrators will improve school performance. To help construct a model for examining the process of leader succession that specifies a number of major school process and outcome variables…

  8. Control and Synchronization of Julia Sets in the Forced Brusselator Model

    NASA Astrophysics Data System (ADS)

    Sun, Weihua; Zhang, Yongping

    The forced Brusselator model is investigated from the fractal viewpoint. A Julia set of the discrete version of the Brusselator model is introduced, and control of the Julia set is presented using feedback control. In order to discuss the relations between two different Julia sets, a coupling term is designed to realize the synchronization of two Julia sets with different parameters; this provides a method to study how two different Julia sets are related and how they change, so that one Julia set can be changed into the other. Numerical simulations are used to verify the effectiveness of these methods.

  9. [Setting up of a modelling workshop in an Alzheimer's unit].

    PubMed

    Guérin-Billard, Anne; Deray, Céline; Denis, Amael; Guillon, Emmanuelle

    2012-01-01

    Modelling is an original and creative form of therapy and above all one which is accessible when the limits of cognitive care have been reached. Salt dough is a malleable, sensitive and multi-sensory mediator which is forgiving of errors. Without the use of any known technique or objective as a reference, this activity avoids any notion of failure. This workshop is an area for expression and care and the mediation is interesting for its therapeutic potential. PMID:22519136

  10. Buried-euxenic-basin model sets Tarim basin potential

    SciTech Connect

    Hsu, K.J.

    1994-11-28

    The Tarim basin is the largest of the three large sedimentary basins of Northwest China. The North and Southwest depressions of Tarim are underlain by thick sediments and very thin crust. The maximum sediment thickness is more than 15 km. Of the several oil fields of Tarim, the three major fields were discovered during the last decade, on the north flank of the North depression and on the Central Tarim Uplift. The major targets of Tarim, according to the buried-euxenic-basin model, should be upper Paleozoic and lower Mesozoic reservoirs trapping oil and gas condensates from lower Paleozoic source beds. The paper describes the basin and gives a historical perspective of exploration activities and discoveries. It then explains how this basin can be interpreted by the buried-euxenic-basin model. The buried-euxenic-basin model postulates four stages of geologic evolution: (1) Sinian and early Paleozoic platform sedimentation on relic arcs and deep-marine sedimentation in back-arc basins in Xinjiang; (2) Late Paleozoic foreland-basin sedimentation in north Tarim; (3) Mesozoic and Paleogene continental deposition, subsidence under sedimentary load; and (4) Neogene pull-apart basin, wrench faulting and extension.

  11. A Move to an Innovative Games Teaching Model: Style E Tactical (SET)

    ERIC Educational Resources Information Center

    Nathan, Sanmuga; Haynes, John

    2013-01-01

    This paper reports on the development and testing of a hybrid model of teaching games--The Style "E" Tactical (SET) Model. The SET is a combination of two pedagogical approaches: Mosston and Ashworth's Teaching Styles and Bunker and Thorpe's Teaching Games for Understanding (TGfU). To test the efficacy of this new model, the…

  12. Causal Inference and Model Selection in Complex Settings

    NASA Astrophysics Data System (ADS)

    Zhao, Shandong

    Propensity score methods have become a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply for problems of this nature, and a doable fix to correctly
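    Of the methods reviewed above, inverse propensity weighting is the simplest to illustrate. The sketch below estimates an average treatment effect on synthetic data with a binary treatment; the data-generating process is invented, and the non-binary generalizations (P-FUNCTION, GPS) are not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
x = rng.standard_normal((n, 2))                              # observed confounders
p_treat = 1.0/(1.0 + np.exp(-(0.5*x[:, 0] - 0.5*x[:, 1])))   # true propensity
t = rng.binomial(1, p_treat)                                 # binary treatment
y = 1.0*t + x[:, 0] + rng.standard_normal(n)                 # true treatment effect = 1

e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]    # estimated propensity score
w = t/e + (1 - t)/(1 - e)                                    # inverse propensity weights
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))           # normalized (Hajek) IPW estimate
print("IPW estimate of the average treatment effect:", ate)
```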

  13. Implicit level set algorithms for modelling hydraulic fracture propagation.

    PubMed

    Peirce, A

    2016-10-13

    Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. PMID:27597787

  14. Polarity and Excursion Transitions: Can they be Adequately Recorded in High-Sedimentation-Rate Marine Sediments?

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.

    2014-12-01

    Polarity transitions and magnetic excursions have durations of a few thousand years, or less. Transition/excursion records in volcanic sequences are, at best, partial snap-shots of the transition/excursion field. Records from high-sedimentation-rate marine sediments may be more continuous but they are always smoothed by progressive acquisition of detrital remanent magnetization (DRM), and by sampling/measurement limitations. North Atlantic records of the Matuyama-Brunhes (M-B) polarity transition are compared with records of the Iceland Basin excursion (190 ka). Virtual geomagnetic polar (VGP) paths are used to map characteristic magnetization directions during the transition/excursion. Relative paleointensity (RPI) proxies indicate partial recovery of field intensity during the transition/excursion, with RPI minima coinciding with abrupt VGP shifts at the onset and end of the transition/excursion. Discrepancies in VGP paths among holes at the same site, among sites, and a comparison of u-channel and discrete sample measurements, reveal limitations in resolution of the transition/excursion fields. During the M-B polarity transition, VGP clusters appear in the NW Pacific, NE Asia and in the South Atlantic. Similarities in VGP clustering for the M-B boundary and the Iceland Basin excursion imply that the polarity transition and excursion fields had common characteristics. Similarities with the modern non-axial dipole (NAD) field imply that polarity transitions and excursions involve the demise of the Earth's axial dipole relative to the NAD field, and that the NAD field has long-lasting features locked in place by the lowermost mantle.

  15. High-resolution palaeomagnetic records of the Laschamp geomagnetic excursion from ODP Sites 1061 and 1062

    NASA Astrophysics Data System (ADS)

    Bourne, M. D.; Henderson, G. M.; Thomas, A. L.; Mac Niocaill, C.

    2012-12-01

    The Laschamp geomagnetic excursion (~41 ka) was a brief global deviation in geomagnetic field behaviour from that expected during normal secular variation. Previously published records suggest rapid changes in field direction and a concurrent substantial decrease in field intensity. We present here high-resolution palaeomagnetic records of the Laschamp excursion obtained from two Ocean Drilling Program (ODP) Sites 1061 and 1062 on the Blake-Bahama Outer Ridge (ODP Leg 172) and compare this record with previously published records of the Blake and Iceland Basin Excursions. Relatively high sedimentation rates (>10 cm kyr-1) at these locations allow the determination of transitional field behaviour during the excursion. Rather than assuming a constant sedimentation rate between assigned age tie-points, we employ measurements of 230Thxs concentration in the sediment to assess variations in the sedimentation rates through the core sections of interest. This allows us to better determine the temporal behaviour of the Laschamp excursion with greater accuracy and known uncertainty. The Laschamp excursion at this location appears to be much shorter in duration than the Blake and Iceland Basin excursions. Palaeomagnetic measurements of discrete samples from four cores reveal a single excursional feature, across an interval of 30 cm, associated with a broader palaeointensity low. The excursion is characterised by rapid transitions (less than 500 years) between stable normal polarity and a partially reversed polarity state. Peaks in inclination either side of the directional excursion indicate periods of time when the local field is dominated by vertical flux patches. Similar behaviour has been observed in records of the Iceland Basin Excursion from the same region. The palaeointensity record is in good agreement between the two sites. The palaeointensity record shows two minima, where the second dip in intensity is associated with a more limited directional deviation. Similar

  16. Multiple High-Frequency Carbon Isotope Excursions Across the Precambrian-Cambrian Boundary: Implications for Correlations and Environmental Change

    NASA Astrophysics Data System (ADS)

    Smith, E. F.; Macdonald, F. A.; Schrag, D. P.; Laakso, T.

    2014-12-01

    The GSSP Precambrian-Cambrian boundary in Newfoundland is defined by the first appearance datum (FAD) of Treptichnus pedum, which is considered to be roughly coincident with the FAD of small shelly fossils (SSFs) and a large negative carbon isotope excursion. An association between the FAD of T. pedum and a negative carbon isotope excursion has previously been documented in Northwest Canada (Narbonne et al., 1994) and Death Valley (Corsetti and Hagadorn, 2000), and since then has been used as a chronostratigraphic marker of the boundary, particularly in siliciclastic-poor sections that do not preserve T. pedum. Here we present new high-resolution carbon isotope (δ13C) chemostratigraphy from multiple sections in western Mongolia and the western United States that span the Ediacaran-Cambrian transition. High-resolution sampling (0.2-1 m) reveals that instead of one large negative excursion, there are multiple, high-frequency negative excursions with an overall negative trend during the latest Ediacaran. These data help to more precisely calibrate changes in the carbon cycle across the boundary as well as to highlight the potential problem of identifying the boundary with just a few negative δ13C values. We then use a simple carbon isotope box model to explore relationships between phosphorus delivery to the ocean, oxygenation, alkalinity, and turnovers in carbonate-secreting organisms. Corsetti, F.A., and Hagadorn, J.W., 2000, Precambrian-Cambrian transition: Death Valley, United States: Geology, v. 28, no. 4, p. 299-302. Narbonne, G.M., Kaufman, A.J., and Knoll, A.H., 1994, Integrated chemostratigraphy and biostratigraphy of the Windermere Supergroup, northwestern Canada: Implications for Neoproterozoic correlations and the early evolution of animals: Geological Society of America Bulletin, v. 106, no. 10, p. 1281-1292.
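    The steady-state carbon-isotope mass balance that underlies box models of this kind can be written in a few lines. The generic version below (textbook-style input and fractionation values, not the authors' model) returns the organic burial fraction implied by a carbonate δ13C value; it goes negative for strongly negative values, one way to see why large negative excursions strain simple mass balance.

```python
# Steady-state one-box carbon-isotope mass balance: the isotopic input delta_in is
# partitioned between carbonate burial (delta_carb) and organic burial (delta_carb - eps).
def f_org(delta_carb, delta_in=-6.0, eps=28.0):
    """Fraction of carbon buried as organic matter implied by delta_carb (all in permil)."""
    return (delta_carb - delta_in) / eps

for d in (5.0, 0.0, -5.0, -10.0):
    print(f"delta13C_carb = {d:+.0f} permil -> f_org = {f_org(d):.2f}")
```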

  17. The Earth climate and life evolution response to cosmic radiation enhancement arising from reversals and excursions of geomagnetic field

    NASA Astrophysics Data System (ADS)

    Kuznetsova, N.

    Abrupt climate warming, as well as biological evolutionary events concerning fauna and human evolution, is shown to originate during reversals and excursions of the geomagnetic field, when the field loses much of its strength (module value) and consequently its protective capacity, allowing galactic cosmic rays (GCR) and solar protons to penetrate into the Earth's atmosphere. Usually preceded by climate cooling and population reductions, reversals and excursions stimulate evolutionary genetic mutations generated by intense radiation, together with abrupt climate warming resulting from the destruction of stratospheric aerosols by GCR. Favorable environmental conditions then promote the origin of new features and species. For example, it was the Gauss-Matuyama reversal (2-3 Myr) that made for hominid evolutionary mutations and for the origin of the distinctly new species Homo erectus. These evolutionary events and climate shifts appear explicable in the context of a fundamentally new model of geomagnetic field generation, based on the hypothesis of the hot Earth and on the theory of the drift of the Earth's magnetic poles throughout reversals and excursions.

  18. Geolab 2010 Hardware in NASA's Pressurized Excursion Module

    NASA Technical Reports Server (NTRS)

    Calaway, M. J.; Evans, C. A.; Bell, M. S.

    2010-01-01

    NASA is designing and building the Habitat Demonstration Unit - 1 in a Pressurized Excursion Module (HDU1-PEM) configuration for analog testing at NASA's annual Desert Research and Technology Studies (DRATS) near Flagstaff, Arizona in late summer 2010. The HDU1-PEM design is based on NASA Constellation program's Lunar Scenario 12.1 (nicknamed "Lunabago"). This scenario uses Lunar Electric Rovers (LER) and a PEM for long-distance lunar exploration. The 2010 HDU version 1.0 will be an unpressurized PEM that contains a GeoLab in one of the eight PEM sections. The GeoLab is designed to facilitate sample curation protocol development including such activities as preliminary examination, sample archiving, and high grading of astromaterials for return to Earth. GeoLab operations will be integrated with the DRATS science traverses and operations.

  19. Evolution of Autocatalytic Sets in Computational Models of Chemical Reaction Networks.

    PubMed

    Hordijk, Wim

    2016-06-01

    Several computational models of chemical reaction networks have been presented in the literature in the past, showing the appearance and (potential) evolution of autocatalytic sets. However, the notion of autocatalytic sets has been defined differently in different modeling contexts, each one having some shortcoming or limitation. Here, we review four such models and definitions, and then formally describe and analyze them in the context of a mathematical framework for studying autocatalytic sets known as RAF theory. The main results are that: (1) RAF theory can capture the various previous definitions of autocatalytic sets and is therefore more complete and general, (2) the formal framework can be used to efficiently detect and analyze autocatalytic sets in all of these different computational models, (3) autocatalytic (RAF) sets are indeed likely to appear and evolve in such models, and (4) this could have important implications for a possible metabolism-first scenario for the origin of life. PMID:26499126

  20. Evolution of Autocatalytic Sets in Computational Models of Chemical Reaction Networks

    NASA Astrophysics Data System (ADS)

    Hordijk, Wim

    2016-06-01

    Several computational models of chemical reaction networks have been presented in the literature in the past, showing the appearance and (potential) evolution of autocatalytic sets. However, the notion of autocatalytic sets has been defined differently in different modeling contexts, each one having some shortcoming or limitation. Here, we review four such models and definitions, and then formally describe and analyze them in the context of a mathematical framework for studying autocatalytic sets known as RAF theory. The main results are that: (1) RAF theory can capture the various previous definitions of autocatalytic sets and is therefore more complete and general, (2) the formal framework can be used to efficiently detect and analyze autocatalytic sets in all of these different computational models, (3) autocatalytic (RAF) sets are indeed likely to appear and evolve in such models, and (4) this could have important implications for a possible metabolism-first scenario for the origin of life.
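    A minimal sketch of RAF detection in the spirit of the framework referred to above: iteratively discard reactions whose reactants are not reachable from the food set, or that are not catalysed by a reachable molecule, until a fixed point (the maxRAF) remains. The toy reaction network is invented, and published RAF algorithms are considerably more efficient than this illustration.

```python
def closure(food, reactions):
    """Molecules producible from the food set using the given reactions."""
    mols = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= mols and not set(products) <= mols:
                mols |= set(products)
                changed = True
    return mols

def max_raf(food, reactions):
    """Fixed-point reduction: keep reactions that are food-generated and catalysed."""
    current = list(reactions)
    while True:
        mols = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= mols and any(c in mols for c in r[2])]
        if len(kept) == len(current):
            return kept
        current = kept

# (reactants, products, catalysts); hypothetical example network
reactions = [
    (("a", "b"), ("ab",), ("abba",)),
    (("ab", "b"), ("abb",), ("ab",)),
    (("abb", "a"), ("abba",), ("abb",)),
    (("x", "y"), ("xy",), ("zz",)),      # reactants/catalyst never producible -> dropped
]
print(max_raf({"a", "b"}, reactions))
```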

  1. Model Developments for Development of Improved Emissions Scenarios: Developing Purchasing-Power Parity Models, Analyzing Uncertainty, and Developing Data Sets for Gridded Integrated Assessment Models

    SciTech Connect

    Yang, Zili; Nordhaus, William

    2009-03-19

    Over the duration of this project, we finished the main tasks set out in the initial proposal. These tasks include: setting up the basic platform in the GAMS language for the new RICE 2007 model; testing various model structures of RICE 2007; incorporating the PPP data set in the new RICE model; and developing a gridded data set for IA modeling.

  2. Trunk-Rotation Differences at Maximal Reach of the Star Excursion Balance Test in Participants With Chronic Ankle Instability

    PubMed Central

    de la Motte, Sarah; Arnold, Brent L.; Ross, Scott E.

    2015-01-01

    Context: Functional reach on the Star Excursion Balance Test is decreased in participants with chronic ankle instability (CAI). However, comprehensive 3-dimensional kinematics associated with these deficits have not been reported. Objective: To determine if lower extremity kinematics differed in CAI participants during anteromedial, medial, and posteromedial reach on the Star Excursion Balance Test. Design: Case-control study. Setting: Sports medicine research laboratory. Patients or Other Participants: Twenty CAI participants (age = 24.15 ± 3.84 years, height = 168.95 ± 11.57 cm, mass = 68.95 ± 16.29 kg) and 20 uninjured participants (age = 25.65 ± 5.58 years, height = 170.14 ± 8.75 cm, mass = 69.89 ± 10.51 kg) with no history of ankle sprain. We operationally defined CAI as repeated episodes of ankle “giving way” or “rolling over” or both, regardless of neuromuscular deficits or pathologic laxity. All CAI participants scored ≤26 on the Cumberland Ankle Instability Tool. Intervention(s): Star Excursion Balance Test reaches in the anteromedial, medial, and posteromedial directions. The CAI participants used the unstable side as the stance leg. Control participants were sex, height, mass, and side matched to the CAI group. The 3-dimensional kinematics were assessed with a motion-capture system. Main Outcome Measure(s): Group differences on normalized reach distance, trunk, pelvis, and hip-, knee-, and ankle-joint angles at maximum Star Excursion Balance Test reach. Results: No reach-distance differences were detected between CAI and uninjured participants in any of the 3 reach directions. With anteromedial reach, trunk rotation (t1,38 = 3.06, P = .004), pelvic rotation (t1,38 = 3.17, P = .003), and hip flexion (t1,38 = 2.40, P = .002) were greater in CAI participants. With medial reach, trunk flexion (t1,38 = 6.39, P = .05) was greater than for uninjured participants. No differences were seen with posteromedial reach. Conclusions: We did not detect

  3. Assessing the Reliability of Ultrasound Imaging to Examine Radial Nerve Excursion.

    PubMed

    Kasehagen, Ben; Ellis, Richard; Mawston, Grant; Allen, Scott; Hing, Wayne

    2016-07-01

    Ultrasound imaging allows cost-effective in vivo analysis for quantifying peripheral nerve excursion. This study used ultrasound imaging to quantify longitudinal radial nerve excursion during various active and passive wrist movements in healthy participants. Frame-by-frame cross-correlation software allowed calculation of nerve excursion from video sequences. The reliability of ultrasound measurement of longitudinal radial nerve excursion was moderate to high (intraclass correlation coefficient range = 0.63-0.86, standard error of measurement 0.19-0.48). Radial nerve excursion ranged from 0.41 to 4.03 mm induced by wrist flexion and 0.28 to 2.91 mm induced by wrist ulnar deviation. No significant difference was seen in radial nerve excursion during either wrist movement (p > 0.05). Wrist movements performed in forearm supination produced larger overall nerve excursion (1.41 ± 0.32 mm) compared with those performed in forearm pronation (1.06 ± 0.31 mm) (p < 0.01). Real-time ultrasound is a reliable, cost-effective, in vivo method for analysis of radial nerve excursion. PMID:27087692
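    The frame-by-frame cross-correlation idea behind such excursion measurements can be sketched in one dimension: estimate the pixel shift between consecutive frames from the peak of their cross-correlation and accumulate the shifts. The synthetic speckle line below is invented, and the study's dedicated tracking software is not reproduced.

```python
import numpy as np

def frame_shift(prev_line, next_line):
    """Integer-pixel shift of next_line relative to prev_line from the correlation peak."""
    prev = prev_line - prev_line.mean()
    nxt = next_line - next_line.mean()
    corr = np.correlate(nxt, prev, mode="full")
    return int(np.argmax(corr)) - (len(prev) - 1)

rng = np.random.default_rng(5)
pattern = rng.random(200)                          # synthetic 1-D "speckle" intensity line
true_shifts = [0, 1, 2, 2, 1, 0, -1, -2]           # absolute shift of each frame, in pixels
frames = [np.roll(pattern, s) + 0.05*rng.standard_normal(200) for s in true_shifts]

# Accumulate frame-to-frame shifts to recover the excursion trajectory
excursion_px = np.cumsum([frame_shift(frames[i], frames[i + 1])
                          for i in range(len(frames) - 1)])
print("cumulative excursion (pixels):", excursion_px)
```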

  4. Place-Responsive Pedagogy: Learning from Teachers' Experiences of Excursions in Nature

    ERIC Educational Resources Information Center

    Mannion, Greg; Fenwick, Ashley; Lynch, Jonathan

    2013-01-01

    The nature-based excursion has been a significant teaching strategy in environmental education for decades. This article draws upon empirical data from a collaborative research project where teachers were encouraged to visit natural areas to provide an understanding of their roles and experiences of planning and enacting excursions. The analysis…

  5. An Exploration of the Value of an Educational Excursion for Pre-Service Teachers

    ERIC Educational Resources Information Center

    de Beer, Josef; Petersen, Nadine; Dubar-Krige, Helen

    2012-01-01

    This paper addresses the question: What is the value of an educational excursion for first year students enrolled in a 4 year pre-service professional teacher education degree at the University of Johannesburg in South Africa? The excursion is an integral part of a first year module that focuses on the personal and professional development of…

  6. Be a Healthy Role Model for Children: 10 Tips for Setting Good Examples

    MedlinePlus

    ... Nutrition Education Series tips: be a healthy role model for children; 10 tips for setting good examples ... replacement foods. 10. Be a good food role model: try new foods yourself; describe their taste, texture, ...

  7. Development of an Optimum Tracer Set for Apportioning Emissions of Individual Power Plants Using Highly Time-Resolved Measurements and Advanced Receptor Modeling

    SciTech Connect

    John Ondov; Gregory Beachley

    2007-07-05

    In previous studies, 11 elements (Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Se, and Zn) were determined in 30-minute aerosol samples collected with the University of Maryland Semicontinuous Elements in Aerosol Sampler (SEAS; Kidwell and Ondov, 2001, 2004; SEAS-II) in several locations in which air quality is influenced by emissions from coal- or oil-fired power plants. At this time resolution, plumes from stationary high temperature combustion sources are readily detected as large excursions in ambient concentrations of elements emitted by these sources (Pancras et al.). Moreover, the time-series data contain intrinsic information on the lateral diffusion of the plume (e.g., σy), which Park et al. (2005 and 2006) have exploited in their Pseudo-Deterministic Receptor Model (PDRM) to calculate emission rates of SO2 and 11 elements (mentioned above) from four individual coal- and oil-fired power plants in the Tampa Bay area. In the current project, we proposed that the resolving power of source apportionment methods might be improved by expanding the set of marker species and that there exists some optimum set of marker species that could be used. The ultimate goal was to determine the utility of using additional elements to better identify and isolate contributions of individual power plants to ambient levels of PM and its constituents. And, having achieved better resolution, to also achieve better emission rate estimates. In this study, we optimized sample preparation and instrumental protocols for simultaneous analysis of 28 elements in dilute slurry samples collected with the SEAS with a new state-of-the-art Thermo-Systems, Inc., X-series II inductively coupled plasma mass spectrometer (ICP-MS), and reanalyzed the samples previously collected in Tampa during the modeling period studied by Park et al. (2005) in which emission rates from four coal- and oil-fired power plants affected air quality at the sampling site. In the original model, Park et al

  8. Video Self-Modeling Intervention in School-Based Settings: A Review.

    ERIC Educational Resources Information Center

    Hitchcock, Caryl H.; Dowrick, Peter W.; Prater, Mary Anne

    2003-01-01

    This review examined 18 studies in which video self-modeling was applied in school-based settings. These studies verify the functional control of targeted academic skills and behaviors, and support the efficacy of video self-modeling for improving student outcomes. Evidence for generalization across settings and maintenance over time is also…

  9. Ca and Mg isotope constraints on the origin of Earth's deepest δ13 C excursion

    NASA Astrophysics Data System (ADS)

    Husson, Jon M.; Higgins, John A.; Maloof, Adam C.; Schoene, Blair

    2015-07-01

    Understanding the extreme carbon isotope excursions found in carbonate rocks of the Ediacaran Period (635-541 Ma), where δ13C of marine carbonates (δ13Ccarb) reach their minimum (−12‰) for Earth history, is one of the most vexing problems in Precambrian geology. Known colloquially as the 'Shuram' excursion, the event has been interpreted by many as a product of a profoundly different Ediacaran carbon cycle. More recently, diagenetic processes have been invoked, with the very negative δ13C values of Ediacaran carbonates explained via meteoric alteration, late-stage burial diagenesis or growth of authigenic carbonates in the sediment column, thus challenging models which rely upon a dramatically changing redox state of the Ediacaran oceans. Here we present 257 δ44/40Ca and 131 δ26Mg measurements, along with [Mg], [Mn] and [Sr] data, from carbonates of the Ediacaran-aged Wonoka Formation (Fm.) of South Australia to bring new isotope systems to bear on understanding the 'Shuram' excursion. Data from four measured sections spanning the basin reveal stratigraphically coherent trends, with variability of ∼1.5‰ in δ26Mg and ∼1.2‰ in δ44/40Ca. This Ca isotope variability dwarfs the 0.2-0.3‰ change seen coeval with the Permian-Triassic mass extinction, the largest recorded in the rock record, and is on par with putative changes in the δ44/40Ca value of seawater seen over the Phanerozoic Eon. Changes in both isotopic systems are too large to explain with changes in the isotopic composition of Ca and Mg in global seawater given modern budgets and residence times, and thus must be products of alternative processes. Relationships between δ44/40Ca and [Sr], and between δ26Mg and [Mg], are consistent with mineralogical control (e.g., aragonite vs. calcite, limestone vs. dolostone) on calcium and magnesium isotope variability. The most pristine samples in the Wonoka dataset, preserving Sr concentrations (in the 1000s of ppm range) and δ44/40

  10. Seeking genericity in the selection of parameter sets: Impact on hydrological model efficiency

    NASA Astrophysics Data System (ADS)

    Andréassian, Vazken; Bourgin, François; Oudin, Ludovic; Mathevet, Thibault; Perrin, Charles; Lerat, Julien; Coron, Laurent; Berthet, Lionel

    2014-10-01

    This paper evaluates the use of a small number of generalist parameter sets as an alternative to classical calibration. Here parameter sets are considered generalist when they yield acceptable performance on a large number of catchments. We tested the genericity of an initial collection of 106 parameter sets sampled in the parameter space for the four-parameter GR4J rainfall-runoff model. A short list of 27 generalist parameter sets was obtained as a good compromise between model efficiency and length of the short list. A different data set was used for an independent evaluation of a calibration procedure, in which the search for an optimum parameter set is only allowed within this short list. In validation mode, the performance obtained is inferior to that of a classical calibration, but when the amount of data available for calibration is reduced, the generalist parameter sets become progressively more competitive, with better results for calibration series shorter than 1 year.
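    A minimal sketch of the idea behind such a short list, under the assumption of a simple greedy criterion (the paper's actual selection procedure may differ): score every candidate parameter set on every catchment, then repeatedly add the set that most improves the best score achieved so far across catchments. The efficiency scores below are random stand-ins for real model runs.

```python
# Sketch of building a "generalist" short list of parameter sets by greedy
# coverage of catchments (illustrative only; not the paper's exact procedure).
import numpy as np

rng = np.random.default_rng(0)
n_sets, n_catchments = 200, 50
# Hypothetical efficiency scores (e.g. NSE) of each candidate parameter set on
# each catchment; in practice these come from running the rainfall-runoff model.
scores = rng.uniform(0.2, 0.95, size=(n_sets, n_catchments))

short_list = []
best_per_catchment = np.zeros(n_catchments)   # floor below any score of interest
while len(short_list) < 27:                   # same short-list size as in the paper
    # Pick the set that most improves the best score achieved on each catchment.
    gains = np.maximum(scores - best_per_catchment, 0.0).sum(axis=1)
    best = int(np.argmax(gains))
    if gains[best] <= 0:
        break
    short_list.append(best)
    best_per_catchment = np.maximum(best_per_catchment, scores[best])

print("short list:", short_list)
print("mean best efficiency per catchment:", best_per_catchment.mean().round(3))
```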

  11. A novel level set model with automated initialization and controlling parameters for medical image segmentation.

    PubMed

    Liu, Qingyi; Jiang, Mingyan; Bai, Peirui; Yang, Guang

    2016-03-01

    In this paper, a level set model that does not require manually generating an initial contour or setting controlling parameters is proposed for medical image segmentation. The contribution is threefold. First, we propose a novel adaptive mean shift clustering method based on global image information to guide the evolution of the level set; by simple threshold processing, the mean shift clustering results automatically and rapidly generate an initial contour for the level set evolution. Second, we devise several new functions to estimate the controlling parameters of the level set evolution from the clustering results and image characteristics. Third, the reaction diffusion method is adopted to supersede the distance regularization term of the RSF level set model, which improves the accuracy and speed of segmentation with less manual intervention. Experimental results demonstrate the performance and efficiency of the proposed model for medical image segmentation. PMID:26748038
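    A sketch of the initialization idea only, not the full model: cluster pixel intensities with mean shift, threshold the cluster map into a binary mask, and convert the mask to a signed distance function to seed the level set. The image, bandwidth and cluster-selection rule are illustrative assumptions.

```python
# Sketch of a clustering-based initial contour for a level set (initialization
# step only; the image, bandwidth and cluster-selection rule are assumptions).
import numpy as np
from sklearn.cluster import MeanShift
from scipy.ndimage import distance_transform_edt

# Hypothetical 64x64 "medical" image: bright blob on a noisy background.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
image = 1.0 * ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2) \
        + 0.2 * rng.standard_normal((64, 64))

# Mean shift clustering on pixel intensities (bandwidth chosen for illustration).
labels = MeanShift(bandwidth=0.4).fit_predict(image.reshape(-1, 1))
labels = labels.reshape(image.shape)

# Keep the cluster with the highest mean intensity as the initial region.
target = max(np.unique(labels), key=lambda k: image[labels == k].mean())
mask = labels == target

# Signed distance function: negative inside the region, positive outside.
phi0 = distance_transform_edt(~mask) - distance_transform_edt(mask)
print("initial contour area (pixels):", int(mask.sum()),
      "| phi range:", phi0.min(), phi0.max())
```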

  12. Magnetic Excursion Recorded in Basalt at Newberry Volcano, Central Oregon

    NASA Astrophysics Data System (ADS)

    Champion, D. E.; Donnelly-Nolan, J. M.; Lanphere, M. A.; Ramsey, D. W.

    2004-12-01

    Paleomagnetic study of basalt flows on the north flank of Newberry Volcano has identified a major eruptive episode that occurred during a magnetic excursion. The measured direction of the basalt flows erupted during the excursion shallows from 81° to 76° inclination along a declination of ~155°. The Virtual Geomagnetic Pole also shallows from 29° to 19° paleolatitude, along a paleolongitude of ~250°, and is located off the west coast of Mexico. Geologic evidence combined with limited argon dating indicates that the basalt erupted from multiple sites about 80,000 years ago, probably during the time of anomalous magnetic directions recorded by ocean sediments (~80 ka) in the Norwegian Sea and the North Atlantic. The westernmost flows erupted from spatter vents located a few km south of the city of Bend, and flowed north through lava tube(s) which form Stevens Cave, Horse Cave, and Redmond Cave among others. This western lobe flowed more than 50 km to the north, over NW-trending faults of the Tumalo Fault Zone that cut the adjacent and underlying basalt of Bend (40Ar/39Ar plateau age of 78±9 ka; isochron age of 77±19 ka); it is overlain by the basaltic andesite of Klawhop Butte (40Ar/39Ar plateau age of 39±6 ka). One sample of the transitional magnetic direction basalt has a K-Ar age of 77±40 ka; another sample has a 40Ar/39Ar plateau age of 92±25 ka and an isochron age of 73±24 ka. The eastern lobe erupted from vents at and near Lava Top Butte, located approximately 15 km SE of the western vents. These eastern lavas flowed through Arnold Cave and formed a broad ~10-12 km rootless shield known as the Badlands, the NE extent of which is about 30 km from Lava Top Butte. The west and east lobes each cover about 150 km2, and comprise an estimated volume of 3-5 km3. Newly acquired 10-meter DEMs and compilation of the mapping in ArcGIS will allow more precise calculation of the total area covered and the volume erupted. Chemical analyses of multiple

  13. A scoping study of water table excursions induced by seismic and volcanic events

    SciTech Connect

    Carrigan, C.R.; King, G.C.P.; Barr, G.E.

    1990-11-01

    We develop conservative models of water table response to displacements just beneath the water table simulating (1) shallow intrusion of a dike and (2) high level slip on a normal fault locked at the end. For matrix flow, we find local water table excursions of under 10 m in cases of isotropic permeability, which include dike inflation of 4 m and fault slips corresponding to earthquakes having a moment magnitude of 7.4. Even for enhancements of vertical permeability up to 10^4:1, excursions did not exceed 15 m, which implies that pumping is strongly volume limited. We also present an analysis of upward directed flow in cracks for the case of earthquake induced pore pressure changes. For matrix properties characteristic of the Calico Hills (vitric) formation and a crack distribution bounding the potential flow capacity of published data, we estimate an upper bound of 0.25 m3 of ground water per m of fault length as the amount capable of being pumped to a level 250 m above the normal water table. While the presence of even larger fractures than assumed might carry more ground water to that level, an absolute upper limit of less than 50 m3 per m of fault length is available to be pumped, assuming a value n = 0.46 for the rock porosity. For less porous rocks typical of the Topopah Spring or Tiva Canyon formations (n ≈ 0.10) the upper limit may be reduced to less than 10 m3 per m of fault length. This upper limit depends only upon strain, the height of pumping above the water table and the formation porosity.

  14. Assessing effects of variation in global climate data sets on spatial predictions from climate envelope models

    USGS Publications Warehouse

    Romanach, Stephanie; Watling, James I.; Fletcher, Robert J., Jr.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.

    2014-01-01

    Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
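    The kappa-based comparison mentioned above can be illustrated with a small sketch: compute Cohen's kappa between binary presence/absence predictions obtained from two alternative climate data sets. The predictions below are synthetic stand-ins, not model output from the study.

```python
# Sketch of the agreement check discussed above: Cohen's kappa between binary
# presence/absence predictions produced from two alternative climate data sets
# (synthetic predictions, for illustration only).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
pred_dataset_a = rng.integers(0, 2, size=1000)           # 0 = absent, 1 = present
flip = rng.random(1000) < 0.15                            # 15% disagreement rate
pred_dataset_b = np.where(flip, 1 - pred_dataset_a, pred_dataset_a)

kappa = cohen_kappa_score(pred_dataset_a, pred_dataset_b)
print(f"Cohen's kappa between predictions: {kappa:.2f}")  # roughly 0.70 expected
```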

  15. Two-phase electro-hydrodynamic flow modeling by a conservative level set model.

    PubMed

    Lin, Yuan

    2013-03-01

    The principles of electro-hydrodynamic (EHD) flow have been known for more than a century and have been adopted for various industrial applications, for example, fluid mixing and demixing. Analytical solutions of such EHD flow only exist in a limited number of scenarios, for example, predicting a small deformation of a single droplet in a uniform electric field. Numerical modeling of such phenomena can provide significant insights into EHD multiphase flows. During the last decade, many numerical results have been reported that provide novel and useful tools for studying multiphase EHD flow. Based on a conservative level set method, the proposed model is able to simulate large deformations of a droplet by a steady electric field, which is beyond the regime of theoretical prediction. The model is validated for both leaky dielectrics and perfect dielectrics, and is found to be in excellent agreement with existing analytical solutions and numerical studies in the literature. Furthermore, simulations of the deformation of a water droplet in decyl alcohol in a steady electric field match published experimental data better than the theoretical prediction for large deformations. Therefore, the proposed model can serve as a practical and accurate tool for simulating two-phase EHD flow. PMID:23161380
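    The abstract gives no equations, but for orientation, one widely cited conservative level set formulation (in the spirit of Olsson and co-workers, and not necessarily the exact scheme used in this work) advects a smeared indicator function and re-sharpens it with a conservative re-initialization step:

```latex
% A commonly cited conservative level set formulation, shown for orientation
% only; the smeared indicator phi lies in [0,1] and n-hat is the interface normal.
\begin{align}
  \frac{\partial \phi}{\partial t} + \nabla \cdot (\phi\, \mathbf{u}) &= 0
    && \text{(advection of the smeared indicator } \phi \in [0,1]\text{)} \\
  \frac{\partial \phi}{\partial \tau}
    + \nabla \cdot \big(\phi (1-\phi)\, \hat{\mathbf{n}}\big)
    &= \varepsilon\, \nabla \cdot \big((\nabla \phi \cdot \hat{\mathbf{n}})\, \hat{\mathbf{n}}\big)
    && \text{(conservative re-initialization, } \hat{\mathbf{n}} = \nabla\phi / |\nabla\phi|\text{)}
\end{align}
```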

  16. Teacher Perceptions of the Role and Value of Excursions in Years 7-10 Geography Education in Victoria, Australia

    ERIC Educational Resources Information Center

    Munday, Penny

    2008-01-01

    Excursions are extremely important to the education of students in the geography curriculum. However, personal observations demonstrated a lack of readiness to conduct excursions in secondary schools. The teachers' apprehension at this school about implementing excursions in geography education was the basis for this study. The study addresses…

  17. Normative sciatic nerve excursion during a modified straight leg raise test.

    PubMed

    Ridehalgh, Colette; Moore, Ann; Hough, Alan

    2014-02-01

    Minimal data exists on how much sciatic nerve motion occurs during straight leg raise (SLR). The purpose of this study was to report preliminary normative ranges of sciatic nerve excursion using real time ultrasound during a modified SLR. The sciatic nerve was scanned in the posterior thigh in sixteen asymptomatic participants (age range 19-68 years). Nerve excursion was measured in transverse and longitudinal planes during knee extension from 90° to 0°, with the hip flexed to 30° and 60°. The ultrasound data was analysed off-line using cross correlation software. Results demonstrated that most nerves moved superficially during knee extension, a large proportion (10/16) moved laterally. Longitudinal excursion ranged from 6.4 to 14.7 mm (mean (SD) 9.92 mm (2.2)) in 30° hip flexion, and 5.1-20.2 mm (mean (SD) 12.4 mm (4.4)) in 60° hip flexion. Mean nerve excursion was significantly greater in 60° hip flexion (p = 0.02). There is a large between-subject variation in sciatic nerve excursion during this modified SLR in asymptomatic subjects. Mean nerve excursion was found to be higher with the hip pre-positioned in greater flexion, suggesting that pre-loading may not consistently reduce excursion. PMID:24034944

  18. Two Possible but Unconfirmed Paleomagnetic Excursions in Pleistocene Lacustrine Sediments in North America and Mexico

    NASA Astrophysics Data System (ADS)

    Liddicoat, J. C.

    2011-12-01

    The paleomagnetic literature is replete with reports of investigations of continuous field behavior (secular variation) using volcanic rocks and lacustrine and marine sediments from around the globe, and some of the reports are about paleomagnetic excursions. The Laschamp Excursion in volcanic rocks in the Massif Central of France (Bonhommet and Babkine, 1967) and Mono Lake Excursion in exposed lacustrine sediment in California (Denham and Cox, 1971) are two excursions that are recognized. At other localities and for different time intervals, confirmation of some excursions in sediment has not been successful at nearby sites where the deposits are believed to be the same age. An example is in the Basin of Mexico at Tlapacoya (19.4 N, 261.2 E)(Liddicoat et al., 1979). Another excursion that has not been confirmed but might have occurred is recorded in Pleistocene Lake Bonneville sediments that are exposed in the bank of the Sevier River near Delta, Utah (39.4 N, 247.6 E). In Mexico and Utah, the excursion is in a single, fully-oriented hand sample that was prepared into consecutive, 2-cm-thick horizons. Each horizon contains six samples that were demagnetized in an alternating field, and for each Lake Bonneville horizon the scatter of directions is 4 degrees or less. Several possibilities for why the excursion at Tlapacoya could not be confirmed were presented (Liddicoat et al., 1979), leaving open the possibility that the excursion might have occurred in Mexico about 14,500 years ago. The field behavior in Utah where the sediments are older than those at Tlapacoya by several tens of thousands of years (Oviatt et al., 1994) is nearly identical to the behavior recorded at Tlapacoya. At both localities, a path of the Virtual Geomagnetic Poles during the excursion is confined to a narrow meridional zone centered at about 200 E longitude and that descends to about 15 N latitude. The difficulty of confirming the anomalous field behavior at Tlapacoya and in Utah

  19. Two Possible but Unconfirmed Palaeomagnetic Excursions in Pleistocene Lacustrine Sediments in North America and Mexico

    NASA Astrophysics Data System (ADS)

    Liddicoat, J.; Coe, R.; Oviatt, J.

    2012-04-01

    The palaeomagnetic literature is replete with reports of investigations of continuous field behaviour (secular variation) using volcanic rocks and lacustrine and marine sediments, and some of the reports are about palaeomagnetic excursions. The Laschamp Excursion in volcanic rocks in the Massif Central of France (Bonhommet and Babkine, 1967) and Mono Lake Excursion in exposed lacustrine sediment in California (Denham and Cox, 1971) are two excursions that generally are accepted as having occurred in the late Pleistocene. At other localities and for different time intervals, confirmation of some excursions in sediment has not been successful at nearby sites where the deposits are believed to be the same age. An example is in the Basin of Mexico at Tlapacoya (19.4˚ N, 261.2˚ E) in lacustrine sediments about 14,500 years old (Liddicoat et al., 1979). Another excursion that has not been confirmed but might have occurred is recorded in Lake Bonneville sediments that are exposed in the bank of the Sevier River near Delta, Utah (39.4˚ N, 247.6˚ E). In Mexico and Utah, the excursions are in a single, fully-oriented hand sample that was prepared into consecutive, 2-cm-thick horizons, each consisting of six subsamples. The subsamples were demagnetized in an alternating field to at least 60 mT or when possible because of consolidation, thermally demagnetized to 600˚ C; for each Lake Bonneville horizon, the scatter of palaeomagnetic directions is 4˚ or less. Several possibilities for why the excursion at Tlapacoya could not be confirmed were presented (Liddicoat et al., 1979), leaving open the possibility that the excursion might have occurred in Mexico. The field behaviour in Utah where the sediments are older than those at Tlapacoya by several tens of thousands of years (Oviatt et al., 1994) is nearly identical to the behaviour recorded at Tlapacoya. At both localities, a path of the Virtual Geomagnetic Poles during the excursion is confined to a narrow meridional zone

  20. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance

    PubMed Central

    Raymond, G. M.; Bassingthwaighte, J. B.

    2016-01-01

    This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a “consilience” of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave a reliable estimate of the Michaelis constant only for the medium level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
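    A minimal sketch of the pooled Michaelis-Menten fit (model 2 in the numbering above), using synthetic concentration-time data in place of the clinical data sets: one pair of parameters (Vmax, Km) is estimated jointly from high-, medium- and low-concentration curves.

```python
# Minimal sketch of a pooled Michaelis-Menten elimination fit (the abstract's
# model 2) across several data sets; the data below are synthetic placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def mm_curve(t, c0, vmax, km):
    """Concentration-time curve for dC/dt = -Vmax*C/(Km + C)."""
    sol = solve_ivp(lambda _, c: -vmax * c / (km + c), (t[0], t[-1]), [c0],
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

# Three hypothetical "data sets" at high, medium and low starting concentrations (mg/L).
t = np.linspace(0, 24, 13)
true = dict(vmax=30.0, km=18.0)
rng = np.random.default_rng(3)
datasets = []
for c0 in (400.0, 60.0, 8.0):
    obs = mm_curve(t, c0, **true) * (1 + 0.05 * rng.standard_normal(t.size))
    datasets.append((c0, obs))

def residuals(p):
    vmax, km = p
    return np.concatenate([mm_curve(t, c0, vmax, km) - obs for c0, obs in datasets])

fit = least_squares(residuals, x0=[10.0, 10.0], bounds=([0, 0], [np.inf, np.inf]))
print("fitted Vmax, Km:", fit.x.round(1))   # one joint fit across all three data sets
```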

  1. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    EPA Science Inventory

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  2. Applicability domains for classification problems: benchmarking of distance to models for AMES mutagenicity set

    EPA Science Inventory

    For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...

  3. Playing with a Concept: Teaching Job Characteristics Model with a Tinkertoy[R] Builder Set

    ERIC Educational Resources Information Center

    Smrt, Diana Lazarova; Nelson, Reed Elliot

    2013-01-01

    Using a toy construction set, we introduce to students the job characteristics model in a fun and engaging way. The activity not only describes the theoretical variables of the model but also allows students to (a) experience the dynamic interaction among these variables and (b) gain a better, hands-on understanding of the model. The exercise…

  4. A Model Process for Institutional Goals-Setting. A Module of the Needs Assessment Project.

    ERIC Educational Resources Information Center

    King, Maxwell C.; And Others

    A goals-setting model for the community/junior college that would interface with the community needs assessment model was developed, using as the survey instrument the Institutional Goals Inventory (I.G.I.) developed by the Educational Testing Service. The nine steps in the model are: Establish Committee on College Goals and Identify Goals Project…

  5. Dating the Laschamp Excursion: Why Speleothems are Valuable Tools for Constraining the Timing and Duration of Short-Lived Geomagnetic Events

    NASA Astrophysics Data System (ADS)

    Lascu, I.; Feinberg, J. M.; Dorale, J. A.; Cheng, H.; Edwards, R. L.

    2015-12-01

    Short-lived geomagnetic events are reflections of geodynamo behavior at small length scales. A rigorous documentation of the anatomy, timing, duration, and frequency of centennial-to-millennial scale geomagnetic events can be invaluable for theoretical and numerical geodynamo models, and for understanding the finer dynamics of the Earth's core. A critical ingredient for characterizing such geomagnetic instabilities is tightly constrained age models that enable high-resolution magnetostratigraphies. Here we focus on a North American speleothem geomagnetic record of the Laschamp excursion, which was the first geomagnetic excursion recognized and described in the paleomagnetic record, and remains the most studied event of its kind. The geological significance of the Laschamp lies chiefly in the fact that it constitutes a global time-synchronous geochronological marker. The Laschamp excursion occurred around the time of the demise of Homo neanderthalensis, in conjunction with high-amplitude, rapid climatic oscillations leading into the Last Glacial Maximum, and precedes a major supervolcano eruption in the Mediterranean. Thus, the precise determination of the timing and duration of the Laschamp would help in elucidating major scientific questions situated at the intersection of geology, paleoclimatology, and anthropology. Here we present a geomagnetic record from a stalagmite collected in Crevice Cave, Missouri, which we have dated using a combination of high-precision 230Th ages and annual layer counting using confocal microscopy. We have found a maximum duration for the Laschamp that spans the interval 42,250-39,700 years BP, and an age of 41,100 ± 350 years BP for the height of the excursion. During this period relative paleointensity decreased by an order of magnitude and the virtual geomagnetic pole was located at southerly latitudes. Our chronology provides the first robust bracketing for the Laschamp excursion, and improves on previous age determinations

  6. Be a Healthy Role Model for Children: 10 Tips for Setting Good Examples

    MedlinePlus

    … Nutrition Education Series: Be a healthy role model for children, 10 tips for setting good examples. You are the most important influence on your child. You can do many things to help your children develop healthy eating habits. …

  7. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    PubMed

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. For this purpose, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures. PMID:26221669

  8. A global deglacial negative carbon isotope excursion in speleothem calcite

    NASA Astrophysics Data System (ADS)

    Breecker, D.

    2015-12-01

    δ13C values of speleothem calcite decreased globally during the last deglaciation defining a carbon isotope excursion (CIE) despite relatively constant δ13C values of carbon in the ocean-atmosphere system. The magnitude of the CIE varied with latitude, increasing poleward from ~2‰ in the tropics to as much as 7‰ at high latitudes. This recent CIE provides an interesting comparison with CIEs observed in deep time. A substantial portion of this CIE can be explained by the increase in atmospheric pCO2 that accompanied deglaciation. The dependence of C3 plant δ13C values on atmospheric pCO2 predicts a 2‰ δ13C decrease driven by the deglacial pCO2 increase. I propose that this signal was transferred to caves and thus explains nearly 100% of the CIE magnitude observed in the tropics and no less than 30% at the highest latitudes in the compilation. An atmospheric pCO2 control on speleothem δ13C values, if real, will need to be corrected for using ice core data before δ13C records can be interpreted in a paleoclimate context. The decrease in the magnitude of the equilibrium calcite-CO2 carbon isotope fractionation factor explains a maximum of 1‰ of the CIE at the highest northern latitude in the compilation, which experienced the largest deglacial warming. Much of the residual extratropical CIE was likely driven by increasing belowground respiration rates, which were presumably pronounced at high latitudes as glacial retreat exposed fresh surfaces and/or vegetation density increased. The largest increases in belowground respiration would have therefore occurred at the highest latitudes, explaining the meridional trend. This work supports the notion that increases in atmospheric pCO2 and belowground respiration rates can result in large CIEs recorded in terrestrial carbonates, which, as previously suggested, may explain the magnitude of the PETM CIE as recorded by paleosol carbonates.

  9. Robust autonomous model learning from 2D and 3D data sets.

    PubMed

    Langs, Georg; Donner, René; Peloschek, Philipp; Bischof, Horst

    2007-01-01

    In this paper we propose a weakly supervised learning algorithm for appearance models based on the minimum description length (MDL) principle. From a set of training images or volumes depicting examples of an anatomical structure, correspondences for a set of landmarks are established by group-wise registration. The approach does not require any annotation. In contrast to existing methods no assumptions about the topology of the data are made, and the topology can change throughout the data set. Instead of a continuous representation of the volumes or images, only sparse finite sets of interest points are used to represent the examples during optimization. This enables the algorithm to efficiently use distinctive points, and to handle texture variations robustly. In contrast to standard elasticity based deformation constraints the MDL criterion accounts for systematic deformations typical for training sets stemming from medical image data. Experimental results are reported for five different 2D and 3D data sets. PMID:18051152

  10. A Science and Technology Excursion-based Unit of Work: The Human Body.

    ERIC Educational Resources Information Center

    Terry, Laura

    2000-01-01

    Presents a unit of work based on a few systems of the human body. Stretches students' learning beyond the classroom into the local community by going on an excursion to Kalgoorlie Regional Hospital. (ASK)

  11. Developing a Suitable Model for Water Uptake for Biodegradable Polymers Using Small Training Sets

    PubMed Central

    Valenzuela, Loreto M.; Knight, Doyle D.; Kohn, Joachim

    2016-01-01

    Prediction of the dynamic properties of water uptake across polymer libraries can accelerate polymer selection for a specific application. We first built semiempirical models using Artificial Neural Networks with all water uptake data as individual inputs. These models give very good correlations (R2 > 0.78 for the test set) but very low accuracy on cross-validation sets (less than 19% of experimental points within experimental error). Instead, using consolidated parameters like equilibrium water uptake, a good model is obtained (R2 = 0.78 for the test set), with accurate predictions for 50% of tested polymers. The semiempirical model was applied to the 56-polymer library of L-tyrosine-derived polyarylates, identifying groups of polymers that are likely to satisfy design criteria for water uptake. This research demonstrates that a surrogate modeling effort can reduce the number of polymers that must be synthesized and characterized to identify an appropriate polymer that meets certain performance criteria. PMID:27200091
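    The small-training-set issue can be illustrated with a sketch of leave-one-out cross-validation for a small neural-network surrogate on a tiny synthetic descriptor set; the descriptors, target and network size are assumptions, not the authors' polymer library or architecture.

```python
# Sketch of cross-validating a small neural-network surrogate on a tiny
# synthetic descriptor data set (not the authors' polymer data or network).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n_polymers = 30
X = rng.uniform(0, 1, size=(n_polymers, 3))         # hypothetical descriptors
# Hypothetical "equilibrium water uptake" target with mild noise.
y = 20 * X[:, 0] - 5 * X[:, 1] + 3 * X[:, 2] ** 2 + rng.normal(0, 1, n_polymers)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
y_cv = cross_val_predict(model, X, y, cv=LeaveOneOut())
print(f"leave-one-out R^2: {r2_score(y, y_cv):.2f}")
```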

  12. Late Ordovician (Turinian-Chatfieldian) carbon isotope excursions and their stratigraphic and paleoceanographic significance

    USGS Publications Warehouse

    Ludvigson, Greg A.; Witzke, B.J.; Gonzalez, Luis A.; Carpenter, S.J.; Schneider, C.L.; Hasiuk, F.

    2004-01-01

    Five positive carbon isotope excursions are reported from Platteville-Decorah strata in the Upper Mississippi Valley. All occur in subtidal carbonate strata, and are recognized in the Mifflin, Grand Detour, Quimbys Mill, Spechts Ferry, and Guttenberg intervals. The positive carbon isotope excursions are developed in a Platteville-Decorah succession in which background δ13C values increase upward from about −2‰ at the base to about 0‰ Vienna Pee Dee belemnite (VPDB) at the top. A regional north-south δ13C gradient, with lighter values to the north and heavier values to the south, is also noted. Peak excursion δ13C values of up to +2.75 are reported from the Quimbys Mill excursion, and up to +2.6 from the Guttenberg excursion, although there are considerable local changes in the magnitudes of these events. The Quimbys Mill, Spechts Ferry, and Guttenberg carbon isotope excursions occur in units that are bounded by submarine disconformities, and completely starve out in deeper, more offshore areas. Closely spaced chemostratigraphic profiles of these sculpted, pyrite-impregnated hardground surfaces show that they are associated with very abrupt centimeter-scale negative δ13C shifts of up to several per mil, possibly resulting from the local diagenetic effects of incursions of euxinic bottom waters during marine flooding events. © 2004 Elsevier B.V. All rights reserved.

  13. Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments

    NASA Astrophysics Data System (ADS)

    Lane, Peter C. R.; Gobet, Fernand

    2013-03-01

    Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
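    At the core of such algorithms is non-dominated (Pareto) filtering of candidate parameter sets across objectives. A minimal sketch with synthetic fitness values (not the speciated sorting genetic algorithm itself):

```python
# Sketch of non-dominated (Pareto) filtering: keep the parameter sets whose fit
# to data set A cannot be improved without worsening the fit to data set B
# (synthetic error values; not the full speciated NSGA described in the article).
import numpy as np

rng = np.random.default_rng(5)
# Columns: model error on experiment A, model error on experiment B (lower = better).
errors = rng.uniform(0, 1, size=(100, 2))

def non_dominated(points):
    """Indices of points not dominated by any other point (minimization)."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = non_dominated(errors)
print(f"{front.size} non-dominated parameter sets out of {errors.shape[0]}")
print(errors[front][np.argsort(errors[front][:, 0])])   # the Pareto front, sorted
```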

  14. Using an Agenda Setting Model to Help Students Develop & Exercise Participatory Skills and Values

    ERIC Educational Resources Information Center

    Perry, Anthony D.; Wilkenfeld, Britt S.

    2006-01-01

    The Agenda Setting Model is a program component that can be used in courses to contribute to students' development as responsible, effective, and informed citizens. This model involves students in finding a unified voice to assert an agenda of issues that they find especially pressing. This is often the only time students experience such a…

  15. Addressing HIV in the School Setting: Application of a School Change Model

    ERIC Educational Resources Information Center

    Walsh, Audra St. John; Chenneville, Tiffany

    2013-01-01

    This paper describes best practices for responding to youth with human immunodeficiency virus (HIV) in the school setting through the application of a school change model designed by the World Health Organization. This model applies a whole school approach and includes four levels that span the continuum from universal prevention to direct…

  16. An Example of the Use of Fuzzy Set Concepts in Modeling Learning Disability.

    ERIC Educational Resources Information Center

    Horvath, Michael J.; And Others

    1980-01-01

    The way a particular clinician judges, from data, the degree to which a child is in the category "learning disabled" was modeled on the basis of clinician's statement of the traits that comprise the handicap. The model illustrates the use of fuzzy set theory. (Author/RL)

  17. Information Content in Data Sets for a Nucleated-Polymerization Model

    PubMed Central

    Banks, H.T.; Doumic, Marie; Kruse, Carola; Prigent, Stephanie; Rezaei, Human

    2015-01-01

    We illustrate the use of statistical tools (asymptotic theories of standard error quantification using appropriate statistical models, bootstrapping, model comparison techniques) in addition to sensitivity analysis that may be employed to determine the information content in data sets. We do this in the context of recent models [25] for nucleated polymerization in proteins, about which very little is known regarding the underlying mechanisms; thus the methodology we develop here may be of great help to experimentalists. We conclude that the investigated data sets will support with reasonable levels of uncertainty only the estimation of the parameters related to the early steps of the aggregation process. PMID:26046598
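    The bootstrap step mentioned above can be sketched on a toy problem: resample the data with replacement, refit, and take the spread of the refitted parameter as its standard error. The exponential-decay model and data below are placeholders, not the nucleated-polymerization model.

```python
# Sketch of bootstrap standard-error quantification on a toy exponential-decay
# fit (placeholder model and data, not the nucleated-polymerization model).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)
t = np.linspace(0, 10, 40)
y = 2.0 * np.exp(-0.5 * t) + rng.normal(0, 0.05, t.size)

model = lambda t, a, k: a * np.exp(-k * t)
boot_k = []
for _ in range(500):
    idx = rng.integers(0, t.size, t.size)            # resample with replacement
    (a_hat, k_hat), _ = curve_fit(model, t[idx], y[idx], p0=[1.0, 1.0])
    boot_k.append(k_hat)

print(f"rate constant k = {np.mean(boot_k):.3f} +/- {np.std(boot_k):.3f} (bootstrap SE)")
```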

  18. Large field excursions and approximate discrete symmetries from a clockwork axion

    NASA Astrophysics Data System (ADS)

    Kaplan, David E.; Rattazzi, Riccardo

    2016-04-01

    We present a renormalizable theory of scalars in which the low-energy effective theory contains a pseudo-Goldstone boson with a compact field space of 2πF and an approximate discrete shift symmetry Z_Q with Q ≫ 1, yet the number of fields in the theory goes as log Q. Such a model can serve as a UV completion to models of relaxions and is a new source of exponential scale separation in field theory. While the model is local in "theory space," it appears not to have a continuum generalization (i.e., it cannot be a deconstructed extra dimension). Our framework shows that super-Planckian field excursions can be mimicked while sticking to renormalizable four-dimensional quantum field theory. We show that a supersymmetric extension is straightforwardly obtained, and we illustrate possible UV completions based on a compact extra dimension, where all global symmetries arise accidentally as a consequence of gauge invariance and five-dimensional locality.

  19. Enabling Interoperation of High Performance, Scientific Computing Applications: Modeling Scientific Data with the Sets & Fields (SAF) Modeling System

    SciTech Connect

    Miller, M C; Reus, J F; Matzke, R P; Arrighi, W J; Schoof, L A; Hitt, R T; Espen, P K; Butler, D M

    2001-02-07

    This paper describes the Sets and Fields (SAF) scientific data modeling system. It is a revolutionary approach to interoperation of high performance, scientific computing applications based upon rigorous, math-oriented data modeling principles. Previous technologies have required all applications to use the same data structures and/or meshes to represent scientific data, or have led to an ever-expanding set of incrementally different data structures and/or meshes. SAF addresses this problem by providing a small set of mathematical building blocks--sets, relations and fields--out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. A short historical perspective, a conceptual model and an overview of SAF along with preliminary results from its use in a few ASCI codes are discussed.

  20. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses.

    PubMed

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is a difficult task for medical experts, who are confronted with large amounts of uncertain medical information. Moreover, different medical experts may express views on the medical knowledge base that differ slightly from those of other experts. Thus, to address the problems of uncertain data analysis and group decision making in disease diagnoses, we propose a new rough set model, called the dual hesitant fuzzy multigranulation rough set over two universes, by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach that is applied to a decision making problem in disease diagnoses, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772

  1. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses

    PubMed Central

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is a difficult task for medical experts, who are confronted with large amounts of uncertain medical information. Moreover, different medical experts may express views on the medical knowledge base that differ slightly from those of other experts. Thus, to address the problems of uncertain data analysis and group decision making in disease diagnoses, we propose a new rough set model, called the dual hesitant fuzzy multigranulation rough set over two universes, by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach that is applied to a decision making problem in disease diagnoses, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772

  2. Variational and Shape Prior-based Level Set Model for Image Segmentation

    SciTech Connect

    Diop, El Hadji S.; Jerbi, Taha; Burdin, Valerie

    2010-09-30

    A new image segmentation model based on the level sets approach is presented herein. We deal with radiographic medical images where boundaries are not salient, and objects of interest have the same gray level as other structures in the image. Thus, a priori information about the shape we look for is integrated into the level set evolution for good segmentation results. The proposed model also includes a penalization term that forces the level set to be close to a signed distance function (SDF), which avoids the re-initialization procedure. In addition, a variant and complete Mumford-Shah model is used in our functional; the added Hausdorff measure helps to better handle zones where boundaries are occluded or not salient. Finally, a weighted area term is added to the functional to make the level set drive rapidly to the object's boundaries. The segmentation model is formulated in a variational framework, which, thanks to calculus of variations, yields partial differential equations (PDEs) to guide the level set evolution. Results obtained on both synthetic images and digitally reconstructed radiographs (DRRs) show that the proposed model improves on existing prior and non-prior shape based image segmentation.

  3. Test data sets for calibration of stochastic and fractional stochastic volatility models.

    PubMed

    Pospíšil, Jan; Sobotka, Tomáš

    2016-09-01

    Data for calibration and out-of-sample error testing of option pricing models are provided alongside data obtained from optimization procedures in "On calibration of stochastic and fractional stochastic volatility models" [1]. First we describe the testing data sets; the calibration data obtained from combined optimizers are then depicted visually as interactive 3D bar plots. The data are suitable for further comparison of other optimization routines and for benchmarking different pricing models. PMID:27419200

  4. The value of multiple data set calibration versus model complexity for improving the performance of hydrological models in mountain catchments

    NASA Astrophysics Data System (ADS)

    Finger, David; Vis, Marc; Huss, Matthias; Seibert, Jan

    2015-04-01

    The assessment of snow, glacier, and rainfall runoff contribution to discharge in mountain streams is of major importance for an adequate water resource management. Such contributions can be estimated via hydrological models, provided that the modeling adequately accounts for snow and glacier melt, as well as rainfall runoff. We present a multiple data set calibration approach to estimate runoff composition using hydrological models with three levels of complexity. For this purpose, the code of the conceptual runoff model HBV-light was enhanced to allow calibration and validation of simulations against glacier mass balances, satellite-derived snow cover area and measured discharge. Three levels of complexity of the model were applied to glacierized catchments in Switzerland, ranging from 39 to 103 km2. The results indicate that all three observational data sets are reproduced adequately by the model, allowing an accurate estimation of the runoff composition in the three mountain streams. However, calibration against only runoff leads to unrealistic snow and glacier melt rates. Based on these results, we recommend using all three observational data sets in order to constrain model parameters and compute snow, glacier, and rain contributions. Finally, based on the comparison of model performance of different complexities, we postulate that the availability and use of different data sets to calibrate hydrological models might be more important than model complexity to achieve realistic estimations of runoff composition.
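    A sketch of how the three observational constraints might be combined into a single calibration score for one candidate parameter set; the weights and efficiency measures are assumptions for illustration, not necessarily those used with HBV-light in the study.

```python
# Sketch of a multi-data-set calibration objective: weight efficiency measures
# for discharge, snow cover area and glacier mass balance into one score.
# Weights and measures below are assumptions, not the paper's exact objective.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 = perfect)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(sim_q, obs_q, sim_sca, obs_sca, sim_mb, obs_mb,
                       weights=(0.5, 0.25, 0.25)):
    """Weighted multi-data-set score for one candidate parameter set."""
    score_q = nse(sim_q, obs_q)                              # discharge
    score_sca = 1.0 - np.mean(np.abs(sim_sca - obs_sca))     # snow cover fraction
    score_mb = 1.0 - abs(sim_mb - obs_mb) / 1000.0           # mass balance (mm w.e.)
    w_q, w_s, w_m = weights
    return w_q * score_q + w_s * score_sca + w_m * score_mb

# Hypothetical model output vs. observations for one parameter set:
rng = np.random.default_rng(2)
obs_q = rng.gamma(2.0, 5.0, 365)
sim_q = obs_q * (1 + 0.1 * rng.standard_normal(365))
obs_sca = rng.uniform(0, 1, 20)
sim_sca = np.clip(obs_sca + 0.05, 0, 1)
print("combined score:",
      round(combined_objective(sim_q, obs_q, sim_sca, obs_sca,
                               sim_mb=-350.0, obs_mb=-400.0), 3))
```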

  5. Solving the set cover problem and the problem of exact cover by 3-sets in the Adleman-Lipton model.

    PubMed

    Chang, Weng Long; Guo, Minyi

    2003-12-01

    Adleman wrote the first paper showing that deoxyribonucleic acid (DNA) strands could be employed to calculate solutions to an instance of the NP-complete Hamiltonian path problem (HPP). Lipton subsequently demonstrated that Adleman's techniques could be used to solve the NP-complete satisfiability (SAT) problem (the first NP-complete problem). In this paper, it is shown how the DNA operations presented by Adleman and Lipton can be used to develop DNA algorithms that solve the set cover problem and the problem of exact cover by 3-sets. PMID:14643494
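    For orientation, a conventional in-silico brute-force solver for exact cover by 3-sets is sketched below; it enumerates candidate sub-collections and filters them, mirroring the generate-and-test character of Adleman-Lipton style DNA computation (which performs the enumeration in massive molecular parallelism rather than sequentially).

```python
# Brute-force solver for exact cover by 3-sets, shown only as a conventional
# analogue of the generate-and-filter strategy used in DNA computation.
from itertools import combinations

def exact_cover_by_3_sets(universe, triples):
    """Return one sub-collection of triples that partitions the universe, or None."""
    if len(universe) % 3 != 0:
        return None
    k = len(universe) // 3
    for candidate in combinations(triples, k):
        covered = set().union(*candidate)
        # Disjoint and covering: total size equals |universe| and union matches it.
        if covered == universe and sum(len(t) for t in candidate) == len(universe):
            return candidate
    return None

universe = frozenset(range(1, 10))                # {1, ..., 9}
triples = [frozenset(t) for t in
           [(1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 4, 7), (2, 5, 8), (2, 3, 6)]]
print(exact_cover_by_3_sets(universe, triples))
```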

  6. Adaptive dynamic networks as models for the immune system and autocatalytic sets

    SciTech Connect

    Farmer, J.D.; Kauffman, S.A.; Packard, N.H.; Perelson, A.S.

    1986-04-01

    A general class of network models is described that can be used to represent complex adaptive systems. These models have two purposes: on a practical level, they are closely based on real biological phenomena and are intended to model detailed aspects of them; on a more general level, they provide a framework to address broader questions concerning evolution, pattern recognition, and other properties of living systems. This paper concentrates on the more general level, illustrating the basic concepts with two examples, a model of the immune system and a model for the spontaneous emergence of autocatalytic sets in a chemically reactive polymer soup. 10 refs., 3 figs.

  7. THE DEVELOPMENT AND USE OF A MODEL TO PREDICT SUSTAINABILITY OF CHANGE IN HEALTH CARE SETTINGS

    PubMed Central

    2011-01-01

    Innovations adopted through organizational change initiatives are often not sustained, leading to diminished quality, productivity, and consumer satisfaction. Research explaining variance in the use of adopted innovations in health care settings is sparse, suggesting the need for a theoretical model to guide research and practice. In this article, we describe the development of a hybrid conjoint decision theoretic model designed to predict the sustainability of organizational change in health care settings. An initial test of the model’s predictive validity using expert-scored hypothetical profiles resulted in an r-squared value of 0.77. The test of this model offers a theoretical base for future research on the sustainability of change in health care settings. PMID:22262947

  8. Using open building data in the development of exposure data sets for catastrophe risk modelling

    NASA Astrophysics Data System (ADS)

    Figueiredo, R.; Martina, M.

    2016-02-01

    One of the necessary components to perform catastrophe risk modelling is information on the buildings at risk, such as their spatial location, geometry, height, occupancy type and other characteristics. This is commonly referred to as the exposure model or data set. When modelling large areas, developing exposure data sets with the relevant information about every individual building is not practicable. Thus, census data at coarse spatial resolutions are often used as the starting point for the creation of such data sets, after which disaggregation to finer resolutions is carried out using different methods, based on proxies such as the population distribution. While these methods can produce acceptable results, they cannot be considered ideal. Nowadays, the availability of open data is increasing and it is possible to obtain information about buildings for some regions. Although this type of information is usually limited and, therefore, insufficient to generate an exposure data set, it can still be very useful in its elaboration. In this paper, we focus on how open building data can be used to develop a gridded exposure model by disaggregating existing census data at coarser resolutions. Furthermore, we analyse how the selection of the level of spatial resolution can impact the accuracy and precision of the model, and compare the results in terms of affected residential building areas, due to a flood event, between different models.
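    The disaggregation idea can be sketched in a few lines: distribute a census-unit total over grid cells in proportion to the openly mapped building footprint area found in each cell. The numbers below are toy values, not data from the paper.

```python
# Sketch of building-weighted disaggregation of a census total to grid cells
# (toy values only; in practice footprint areas come from open building data).
import numpy as np

census_total_res_area = 120_000.0        # m2 of residential building area in one census unit
# Building footprint area per grid cell inside that unit, e.g. from OpenStreetMap:
footprint_area = np.array([0.0, 850.0, 4200.0, 300.0, 2650.0, 0.0])

weights = footprint_area / footprint_area.sum()
gridded_exposure = census_total_res_area * weights
print(gridded_exposure.round(0))          # m2 assigned to each grid cell
print("check total:", gridded_exposure.sum())
```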

  9. Low-shrinkage dental restorative composite: modeling viscoelastic behavior during setting.

    PubMed

    Dauvillier, Bibi S; Feilzer, Albert J

    2005-04-01

    Much attention has been directed toward developing dental direct restorative composites that generate less shrinkage stress during setting. The aim of this study was to explore the viscoelastic behavior of a new class of low-shrinkage dental restorative composite during setting. The setting behavior of an experimental oxirane composite has been investigated by analyzing stress-strain data with two-parameter mechanical models. Experimental data were obtained from a dynamic test method, in which the setting light-activated composite was continuously subjected to sinusoidal strain cycles. The material parameters and the model's predictive capacity were analyzed with validated modeling procedures. The light-activated oxirane composite exhibited shrinkage delay and low polymerization shrinkage strain and stresses when compared with a conventional light-activated composite. Noise in the stress data restricted the predictive ability of the Maxwell model to the elastic modulus development of the composite only. Evaluation tests are required to examine whether the biocompatibility and post-setting mechanical properties of oxirane composites are acceptable for dental use. PMID:15685614
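    The abstract names the Maxwell model but gives no equations; for reference, the standard constitutive relation of a two-parameter Maxwell element (a spring of modulus E in series with a dashpot of viscosity η) is:

```latex
% Standard two-parameter Maxwell element (spring E in series with dashpot eta);
% shown for reference only, since the abstract gives no equations.
\begin{equation}
  \dot{\varepsilon}(t) \;=\; \frac{\dot{\sigma}(t)}{E} \;+\; \frac{\sigma(t)}{\eta},
  \qquad \text{with relaxation time } \tau = \frac{\eta}{E}.
\end{equation}
```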

  10. Neural model for learning-to-learn of novel task sets in the motor domain

    PubMed Central

    Pitti, Alexandre; Braud, Raphaël; Mahé, Sylvain; Quoy, Mathias; Gaussier, Philippe

    2013-01-01

    During development, infants learn to differentiate their motor behaviors relative to various contexts by exploring and identifying the correct structures of causes and effects that they can perform; these structures of actions are called task sets or internal models. The ability to detect the structure of new actions, to learn them, and to select on the fly the proper one given the current task set is a great leap in infant cognition. This behavior is an important component of the child's capacity for learning-to-learn, a mechanism akin to intrinsic motivation that is argued to drive cognitive development. Accordingly, we propose to model a dual system based on (1) the learning of new task sets and (2) their evaluation relative to their uncertainty and prediction error. The architecture is designed as a two-level neural system for context-dependent behavior (the first system) and task exploration and exploitation (the second system). In our model, the task sets are learned separately by reinforcement learning in the first network after their evaluation and selection in the second one. We evaluate the model in two different experimental setups to show sensorimotor mapping and switching between tasks: first in a neural simulation for modeling cognitive tasks, and second with an arm robot for motor task learning and switching. We show that the interplay of several intrinsic mechanisms drives the rapid formation of neural populations with respect to novel task sets. PMID:24155736

  11. Adapting Existing Spatial Data Sets to New Uses: An Example from Energy Modeling

    SciTech Connect

    Johanesson, G; Stewart, J S; Barr, C; Sabeff, L B; George, R; Heimiller, D; Milbrandt, A

    2006-06-23

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, and economic projections. These data are available at various spatial and temporal scales, which may be different from those needed by the energy modeling community. If the translation from the original format to the format required by the energy researcher is incorrect, then resulting models can produce misleading conclusions. This is of increasing importance, because of the fine resolution data required by models for new alternative energy sources such as wind and distributed generation. This paper addresses the matter by applying spatial statistical techniques which improve the usefulness of spatial data sets (maps) that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) imputing missing data and (3) merging spatial data sets.

  12. Engaging students in research learning experiences through hydrology field excursions and projects

    NASA Astrophysics Data System (ADS)

    Ewen, T.; Seibert, J.

    2014-12-01

    One of the best ways to engage students and instill enthusiasm for hydrology is to expose them to hands-on learning. A focus on hydrology field research can be used to develop context-rich and active learning, and help solidify idealized learning where students are introduced to individual processes through textbook examples, often neglecting process interactions and an appreciation for the complexity of the system. We introduced a field course where hydrological measurement techniques are used to study processes such as snow hydrology and runoff generation, while also introducing students to field research and design of their own field project. In the field projects, students design a low-budget experiment with the aim of going through the different steps of a 'real' scientific project, from formulating the research question to presenting their results. In one of the field excursions, students make discharge measurements in several alpine streams with a salt tracer to better understand the spatial characteristics of an alpine catchment, where source waters originate and how they contribute to runoff generation. Soil moisture measurements taken by students in this field excursion were used to analyze spatial soil moisture patterns in the alpine catchment and subsequently used in a publication. Another field excursion repeats a published experiment, where preferential soil flow paths are studied using a tracer and compared to previously collected data. For each field excursion, observational data collected by the students is uploaded to an online database we developed, which also allows students to retrieve data from past excursions to further analyze and compare their data. At each of the field sites, weather stations were installed and a webviewer allows access to realtime data from data loggers, allowing students to explore how processes relate to climatic conditions. With in-house film expertise, these field excursions were also filmed and short virtual
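    The salt-tracer discharge measurements described above typically rest on the slug-injection dilution-gauging relation (assuming complete mixing of the injected mass M before the measurement point):

```latex
% Slug-injection salt-dilution gauging relation for estimating stream discharge Q
% from an injected tracer mass M, assuming complete mixing; c(t) is the measured
% concentration at the downstream site and c_0 the background concentration.
\begin{equation}
  Q \;=\; \frac{M}{\displaystyle\int_{0}^{\infty} \big(c(t) - c_{0}\big)\, \mathrm{d}t}
\end{equation}
```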

  13. Influence of turbulence model parameter settings on conjugate heat transfer simulation

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Li, Yu; Zou, Zhengping; Wang, Lei; Song, Songhe

    2014-04-01

    The rationality of turbulence model parameter settings is an important factor affecting the accuracy of conjugate heat transfer (CHT) prediction. On the basis of a developed CHT methodology and the experimental data for the Mark II cooled turbine blade, the influences of turbulence model parameter settings and of the choice of turbulence model on CHT simulation are investigated. Results and comparisons with experimental data indicate that the inlet setting of the working variable in the Spalart-Allmaras model has nearly no influence on flow and heat transfer at the blade surface. The inlet turbulence length scale l_T in the low-Reynolds-number Chien k-ε turbulence model and the blade surface roughness in the shear stress transport (SST) k-ω model have relatively obvious effects on the blade surface temperature, which increases as they increase. Both the laminar Prandtl number and the turbulent Prandtl number have only slight influences on the prediction, and they need only be held constant in CHT simulation. The k-ω SST model has the best accuracy in the turbine blade CHT simulation compared with the other two models.

  14. Development of optimization models for the set behavior and compressive strength of sodium activated geopolymer pastes

    NASA Astrophysics Data System (ADS)

    Fillenwarth, Brian Albert

    As large countries such as China begin to industrialize and concerns about global warming continue to grow, there is an increasing need for more environmentally friendly building materials. One promising material known as a geopolymer can be used as a portland cement replacement and in this capacity emits around 67% less carbon dioxide. In addition to potentially reducing carbon emissions, geopolymers can be synthesized with many industrial waste products such as fly ash. Although the benefits of geopolymers are substantial, there are a few difficulties with designing geopolymer mixes which have hindered widespread commercialization of the material. One such difficulty is the high variability of the materials used for their synthesis. In addition to this, interrelationships between mix design variables and how these interrelationships impact the set behavior and compressive strength are not well understood. A third complicating factor with designing geopolymer mixes is that the role of calcium in these systems is not well understood. In order to overcome these barriers, this study developed predictive optimization models through the use of genetic programming with experimentally collected set times and compressive strengths of several geopolymer paste mixes. The developed set behavior models were shown to predict the correct set behavior from the mix design over 85% of the time. The strength optimization model was shown to be capable of predicting compressive strengths of geopolymer pastes from their mix design to within about 1 ksi of their actual strength. In addition to this the optimization models give valuable insight into the key factors influencing strength development as well as the key factors responsible for flash set and long set behaviors in geopolymer pastes. A method for designing geopolymer paste mixes was developed from the generated optimization models. This design method provides an invaluable tool for use in future geopolymer research as well as

  15. The identifiability analysis for setting up measuring campaigns in integrated water quality modelling

    NASA Astrophysics Data System (ADS)

    Freni, Gabriele; Mannina, Giorgio

    Identifiability analysis enables the quantification of the number of model parameters that can be assessed by calibration with respect to a data set. Such a methodology is based on the appraisal of sensitivity coefficients of the model parameters by means of Monte Carlo runs. By employing the Fisher Information Matrix, the methodology enables one to gain insight into the number of model parameters that can be reliably assessed. The paper presents a study where identifiability analysis is used as a tool for setting up measuring campaigns for integrated water quality modelling. In particular, by means of the identifiability analysis, information was gained about the location and number of monitoring stations required in the integrated system for assessing a specific group of model parameters. The analysis has been applied to a real, partially urbanised catchment containing two sewer systems, two wastewater treatment plants and a river. Several scenarios of measuring campaigns have been considered; each scenario was characterised by different monitoring station locations for the gathering of quantity and quality data. The results enabled us to assess the maximum number of model parameters quantifiable for each scenario, i.e. for each data set. The methodology proved to be a powerful tool for designing measuring campaigns for integrated water quality modelling. Indeed, the crucial cross sections throughout the integrated wastewater system were detected, optimizing both human and economic efforts in the gathering of field data. Further, a connection between the data set and the number of model parameters effectively assessable has been established, leading to much more reliable model results.
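    As a hedged sketch of the core idea (not the authors' implementation): with Gaussian measurement errors, the Fisher Information Matrix can be built from a sensitivity matrix of model outputs with respect to parameters, and its numerical rank indicates how many parameter combinations a given data set (i.e. a given monitoring-station layout) can support. The sensitivity matrix, error level, and threshold below are assumptions.

```python
import numpy as np

def identifiable_parameter_count(S, sigma=1.0, threshold=1e-6):
    """Estimate how many parameter combinations a data set can support.

    S : (n_observations, n_parameters) sensitivity matrix, i.e. derivatives of
        model outputs with respect to each parameter (e.g. from perturbation runs).
    sigma : assumed measurement error standard deviation.
    For Gaussian errors the Fisher Information Matrix is F = S^T S / sigma^2; its
    numerical rank (eigenvalues above a relative threshold) gives the number of
    practically identifiable parameter combinations.
    """
    F = S.T @ S / sigma**2
    eigvals = np.linalg.eigvalsh(F)
    return int(np.sum(eigvals > threshold * eigvals.max())), F

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: 50 observations, 4 parameters, but the 4th parameter acts
    # through the same direction as the 1st -> only 3 identifiable combinations.
    base = rng.normal(size=(50, 3))
    S = np.column_stack([base, base[:, 0]])
    n_ident, _ = identifiable_parameter_count(S, sigma=0.1)
    print("identifiable parameter combinations:", n_ident)
```

    Repeating such a calculation for each candidate measuring campaign gives the scenario-by-scenario comparison described in the abstract.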

  16. An algorithm for deriving core magnetic field models from the Swarm data set

    NASA Astrophysics Data System (ADS)

    Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko

    2013-11-01

    In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the use of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smooths the magnetic field model in time. The algorithm to estimate the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation for describing the rotations. It has been built to handle possible weak time variations of these angles. The modeling approach and software were initially validated on a simple, noise-free, synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.
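    The quaternion representation of a sensor-to-spacecraft rotation can be illustrated with a short, generic sketch (not the GRIMM/Swarm code): a unit quaternion is converted to a rotation matrix and applied to a vector field measurement. The angle and field values below are invented.

```python
import numpy as np

def quat_to_rotmat(q):
    """Rotation matrix corresponding to a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(y*z + w*x),     2*(x*z - w*y),     1 - 2*(x*x + y*y)],
    ])

if __name__ == "__main__":
    # Small rotation about the z-axis, applied to a field vector expressed in
    # the sensor frame to obtain it in the spacecraft frame.
    theta = np.deg2rad(0.5)
    q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
    B_sensor = np.array([20000.0, -3000.0, 45000.0])   # nT, illustrative only
    print(quat_to_rotmat(q) @ B_sensor)
```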

  17. Noisy attractors and ergodic sets in models of gene regulatory networks.

    PubMed

    Ribeiro, Andre S; Kauffman, Stuart A

    2007-08-21

    We investigate the hypothesis that cell types are attractors. This hypothesis has been criticized on the grounds that real gene networks are noisy systems and, thus, do not have attractors [Kadanoff, L., Coppersmith, S., Aldana, M., 2002. Boolean Dynamics with Random Couplings. http://www.citebase.org/abstract?id=oai:arXiv.org:nlin/0204062]. Given the concept of an "ergodic set" (a set of states which the system, once it has entered, does not leave when subject to internal noise), we first show, using the Boolean network model, that if all nodes of states on attractors are subject to internal state change with a probability p due to noise, multiple ergodic sets are very unlikely. Thereafter, we show that if a fraction of those nodes are "locked" (not subject to state fluctuations caused by internal noise), multiple ergodic sets emerge. Finally, we present an example of a gene network, modelled with a realistic model of transcription, translation, and gene-gene interaction, driven by a stochastic simulation algorithm with multiple time-delayed reactions, which has internal noise and which we also subject to external perturbations. We show that, in this case, two distinct ergodic sets exist and are stable within a wide range of parameter variations and, to some extent, under external perturbations. PMID:17543998
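    A minimal sketch of the kind of numerical experiment described (not the authors' code): a random Boolean network is updated synchronously while each node is flipped with probability p, and the set of visited states is tracked; with noise acting on every node the trajectory typically wanders far beyond any single attractor, consistent with a single ergodic set. The network size, connectivity, and p below are arbitrary choices.

```python
import numpy as np

def random_boolean_network(n, k, rng):
    """Random N-K Boolean network: each node has k inputs and a random truth table."""
    inputs = [rng.choice(n, size=k, replace=False) for _ in range(n)]
    tables = [rng.integers(0, 2, size=2**k) for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables, p, rng):
    """One synchronous update, then each node's state flips with probability p."""
    idx = [int("".join(str(b) for b in state[inputs[i]]), 2) for i in range(len(state))]
    new = np.array([tables[i][idx[i]] for i in range(len(state))])
    flip = rng.random(len(state)) < p
    return np.where(flip, 1 - new, new)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    inputs, tables = random_boolean_network(n=8, k=2, rng=rng)
    state = rng.integers(0, 2, size=8)
    visited = set()
    for _ in range(5000):                       # long noisy trajectory
        state = step(state, inputs, tables, p=0.01, rng=rng)
        visited.add(tuple(int(s) for s in state))
    print("distinct states visited under noise:", len(visited))
```

    Locking a fraction of the nodes (skipping the flip for those indices) is the modification under which the abstract reports that multiple ergodic sets emerge.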

  18. The GRENE-TEA model intercomparison project (GTMIP) Stage 1 forcing data set

    NASA Astrophysics Data System (ADS)

    Sueyoshi, T.; Saito, K.; Miyazaki, S.; Mori, J.; Ise, T.; Arakida, H.; Suzuki, R.; Sato, A.; Iijima, Y.; Yabuki, H.; Ikawa, H.; Ohta, T.; Kotani, A.; Hajima, T.; Sato, H.; Yamazaki, T.; Sugimoto, A.

    2016-01-01

    Here, the authors describe the construction of a forcing data set for land surface models (including both physical and biogeochemical models; LSMs) with eight meteorological variables for the 35-year period from 1979 to 2013. The data set is intended for use in a model intercomparison study, called GTMIP, which is a part of the Japanese-funded Arctic Climate Change Research Project. In order to prepare a set of site-fitted forcing data for LSMs with realistic yet continuous entries (i.e. without missing data), four observational sites across the pan-Arctic region (Fairbanks, Tiksi, Yakutsk, and Kevo) were selected to construct a blended data set using both global reanalysis and observational data. Marked improvements were found in the diurnal cycles of surface air temperature and humidity, wind speed, and precipitation. The data sets and participation in GTMIP are open to the scientific community (doi:10.17592/001.2015093001).

  19. CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including loops, in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and to identify potential single-point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same
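    The recursive combination described above (OR gates pool their children's cut sets, AND gates take cross-products, and non-minimal sets are pruned) can be sketched in a few lines. This is a generic illustration of the algorithmic idea, not the CUTSETS code; the tree encoding is hypothetical.

```python
from itertools import product

def minimal_cut_sets(node, tree):
    """Return the minimal cut sets for `node` in a fault tree.

    `tree` maps gate names to ("AND"|"OR", [children]); names not in `tree`
    are basic events. Each cut set is returned as a frozenset of basic events.
    """
    if node not in tree:                      # basic event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [minimal_cut_sets(c, tree) for c in children]
    if gate == "OR":                          # any child's cut set suffices
        combined = [cs for sets in child_sets for cs in sets]
    else:                                     # AND: one cut set from each child
        combined = [frozenset().union(*combo) for combo in product(*child_sets)]
    # prune non-minimal sets (supersets of another cut set)
    minimal = []
    for cs in sorted(set(combined), key=len):
        if not any(m <= cs for m in minimal):
            minimal.append(cs)
    return minimal

if __name__ == "__main__":
    tree = {
        "TOP": ("OR", ["G1", "C"]),
        "G1": ("AND", ["A", "G2"]),
        "G2": ("OR", ["B", "C"]),
    }
    # For this toy tree the minimal cut sets of TOP are {C} and {A, B}.
    print([sorted(cs) for cs in minimal_cut_sets("TOP", tree)])
```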

  20. Applying the SNOMED CT Concept Model to Represent Value Sets for Head and Neck Cancer Documentation.

    PubMed

    Højen, Anne Randorff; Brønnum, Dorthe; Gøeg, Kirstine Rosenbeck; Elberg, Pia Britt

    2016-01-01

    This paper presents an analysis of the extent to which SNOMED CT is suitable for representing data within the domain of head and neck cancer. In this analysis we assess whether the concept model of SNOMED CT complies with the documentation needed within this clinical domain. Attributes from the follow-up template of the clinical quality registry for Danish Head and Neck Cancer and their respective value sets were mapped to SNOMED CT using existing mapping guidelines. Results show that post-coordination is important for representing specific types of value sets, such as absence of findings and severities. The concept model of SNOMED CT was found suitable for representing the value sets of this material. We argue for the development of further mapping guidelines for consistent post-coordination and for initiatives that demonstrate use of this important terminological feature in actual SNOMED CT implementations. PMID:27577420

  1. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    NASA Astrophysics Data System (ADS)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, even determining whether the Sun appears to rise or set at all. While the different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.

  2. Planting-Density Optimization Study for Tomato Fruit Set and Yield Based on the Functional-Structural Model GREENLAB

    NASA Astrophysics Data System (ADS)

    Yang, Lili; Wang, Yiming; Dong, Qiaoxue

    Quantification of tomato fruit set depends on the level of competition for assimilate in different environments, and this paper presents results on fruit yield and quality (fruit size) in response to the environment (mainly with respect to planting density and light). Experiments were carried out to find the relationship between tomato growth rules and plant densities, and the structural-functional model GREENLAB was developed to simulate them. The results show that increasing plant density results in an increase of biomass production per unit ground area but a reduction in single-plant fresh weight. To find rules relating organ sink and source, calibrations were performed; environmental conditions were introduced into the model to check their influence on the Q/D ratio over the plant growth period and on the fruit set ratio. It is found that changing the Q/D ratio in certain critical periods can be used to optimize fruit set and yield of greenhouse tomato.

  3. Data sets for snow cover monitoring and modelling from the National Snow and Ice Data Center

    NASA Astrophysics Data System (ADS)

    Holm, M.; Daniels, K.; Scott, D.; McLean, B.; Weaver, R.

    2003-04-01

    A wide range of snow cover monitoring and modelling data sets are pending or currently available from the National Snow and Ice Data Center (NSIDC). In-situ observations support validation experiments that enhance the accuracy of remote sensing data. In addition, remote sensing data are available in near-real time, providing coarse-resolution snow monitoring capability. Time series data beginning in 1966 are valuable for modelling efforts. NSIDC holdings include SMMR and SSM/I snow cover data, MODIS snow cover extent products, in-situ and satellite data collected for NASA's recent Cold Land Processes Experiment, and soon-to-be-released AMSR-E passive microwave products. The AMSR-E and MODIS sensors are part of NASA's Earth Observing System flying on the Terra and Aqua satellites. Characteristics of these NSIDC-held data sets, the appropriateness of products for specific applications, and data set access and availability will be presented.

  4. Habitat Demonstration Unit (HDU) Pressurized Excursion Module (PEM) Systems Integration Strategy

    NASA Technical Reports Server (NTRS)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tri, Terry; Toups, Larry; Howe, A. Scott

    2011-01-01

    The Habitat Demonstration Unit (HDU) project team constructed an analog prototype lunar surface laboratory called the Pressurized Excursion Module (PEM). The prototype unit subsystems were integrated in a short amount of time, utilizing a rapid prototyping approach that brought together over 20 habitation-related technologies from a variety of NASA centers. This paper describes the system integration strategies and lessons learned that allowed the PEM to be brought from paper design to working field prototype by a multi-center team. The system integration process was based on a rapid prototyping approach; tailored design review, test, and integration processes facilitated that approach. The use of collaboration tools, including electronic tools as well as documentation, enabled a geographically distributed team to take a paper concept to an operational prototype in approximately one year. One of the major tools used in the integration strategy was a coordinated effort to accurately model all the subsystems using computer-aided design (CAD), so that conflicts were identified before physical components came together. A deliberate effort was made following the deployment of the HDU PEM for field operations to collect lessons learned, to facilitate process improvement and inform the design of future flight or analog versions of habitat systems. Significant items within those lessons learned included limitations of the CAD integration approach and the impact of shell design on the flexibility of placing systems within the HDU shell.

  5. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    NASA Astrophysics Data System (ADS)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another as well as for similar 3D resources in the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D models under any definition. Regarding the remaining 49% which refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on knowledge and current practices in 3D modeling, and in 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are organized into three packages: General and Complementary, on contextual and structural information, and Availability, on the transition from storage to delivery format. The proposed metadata set is compared with Canadian Geospatial

  6. Relation of Phanerozoic stable isotope excursions to climate, bacterial metabolism, and major extinctions.

    PubMed

    Stanley, Steven M

    2010-11-01

    Conspicuous global stable carbon isotope excursions that are recorded in marine sedimentary rocks of Phanerozoic age and were associated with major extinctions have generally paralleled global stable oxygen isotope excursions. All of these phenomena are therefore likely to share a common origin through global climate change. Exceptional patterns for carbon isotope excursions resulted from massive carbon burial during warm intervals of widespread marine anoxic conditions. The many carbon isotope excursions that parallel those for oxygen isotopes can to a large degree be accounted for by the Q10 pattern of respiration for bacteria: As temperature changed along continental margins, where ∼90% of marine carbon burial occurs today, rates of remineralization of isotopically light carbon must have changed exponentially. This would have reduced organic carbon burial during global warming and increased it during global cooling. Also contributing to the δ(13)C excursions have been release and uptake of methane by clathrates, the positive correlation between temperature and degree of fractionation of carbon isotopes by phytoplankton at temperatures below ∼15°, and increased phytoplankton productivity during "icehouse" conditions. The Q10 pattern for bacteria and climate-related changes in clathrate volume represent positive feedbacks for climate change. PMID:21041682

  7. Relation of Phanerozoic stable isotope excursions to climate, bacterial metabolism, and major extinctions

    PubMed Central

    Stanley, Steven M.

    2010-01-01

    Conspicuous global stable carbon isotope excursions that are recorded in marine sedimentary rocks of Phanerozoic age and were associated with major extinctions have generally paralleled global stable oxygen isotope excursions. All of these phenomena are therefore likely to share a common origin through global climate change. Exceptional patterns for carbon isotope excursions resulted from massive carbon burial during warm intervals of widespread marine anoxic conditions. The many carbon isotope excursions that parallel those for oxygen isotopes can to a large degree be accounted for by the Q10 pattern of respiration for bacteria: As temperature changed along continental margins, where ∼90% of marine carbon burial occurs today, rates of remineralization of isotopically light carbon must have changed exponentially. This would have reduced organic carbon burial during global warming and increased it during global cooling. Also contributing to the δ13C excursions have been release and uptake of methane by clathrates, the positive correlation between temperature and degree of fractionation of carbon isotopes by phytoplankton at temperatures below ∼15°, and increased phytoplankton productivity during “icehouse” conditions. The Q10 pattern for bacteria and climate-related changes in clathrate volume represent positive feedbacks for climate change. PMID:21041682

  8. A long-term data set for hydrologic modeling in a snow-dominated mountain catchment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An hourly modeling data set is presented for the water years 1984 through 2008 for a snow-dominated headwater catchment. Meteorological forcing data and GIS watershed characteristics are described and provided. The meteorological data are measured at two sites within the catchment, and include pre...

  9. Brain extraction from cerebral MRI volume using a hybrid level set based active contour neighborhood model

    PubMed Central

    2013-01-01

    Background The extraction of brain tissue from cerebral MRI volume is an important pre-procedure for neuroimage analyses. The authors have developed an accurate and robust brain extraction method using a hybrid level set based active contour neighborhood model. Methods The method uses a nonlinear speed function in the hybrid level set model to eliminate boundary leakage. When using the new hybrid level set model an active contour neighborhood model is applied iteratively in the neighborhood of brain boundary. A slice by slice contour initial method is proposed to obtain the neighborhood of the brain boundary. The method was applied to the internet brain MRI data provided by the Internet Brain Segmentation Repository (IBSR). Results In testing, a mean Dice similarity coefficient of 0.95±0.02 and a mean Hausdorff distance of 12.4±4.5 were obtained when performing our method across the IBSR data set (18 × 1.5 mm scans). The results obtained using our method were very similar to those produced using manual segmentation and achieved the smallest mean Hausdorff distance on the IBSR data. Conclusions An automatic method of brain extraction from cerebral MRI volume was achieved and produced competitively accurate results. PMID:23587217
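    The two evaluation metrics quoted (Dice similarity coefficient and Hausdorff distance) are easy to reproduce for binary masks; the sketch below uses SciPy and made-up 2-D masks, and is not the authors' evaluation pipeline.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel/voxel sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

if __name__ == "__main__":
    auto = np.zeros((64, 64), dtype=bool); auto[10:50, 10:50] = True      # automatic mask
    manual = np.zeros((64, 64), dtype=bool); manual[12:52, 12:52] = True  # manual mask
    print(f"Dice = {dice(auto, manual):.3f}, Hausdorff = {hausdorff(auto, manual):.1f}")
```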

  10. Video Self-Modeling: A Job Skills Intervention with Individuals with Intellectual Disability in Employment Settings

    ERIC Educational Resources Information Center

    Goh, Ailsa E.; Bambara, Linda M.

    2013-01-01

    The purpose of this study was to explore the effectiveness of video self-modeling (VSM) to teach chained job tasks to individuals with intellectual disability in community-based employment settings. Initial empirical evaluations have demonstrated that VSM when used in combination with other instructional strategies, are effective methods to teach…

  11. Brief Report: Predictors of Outcomes in the Early Start Denver Model Delivered in a Group Setting

    ERIC Educational Resources Information Center

    Vivanti, Giacomo; Dissanayake, Cheryl; Zierhut, Cynthia; Rogers, Sally J.

    2013-01-01

    There is a paucity of studies that have looked at factors associated with responsiveness to interventions in preschoolers with autism spectrum disorder (ASD). We investigated learning profiles associated with response to the Early Start Denver Model delivered in a group setting. Our preliminary results from 21 preschool children with an ASD aged…

  12. Comparison of Fuzzy Set and Convex Model Theories in Structural Design

    NASA Astrophysics Data System (ADS)

    Pantelides, Chris P.; Ganzerli, Sara

    2001-05-01

    A methodology for the treatment of uncertainty in the loads applied to a structural system using convex models is presented and is compared to the fuzzy set finite-element method. The analytical results for a beam, a truss and a frame structure indicate that the two methods based on convex model or fuzzy set theory are in good agreement for equivalent levels of uncertainty applied to linear structures. Convex model or fuzzy set theories have shown that the worst-case scenario response of all possible load combinations cannot be captured simply by load factorisation, as is the current design practice in building codes. Design problems including uncertainty with a large number of degrees of freedom, that are not computationally feasible using conventional methods described in building codes, can be solved easily using convex model or fuzzy set theory. These results can be used directly and efficiently in the analyses required for the optimal design of structural systems, thus enabling optimisation of complex structural systems with uncertainty.

  13. Goal Setting and Performance Evaluation with Different Starting Positions: The Modeling Dilemma.

    ERIC Educational Resources Information Center

    Pray, Thomas F.; Gold, Steven

    1991-01-01

    Reviews 10 computerized business simulations used to teach business policy courses, discusses problems with measuring performance, and presents a statistically based approach to assessing performance that permits individual team goal setting as part of the computer model, and allows simulated firms to start with different financial and operating…

  14. Invariance, Artifact, and the Psychological Setting of Rasch's Model: Comments on Engelhard

    ERIC Educational Resources Information Center

    Michell, Joel

    2008-01-01

    In the following, I confine my comments mainly to the issue of invariance in relation to Rasch's model for dichotomous, ability test items. "It is senseless to seek in the logical process of mathematical elaboration a psychologically significant precision that was not present in the psychological setting of the problem." (Boring, 1920)

  15. Breaking Bad News in Counseling: Applying the PEWTER Model in the School Setting

    ERIC Educational Resources Information Center

    Keefe-Cooperman, Kathleen; Brady-Amoon, Peggy

    2013-01-01

    Breaking bad news is a stressful experience for counselors and clients. In this article, the PEWTER (Prepare, Evaluate, Warning, Telling, Emotional Response, Regrouping) model (Nardi & Keefe-Cooperman, 2006) is used as a guide to facilitate the process of a difficult conversation and promote client growth in a school setting. In this…

  16. Paradigms, Mental Models, and Mind-Sets: Triple Barriers to Transformational Change in School Systems

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    2014-01-01

    This article presents a simile for understanding the power of paradigms, mental models, and mind-sets as religion-like phenomena. The author clarifies the meaning of the three phenomena to help readers to see how the phenomena become significant sources of resistance to change. He concludes by outlining a paradigm-shifting process to assist…

  17. Technology Adoption Applied to Educational Settings: Predicting Interventionists' Use of Video-Self Modeling

    ERIC Educational Resources Information Center

    Heckman, Andrew R.

    2010-01-01

    Technology provides educators with a significant advantage in working with today's students. One particular application of technology for the purposes of academic and behavioral interventions is the use of video self-modeling (VSM). Although VSM is an evidence-based intervention, it is rarely used in educational settings. The present research…

  18. Portable and Accessible Video Modeling: Teaching a Series of Novel Skills within School and Community Settings

    ERIC Educational Resources Information Center

    Taber-Doughty, Teresa; Miller, Bridget; Shurr, Jordan; Wiles, Benjamin

    2013-01-01

    This study examined the effectiveness of self-operated video models on the skill acquisition of a series of novel tasks taught in community-based settings. In addition, the percent of independent task transitions and the duration at which four secondary students with a moderate intellectual disability transitioned between tasks was also examined.…

  19. Sensitivity of the properties of ruthenium "blue dimer" to method, basis set, and continuum model

    NASA Astrophysics Data System (ADS)

    Ozkanlar, Abdullah; Clark, Aurora E.

    2012-05-01

    The ruthenium "blue dimer" [(bpy)2RuIIIOH2]2O4+ is best known as the first well-defined molecular catalyst for water oxidation. It has been subject to numerous computational studies primarily employing density functional theory. However, those studies have been limited in the functionals, basis sets, and continuum models employed. The controversy in the calculated electronic structure and the reaction energetics of this catalyst highlights the necessity of benchmark calculations that explore the role of density functionals, basis sets, and continuum models upon the essential features of blue-dimer reactivity. In this paper, we report Kohn-Sham complete basis set (KS-CBS) limit extrapolations of the electronic structure of "blue dimer" using GGA (BPW91 and BP86), hybrid-GGA (B3LYP), and meta-GGA (M06-L) density functionals. The dependence of solvation free energy corrections on the different cavity types (UFF, UA0, UAHF, UAKS, Bondi, and Pauling) within polarizable and conductor-like polarizable continuum model has also been investigated. The most common basis sets of double-zeta quality are shown to yield results close to the KS-CBS limit; however, large variations are observed in the reaction energetics as a function of density functional and continuum cavity model employed.

  20. Analyzing Academic Achievement of Junior High School Students by an Improved Rough Set Model

    ERIC Educational Resources Information Center

    Pai, Ping-Feng; Lyu, Yi-Jia; Wang, Yu-Min

    2010-01-01

    Rough set theory (RST) is an emerging technique used to deal with problems in data mining and knowledge acquisition. However, the RST approach has not been widely explored in the field of academic achievement. This investigation developed an improved RST (IMRST) model, which employs linear discriminant analysis to determine a reduct of RST, and…

  1. Rejoinder: Evaluating Standard Setting Methods Using Error Models Proposed by Schulz

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    2006-01-01

    Schulz (2006) provides a different perspective on standard setting than that provided in Reckase (2006). He also suggests a modification to the bookmark procedure and some alternative models for errors in panelists' judgments than those provided by Reckase. This article provides a response to some of the points made by Schulz and reports some…

  2. Robust Point Sets Matching by Fusing Feature and Spatial Information Using Nonuniform Gaussian Mixture Models.

    PubMed

    Tao, Wenbing; Sun, Kun

    2015-11-01

    Most of the traditional methods that handle point-set matching between two images are based on local feature descriptors and subsequent mismatch-elimination strategies, which usually suffer from the sparsity of the initial match set, because some correct but ambiguous associations are easily filtered out by the ratio test of SIFT matching due to their second ranking in feature similarity. In this paper, we propose a nonuniform Gaussian mixture model (NGMM) for point-set matching between a pair of images, which combines feature information with the position information of the local feature points extracted from the image pair to achieve point-set matching in a GMM framework. The proposed point-set matching using an NGMM is able to change the correspondence assignments throughout the matching process and has the potential to match up even ambiguous matches correctly. The proposed NGMM framework can either be used to directly find matches between two point sets obtained from two images or be applied to remove outliers in a match set. When finding matches, NGMM tries to learn a nonrigid transformation between the two point sets and provides a probability for every found match to measure the reliability of the match. A probability threshold can then be used to obtain the final robust match set. When removing outliers, NGMM requires the vector field formed by the correct matches to be coherent; matches contradicting the coherent vector field are regarded as mismatches to be removed. A number of comparison and evaluation experiments reveal the good performance of the proposed NGMM framework in both finding matches and discarding mismatches. PMID:26111398
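    At the core of GMM-style matching is a soft-assignment step: each point of one set is treated as a Gaussian centroid, and posterior responsibilities over the other set replace hard nearest-neighbour matches. The sketch below shows only that generic step with a simplified constant outlier term (in the spirit of CPD-like formulations); it is not the published NGMM, and all parameters are assumptions.

```python
import numpy as np

def soft_assignments(X, Y, sigma2, w_outlier=0.1):
    """Posterior probability that source point y_i generated target point x_j.

    X : (M, d) target points; Y : (N, d) source points acting as GMM centroids
    with isotropic variance sigma2; w_outlier is a simplified constant term
    standing in for a uniform outlier component.
    Returns P of shape (N, M); each column sums to at most 1.
    """
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(axis=-1)   # squared distances (N, M)
    num = np.exp(-0.5 * d2 / sigma2)
    denom = num.sum(axis=0) + w_outlier
    return num / denom

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.normal(size=(5, 2))
    X = Y + 0.05 * rng.normal(size=(5, 2))      # noisy copy of the source set
    P = soft_assignments(X, Y, sigma2=0.01)
    print(P.argmax(axis=0))                     # mostly recovers the identity mapping
```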

  3. Quantum algorithms for spin models and simulable gate sets for quantum computation

    NASA Astrophysics Data System (ADS)

    van den Nest, M.; Dür, W.; Raussendorf, R.; Briegel, H. J.

    2009-11-01

    We present simple mappings between classical lattice models and quantum circuits, which provide a systematic formalism to obtain quantum algorithms to approximate partition functions of lattice models in certain complex-parameter regimes. We, e.g., present an efficient quantum algorithm for the six-vertex model as well as a two-dimensional Ising-type model. We show that classically simulating these (complex-parameter) spin models is as hard as simulating universal quantum computation, i.e., BQP complete (BQP denotes bounded-error quantum polynomial time). Furthermore, our mappings provide a framework to obtain efficiently simulable quantum gate sets from exactly solvable classical models. We, e.g., show that the simulability of Valiant’s match gates can be recovered by using the solvability of the free-fermion eight-vertex model.

  4. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    SciTech Connect

    Kumar, Prashant; Bansod, Baban K.S.; Debnath, Sanjit K.; Thakur, Praveen Kumar; Ghanshyam, C.

    2015-02-15

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the generated vulnerability maps by various models. Implicit challenges associated with the development of the groundwater vulnerability assessment models have also been identified with scientific considerations to the parameter relations and their selections. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and its applicability in different hydrogeological settings has been discussed. • Research problems of underlying vulnerability assessment models are also reported in this review paper.

  5. Using the Many-Facet Rasch Model to Evaluate Standard-Setting Judgments: Setting Performance Standards for Advanced Placement® Examinations

    ERIC Educational Resources Information Center

    Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary

    2012-01-01

    The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…

  6. Validation of the U.S. NRC coupled code system TRITON/TRACE/PARCS with the special power excursion reactor test III (SPERT III)

    SciTech Connect

    Wang, R. C.; Xu, Y.; Downar, T.; Hudson, N.

    2012-07-01

    The Special Power Excursion Reactor Test III (SPERT III) was a series of reactivity insertion experiments conducted in the 1950s. This paper describes the validation of the U.S. NRC coupled code system TRITON/PARCS/TRACE for simulating reactivity insertion accidents (RIA) using several of the SPERT III tests. The work here used the SPERT III E-core configuration tests, in which the RIA was initiated by ejecting a control rod. The resulting super-prompt reactivity excursion and negative reactivity feedback produced the familiar bell-shaped power increase and decrease. The energy deposition during such a power peak has important safety consequences and provides a validation basis for coupled core multi-physics codes. The transients of five separate tests are used to benchmark the PARCS/TRACE coupled code. The models were thoroughly validated using the original experiment documentation. (authors)

  7. Coupled level set segmentation using a point-based statistical shape model relying on correspondence probabilities

    NASA Astrophysics Data System (ADS)

    Hufnagel, Heike; Ehrhardt, Jan; Pennec, Xavier; Schmidt-Richberg, Alexander; Handels, Heinz

    2010-03-01

    In this article, we propose a unified statistical framework for image segmentation with shape prior information. The approach combines an explicitly parameterized point-based probabilistic statistical shape model (SSM) with a segmentation contour which is implicitly represented by the zero level set of a higher dimensional surface. These two aspects are unified in a Maximum a Posteriori (MAP) estimation where the level set is evolved to converge towards the boundary of the organ to be segmented based on the image information while taking into account the prior given by the SSM information. The optimization of the energy functional obtained by the MAP formulation leads to alternating updates of the level set and of the SSM fit. We then adapt the probabilistic SSM for multi-shape modeling and extend the approach to multiple-structure segmentation by introducing a level set function for each structure. During segmentation, the evolution of the different level set functions is coupled by the multi-shape SSM. First experimental evaluations indicate that our method is well suited for the segmentation of topologically complex, non-spherical and multiple-structure shapes. We demonstrate the effectiveness of the method by experiments on kidney segmentation as well as on hip joint segmentation in CT images.

  8. Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming

    PubMed Central

    Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio

    2013-01-01

    Motivation: Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight into the mechanisms underlying signal transduction and to generate reliable predictions. Results: We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present, or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input–output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. Availability: caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary information: Supplementary materials are available at Bioinformatics online. Contact: santiago.videla@irisa.fr PMID:23853063

  9. Many Parameter Sets in a Multicompartment Model Oscillator Are Robust to Temperature Perturbations

    PubMed Central

    Caplan, Jonathan S.; Williams, Alex H.

    2014-01-01

    Neurons in cold-blooded animals remarkably maintain their function over a wide range of temperatures, even though the rates of many cellular processes increase twofold, threefold, or many-fold for each 10°C increase in temperature. Moreover, the kinetics of ion channels, maximal conductances, and Ca2+ buffering each have independent temperature sensitivities, suggesting that the balance of biological parameters can be disturbed by even modest temperature changes. In stomatogastric ganglia of the crab Cancer borealis, the duty cycle of the bursting pacemaker kernel is highly robust between 7 and 23°C (Rinberg et al., 2013). We examined how this might be achieved in a detailed conductance-based model in which exponential temperature sensitivities were given by Q10 parameters. We assessed the temperature robustness of this model across 125,000 random sets of Q10 parameters. To examine how robustness might be achieved across a variable population of animals, we repeated this analysis across six sets of maximal conductance parameters that produced similar activity at 11°C. Many permissible combinations of maximal conductance and Q10 parameters were found over broad regions of parameter space and relatively few correlations among Q10s were observed across successful parameter sets. A significant portion of Q10 sets worked for at least 3 of the 6 maximal conductance sets (∼11.1%). Nonetheless, no Q10 set produced robust function across all six maximal conductance sets, suggesting that maximal conductance parameters critically contribute to temperature robustness. Overall, these results provide insight into principles of temperature robustness in neuronal oscillators. PMID:24695714
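    The Q10 convention used in such models is compact: a rate or maximal conductance measured at a reference temperature is scaled by Q10 raised to (T - T_ref)/10. The sketch below applies randomly drawn Q10 values to a few made-up conductances; the channel names, values, and Q10 range are illustrative only, not those of the published model.

```python
import numpy as np

def q10_scale(value_at_ref, q10, temp_c, temp_ref_c=11.0):
    """Scale a rate or maximal conductance from temp_ref_c to temp_c using a Q10."""
    return value_at_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gbar_at_11C = {"Na": 100.0, "CaT": 2.5, "Kd": 80.0}     # illustrative values
    # Draw one random Q10 per channel from an assumed 1-4 range
    q10s = {name: rng.uniform(1.0, 4.0) for name in gbar_at_11C}
    for T in (7.0, 11.0, 23.0):
        scaled = {n: q10_scale(g, q10s[n], T) for n, g in gbar_at_11C.items()}
        print(T, {n: round(v, 1) for n, v in scaled.items()})
```

    Repeating this for many random Q10 sets and checking whether the resulting model still bursts with the right duty cycle is the kind of screening the abstract describes.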

  10. Many parameter sets in a multicompartment model oscillator are robust to temperature perturbations.

    PubMed

    Caplan, Jonathan S; Williams, Alex H; Marder, Eve

    2014-04-01

    Neurons in cold-blooded animals remarkably maintain their function over a wide range of temperatures, even though the rates of many cellular processes increase twofold, threefold, or many-fold for each 10°C increase in temperature. Moreover, the kinetics of ion channels, maximal conductances, and Ca(2+) buffering each have independent temperature sensitivities, suggesting that the balance of biological parameters can be disturbed by even modest temperature changes. In stomatogastric ganglia of the crab Cancer borealis, the duty cycle of the bursting pacemaker kernel is highly robust between 7 and 23°C (Rinberg et al., 2013). We examined how this might be achieved in a detailed conductance-based model in which exponential temperature sensitivities were given by Q10 parameters. We assessed the temperature robustness of this model across 125,000 random sets of Q10 parameters. To examine how robustness might be achieved across a variable population of animals, we repeated this analysis across six sets of maximal conductance parameters that produced similar activity at 11°C. Many permissible combinations of maximal conductance and Q10 parameters were found over broad regions of parameter space and relatively few correlations among Q10s were observed across successful parameter sets. A significant portion of Q10 sets worked for at least 3 of the 6 maximal conductance sets (∼11.1%). Nonetheless, no Q10 set produced robust function across all six maximal conductance sets, suggesting that maximal conductance parameters critically contribute to temperature robustness. Overall, these results provide insight into principles of temperature robustness in neuronal oscillators. PMID:24695714

  11. Generation of pareto optimal ensembles of calibrated parameter sets for climate models.

    SciTech Connect

    Dalbey, Keith R.; Levy, Michael Nathan

    2010-12-01

    Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi-Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a 6-dimensional hypersphere with radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points N (larger is better); their RMS distance d to the ensemble's center (0.5553 is optimal); their average radius μ(r) (1 is optimal); and their radius standard deviation σ(r) (0 is optimal). The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
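    The four summary metrics used to compare Pareto sets on the hypersphere test problem (number of points, RMS distance to the ensemble centre, mean radius, radius standard deviation) can be computed directly; the sketch below evaluates them for random points on the positive octant of the 6-D unit sphere and only illustrates the metrics, not MOGA itself.

```python
import numpy as np

def pareto_set_metrics(points):
    """Summary metrics for a candidate Pareto set on the hypersphere test problem.

    points : (N, d) array of non-dominated solutions in [0, 1]^d.
    Returns (N, RMS distance to the ensemble centre, mean radius, radius std),
    where the 'radius' is the distance to the origin (1 is ideal for the test
    problem whose true Pareto set lies on the unit-sphere surface in the octant).
    """
    n = len(points)
    center = points.mean(axis=0)
    d_center = np.linalg.norm(points - center, axis=1)
    radii = np.linalg.norm(points, axis=1)
    return n, np.sqrt((d_center ** 2).mean()), radii.mean(), radii.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Sample points on the 6-D unit sphere restricted to the positive octant
    x = np.abs(rng.normal(size=(500, 6)))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    print(pareto_set_metrics(x))
```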

  12. Contributing factors to star excursion balance test performance in individuals with chronic ankle instability.

    PubMed

    Gabriner, Michael L; Houston, Megan N; Kirby, Jessica L; Hoch, Matthew C

    2015-05-01

    The purpose of this study was to determine the contributions of strength, dorsiflexion range of motion (DFROM), plantar cutaneous sensation (PCS), and static postural control to Star Excursion Balance Test (SEBT) performance in individuals with chronic ankle instability (CAI). Forty individuals with CAI completed isometric strength, weight-bearing DFROM, PCS, static and dynamic balance assessments. Three separate backward multiple linear regression models were calculated to determine how strength, DFROM, PCS, and static postural control contributed to each reach direction of the SEBT. Explanatory variables included dorsiflexion, inversion, and eversion strength, DFROM, PCS, and time-to-boundary mean minima (TTBMM) and standard deviation (TTBSD) in the medial-lateral (ML) and anterior-posterior (AP) directions. Criterion variables included SEBT-anterior, posteromedial, and posterolateral directions. The strength of each model was determined by the R2-value and Cohen's f2 effect size. Regression models with an effect size ≥0.15 were considered clinically relevant. All three SEBT directions produced clinically relevant regression models. DFROM and PCS accounted for 16% of the variance in SEBT-anterior reach (f2=0.19, p=0.04). Eversion strength and TTBMM-ML accounted for 28% of the variance in SEBT-posteromedial reach (f2=0.39, p<0.01). Eversion strength and TTBSD-ML accounted for 14% of the variance in SEBT-posterolateral reach (f2=0.16, p=0.06). DFROM and PCS explained a clinically relevant proportion of the variance associated with SEBT-anterior reach. Eversion strength and TTB ML explained a clinically relevant proportion of the variance in SEBT-posteromedial and posterolateral reach distances. Therefore, rehabilitation strategies should emphasize DFROM, PCS, eversion strength, and static balance to enhance dynamic postural control in patients with CAI. PMID:25845724
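    The clinical-relevance criterion used here is Cohen's f2 = R2/(1 - R2) for a multiple linear regression. A minimal sketch with synthetic stand-in predictors (not the study data) is shown below.

```python
import numpy as np

def cohens_f2(y, X):
    """Cohen's f^2 effect size for an ordinary least-squares regression of y on X."""
    X1 = np.column_stack([np.ones(len(y)), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - resid.var() / y.var()
    return r2 / (1.0 - r2), r2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eversion_strength = rng.normal(size=40)              # hypothetical predictors
    dfrom = rng.normal(size=40)
    reach = 0.4 * eversion_strength + 0.2 * dfrom + rng.normal(scale=1.0, size=40)
    f2, r2 = cohens_f2(reach, np.column_stack([eversion_strength, dfrom]))
    print(f"R^2 = {r2:.2f}, f^2 = {f2:.2f}  (>= 0.15 treated as clinically relevant)")
```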

  13. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

    We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, intended especially for cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations and consequently the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes to use a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework in comparison to state-of-the-art algorithms. Significant reductions in alignment errors are achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.

  14. Towards deep inclusion for equity-oriented health research priority-setting: A working model.

    PubMed

    Pratt, Bridget; Merritt, Maria; Hyder, Adnan A

    2016-02-01

    Growing consensus that health research funders should align their investments with national research priorities presupposes that such national priorities exist and are just. Arguably, justice requires national health research priority-setting to promote health equity. Such a position is consistent with recommendations made by the World Health Organization and at global ministerial summits that health research should serve to reduce health inequalities between and within countries. Thus far, no specific requirements for equity-oriented research priority-setting have been described to guide policymakers. As a step towards the explication and defence of such requirements, we propose that deep inclusion is a key procedural component of equity-oriented research priority-setting. We offer a model of deep inclusion that was developed by applying concepts from work on deliberative democracy and development ethics. This model consists of three dimensions--breadth, qualitative equality, and high-quality non-elite participation. Deep inclusion is captured not only by who is invited to join a decision-making process but also by how they are involved and by when non-elite stakeholders are involved. To clarify and illustrate the proposed dimensions, we use the sustained example of health systems research. We conclude by reviewing practical challenges to achieving deep inclusion. Despite the existence of barriers to implementation, our model can help policymakers and other stakeholders design more inclusive national health research priority-setting processes and assess these processes' depth of inclusion. PMID:26812416

  15. GeneTopics - interpretation of gene sets via literature-driven topic models

    PubMed Central

    2013-01-01

    Background Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, don't capture relevant biological themes or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes providing directly interpretable keywords and corresponding documents for literature research. Results We validate our method based on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets, (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. Conclusions Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly
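    The general recipe (an unsupervised topic model fitted to a gene-specific text corpus, followed by inspection of top terms and gene-topic loadings) can be sketched with scikit-learn's LDA. The toy corpus, gene names, and fixed topic count below are made up; the paper's automatic topic-number selection and gene relevancy scoring are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny made-up "gene-specific" corpus: one document of literature text per gene.
docs = {
    "GENE_A": "cardiac hypertrophy signaling calcium remodeling heart failure",
    "GENE_B": "beta cell insulin secretion glucose pancreatic islet",
    "GENE_C": "insulin resistance glucose metabolism pancreatic beta cell",
    "GENE_D": "cardiac remodeling fibrosis hypertrophy heart",
}

vec = CountVectorizer()
X = vec.fit_transform(docs.values())

# Fit a small topic model; the number of topics is fixed to 2 for illustration.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")

# Gene-to-topic association: which genes load on which topic.
print(lda.transform(X).round(2))
```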

  16. Constitutive modeling of Radiation effects on the Permanent Set in a silicone elastomer

    SciTech Connect

    Maiti, A; Gee, R; Weisgraber, T; Chinn, S; Maxwell, R

    2008-03-10

    When a networked polymeric composite under high stress is subjected to irradiation, the resulting chemical changes like chain scissioning and cross-link formation can lead to permanent set and altered elastic modulus. Using a commercial silicone elastomer as a specific example we show that a simple 2-stage Tobolsky model in conjunction with Fricker's stress-transfer function can quantitatively reproduce all experimental data as a function of radiation dosage and the static strain at which radiation is turned on, including permanent set, stress-strain response, and net cross-linking density.

  17. Mathematical modeling of vibrations in turbogenerator sets of Sayano-Shushenskaya Hydroelectric Power Station

    NASA Astrophysics Data System (ADS)

    Leonov, G. A.; Kuznetsov, N. V.; Solovyeva, E. P.

    2016-02-01

    Oscillations in turbogenerator sets, which consist of a synchronous generator, a hydraulic turbine, and an automatic speed regulator, are investigated. This study was motivated by the emergency that took place at the Sayano-Shushenskaya Hydroelectric Power Station in 2009. During modeling of the parameters of turbogenerator sets of the Sayano-Shushenskaya Hydroelectric Power Station, the ranges corresponding to undesired oscillation regimes were determined. These ranges agree with the results of the full-scale tests of the hydropower units of the Sayano-Shushenskaya Hydroelectric Power Station performed in 1988.

  18. Camera response prediction for various capture settings using the spectral sensitivity and crosstalk model.

    PubMed

    Qiu, Jueqin; Xu, Haisong

    2016-09-01

    In this paper, a camera response formation model is proposed to accurately predict the responses of images captured under various exposure settings. Differing from earlier works that estimated the camera's relative spectral sensitivity, our model constructs the physical spectral sensitivity curves and device-dependent parameters that convert the absolute spectral radiances of target surfaces to the camera readout responses. With this model, the camera responses to miscellaneous combinations of surfaces and illuminants could be accurately predicted. The model therefore makes it convenient to build an "imaging simulator" for camera-based colorimetric and photometric research. PMID:27607275
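
    A minimal sketch of such a forward response-formation model is given below: per-channel readouts are predicted by integrating absolute spectral radiance against the physical spectral sensitivities, scaling by exposure time and gain, applying a simple channel-mixing crosstalk matrix, and clipping at saturation. The function name, black level, and bit depth are illustrative assumptions, not the fitted parameters of the paper.

    ```python
    # Hedged sketch of a camera response formation model: readout digital numbers
    # are predicted from absolute spectral radiance, per-channel spectral
    # sensitivities, exposure parameters, and a simple crosstalk mixing matrix.
    # Parameter values here are illustrative placeholders.
    import numpy as np

    def predict_response(radiance, sensitivity, wavelengths,
                         exposure_time, gain, crosstalk,
                         black_level=64, full_well=4095):
        """radiance: (n_wl,) W m^-2 sr^-1 nm^-1; sensitivity: (n_ch, n_wl)."""
        d_lambda = np.gradient(wavelengths)                 # wavelength spacing (nm)
        # Per-channel linear signal: integral of L(lambda) * S_c(lambda) d(lambda)
        signal = (sensitivity * radiance) @ d_lambda
        signal = gain * exposure_time * signal
        signal = crosstalk @ signal                         # inter-channel crosstalk
        dn = black_level + signal
        return np.clip(dn, 0, full_well)                    # clipped at saturation
    ```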

  19. Risk factor model to predict a missed clinic appointment in an urban, academic, and underserved setting.

    PubMed

    Torres, Orlando; Rothberg, Michael B; Garb, Jane; Ogunneye, Owolabi; Onyema, Judepatricks; Higgins, Thomas

    2015-04-01

    In the chronic care model, a missed appointment decreases continuity, adversely affects practice efficiency, and can harm quality of care. The aim of this study was to identify predictors of a missed appointment and develop a model to predict an individual's likelihood of missing an appointment. The research team performed a retrospective study in an urban, academic, underserved outpatient internal medicine clinic from January 2008 to June 2011. A missed appointment was defined as either a "no-show" or cancellation within 24 hours of the appointment time. Both patient and visit variables were considered. The patient population was randomly divided into derivation and validation sets (70/30). A logistic model from the derivation set was applied in the validation set. During the period of study, 11,546 patients generated 163,554 encounters; 45% of appointments in the derivation sample were missed. In the logistic model, percent previously missed appointments, wait time from booking to appointment, season, day of the week, provider type, and patient age, sex, and language proficiency were all associated with a missed appointment. The strongest predictors were percentage of previously missed appointments and wait time. Older age and non-English proficiency both decreased the likelihood of missing an appointment. In the validation set, the model had a c-statistic of 0.71, and showed no gross lack of fit (P=0.63), indicating acceptable calibration. A simple risk factor model can assist in predicting the likelihood that an individual patient will miss an appointment. PMID:25299396
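
    The derivation/validation workflow above maps onto a short scikit-learn sketch: a 70/30 split, a logistic model on patient and visit predictors, and the c-statistic (ROC AUC) evaluated on the held-out set. Column names are hypothetical, and the paper split by patient rather than by encounter, which this simplified version does not enforce.

    ```python
    # Simplified sketch of the derivation/validation workflow described above.
    # Column names are hypothetical; the split here is by encounter, not patient.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def fit_missed_appointment_model(visits: pd.DataFrame):
        predictors = ["pct_prior_missed", "wait_days", "season", "weekday",
                      "provider_type", "age", "sex", "limited_english"]
        X = pd.get_dummies(visits[predictors], drop_first=True)
        y = visits["missed"]                     # 1 = no-show or late cancellation

        X_dev, X_val, y_dev, y_val = train_test_split(
            X, y, test_size=0.30, random_state=0, stratify=y)

        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        c_stat = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        return model, c_stat                     # c_stat ~ the reported c-statistic
    ```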

  20. Model choice problems using approximate Bayesian computation with applications to pathogen transmission data sets.

    PubMed

    Lee, Xing Ju; Drovandi, Christopher C; Pettitt, Anthony N

    2015-03-01

    Analytically or computationally intractable likelihood functions can arise in complex statistical inferential problems making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such inferential problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been predominantly applied to parameter estimation problems and less to model choice problems due to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1-28) where the posterior mean of the model parameters estimated through regression formed the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity for thorough exploration of the model space. This algorithm was applied to a validating example to demonstrate the robustness of the algorithm across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates the utility of the algorithm in inferring preference of particular transmission models for the pathogens. PMID:25303085
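
    For orientation, a plain rejection-ABC scheme for model choice is sketched below: simulate from each candidate model, accept simulations whose summary statistics fall within a tolerance of the observed summaries, and approximate posterior model probabilities by per-model acceptance fractions. This is the baseline idea only; it omits the regression-adjusted summaries, the multinomial logistic step, and the reversible jump MCMC component of the algorithm described above.

    ```python
    # Baseline rejection-ABC for model choice (not the regression-adjusted
    # algorithm of the paper): posterior model probabilities are approximated
    # by the fraction of accepted simulations contributed by each model.
    import numpy as np

    def abc_model_choice(obs_summary, simulators, prior_samplers, summarize,
                         n_sims=20000, quantile=0.01, rng=None):
        """simulators / prior_samplers: lists of callables, one per candidate model."""
        rng = rng if rng is not None else np.random.default_rng(0)
        labels, dists = [], []
        for k, (simulate, draw_prior) in enumerate(zip(simulators, prior_samplers)):
            for _ in range(n_sims):
                theta = draw_prior(rng)
                s = summarize(simulate(theta, rng))
                dists.append(np.linalg.norm(np.asarray(s) - obs_summary))
                labels.append(k)
        labels, dists = np.array(labels), np.array(dists)
        eps = np.quantile(dists, quantile)          # tolerance from the distances
        accepted = labels[dists <= eps]
        return np.bincount(accepted, minlength=len(simulators)) / max(len(accepted), 1)
    ```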

  1. Regional primitive equation modeling and analysis of the polymode data set

    NASA Astrophysics Data System (ADS)

    Spall, Michael A.

    A regional, hybrid coordinate, primitive equation (PE) model is applied to a 60-day period of the POLYMODE data set. The initialization techniques and open boundary conditions introduced by Spall and Robinson are shown to produce stable, realistic, and reasonably accurate hindcasts for the 2-month data set. Comparisons with quasi-geostrophic (QG) modeling studies indicate that the PE model reproduced the jet formation that dominates the region more accurately than did the QG model. When the PE model used boundary conditions that were partially adjusted by the QG model, the resulting fields were very similar to the QG fields, indicating a rapid degradation of small-scale features near the boundaries in the QG calculation. A local term-by-term primitive equation energy and vorticity analysis package is also introduced. The full vorticity, horizontal divergence, kinetic energy, and available gravitational energy equations are solved diagnostically from the output of the regional PE model. Through the analysis of a time series of horizontal maps, the dominant processes in the flow are illustrated. The individual terms are also integrated over the region of jet formation to highlight the net balances as a function of time. The formation of the deep thermocline jet is shown to be due to horizontal advection through the boundary, baroclinic conversion in the deep thermocline and vertical pressure work, which exports the deep energy to the upper thermocline levels. It is concluded here that the PE model reproduces the observed jet formation better than the QG model because of the increased horizontal advection and stronger vertical pressure work. Although the PE model is shown to be superior to the QG model in this application, it is believed that both PE and QG models can play an important role in the regional study of mid-ocean mesoscale eddies.

  2. Head Excursion of Restrained Human Volunteers and Hybrid III Dummies in Steady State Rollover Tests

    PubMed Central

    Moffatt, Edward; Hare, Barry; Hughes, Raymond; Lewis, Lance; Iiyama, Hiroshi; Curzon, Anne; Cooper, Eddie

    2003-01-01

    Seatbelts provide substantial benefits in rollover crashes, yet occupants still receive head and neck injuries from contacting the vehicle roof interior when the roof exterior strikes the ground. Prior research has evaluated rollover restraint performance utilizing anthropomorphic test devices (dummies), but little dynamic testing has been done with human volunteers to learn how they move during rollovers. In this study, the vertical excursion of the head of restrained dummies and human subjects was measured in a vehicle being rotated about its longitudinal roll axis at roll rates from 180-to-360 deg/sec and under static inversion conditions. The vehicle’s restraint design was the commonly used 3-point seatbelt with continuous loop webbing and a sliding latch plate. This paper presents an analysis of the observed occupant motion and provides a comparison of dummy and human motion under similar test conditions. Thirty-five tests (eighteen static and seventeen dynamic) were completed using two different sizes of dummies and human subjects in both near and far-side roll directions. The research indicates that far-side rollovers cause the restrained test subjects to have greater head excursion than near-side rollovers, and that static inversion testing underestimates head excursion for far-side occupants. Human vertical head excursion of up to 200 mm was found at a roll rate of 220 deg/sec. Humans exhibit greater variability in head excursion in comparison to dummies. Transfer of seatbelt webbing through the latch plate did not correlate directly with differences in head excursion. PMID:12941241

  3. Improved Quaternary North Atlantic stratigraphy using relative paleointensity (RPI), oxygen isotopes, and magnetic excursions (Invited)

    NASA Astrophysics Data System (ADS)

    Channell, J. E.

    2013-12-01

    Improving the resolution of Quaternary marine stratigraphy is one of the major challenges in paleoceanography. IODP Expedition 303/306, and ODP Legs 162 and 172, have yielded multiple high-resolution records (mean sedimentation rates in the 7-20 cm/kyr range) of relative paleointensity (RPI) that are accompanied by oxygen isotope data and extend through much of the Quaternary. Tandem fit of RPI and oxygen isotope data to calibrated templates (LR04 and PISO), using the Match protocol, yields largely consistent stratigraphies, implying that both RPI and oxygen isotope data are dominated by regional/global signals. Based on the recent geomagnetic field, RPI can be expected to be a global signal (i.e. dominated by the axial dipole field) when recorded at sedimentation rates less than several decimeters/kyr. Magnetic susceptibility, on the other hand, is a local/regional lithologic signal, and therefore less useful for long-distance correlation. Magnetic excursions are directional phenomena and, when adequately recorded, are manifest as paired reversals in which the virtual geomagnetic poles (VGPs) reach high latitudes in the opposite hemisphere, and they occupy minima in RPI records. Reversed VGPs imply that excursions are attributable to the main axial dipole, and therefore provide global stratigraphy. The so-called Iceland Basin excursion is recorded at many IODP/ODP sites and lies at the MIS 6/7 boundary at ~188 ka, with a duration of 2-3 kyr. Other excursions in the Brunhes chron are less commonly recorded because their duration (perhaps <~1 kyr) requires sedimentation rates >20 cm/kyr to be adequately recorded. On the other hand, several excursions within the Matuyama Chron are more commonly recorded in North Atlantic drift sediments due to relatively elevated durations. With some notable exceptions (e.g. Iberian Margin), high quality RPI records from North Atlantic sediments, together with magnetic excursions, can be used in tandem with oxygen isotope data to

  4. Dynamic changes in sulfate sulfur isotopes preceding the Ediacaran Shuram Excursion

    NASA Astrophysics Data System (ADS)

    Osburn, Magdalena R.; Owens, Jeremy; Bergmann, Kristin D.; Lyons, Timothy W.; Grotzinger, John P.

    2015-12-01

    Large excursions in δ13C and δ34S are found in sedimentary rocks from the Ediacaran Period that may provide detailed mechanistic information about oxidation of Earth's surface. However, poor stratigraphic resolution and diagenetic concerns have thus far limited the interpretation of these records. Here, we present a high-resolution record of carbon and sulfur isotopes from the Khufai Formation, leading up to and including the onset of the Shuram carbon isotope excursion. We document large coherent excursions in the sulfur isotope composition and concentration of carbonate-associated sulfate (CAS) that occur both independently and synchronously with the carbon isotope excursion. Isotopic changes appear decoupled from major stratigraphic surfaces and facies changes, suggesting regional or global processes rather than local controls. Our data suggest that very low marine sulfate concentrations are maintained at least through the middle-Khufai Formation and require that the burial fraction of pyrite and the fractionation factor between sulfate and pyrite necessarily change through deposition. Reconciliation of simultaneous, up-section increases in marine sulfate concentration and δ34SCAS requires the introduction of strongly 34S-enriched sulfate, possibly from weathering of Cryogenian and earlier Ediacaran 34S-enriched pyrite. Our analysis of the onset of the Shuram carbon isotope excursion, observed in stratigraphic and lithologic context, is not consistent with diagenetic or authigenic formation mechanisms. Instead, we observe a contemporaneous negative excursion in sulfate δ34S suggesting linked primary perturbations to the carbon and sulfur isotope systems. This work further constrains the size, isotopic composition, and potential input fluxes of the Ediacaran marine sulfate reservoir, placing mechanistic constraints on possible drivers of extreme isotopic perturbations during this critical period in Earth history.

  5. Statistical evaluation of a new air dispersion model against AERMOD using the Prairie Grass data set.

    PubMed

    Armani, Fernando Augusto Silveira; de Almeida, Ricardo Carvalho; Dias, Nelson Luís da Costa

    2014-02-01

    In this work, the authors present a statistical assessment of two atmospheric dispersion models. One of them, AERMOD (American Meteorological Society/Environmental Protection Agency Regulatory Model), adopted by the U.S. Environmental Protection Agency, is widely used in many countries and is taken here as a baseline to assess the performance of a newly proposed model, MODELAR (Modelo Regulatório de Qualidade do Ar). In terms of parameterizations and modeling options, MODELAR is a somewhat simple model. It is currently being considered for adoption as the regulatory model in Paraná State, Brazil. The well-known Prairie Grass data set, already used in earlier evaluations of the same version of AERMOD analyzed here, was used to perform model assessment. The evaluations employed well-established statistical performance descriptors and techniques. The results indicate that MODELAR is a slightly better predictor, for the Prairie Grass data set, of concentrations under unstable conditions, whereas AERMOD has a better performance under near-neutral and stable conditions. Moreover, cases of severe overestimation and underestimation, as detected by the Factor of Two index, are clearly associated with extreme stability conditions (both unstable and stable), stressing the need for better parameterizations under these conditions. PMID:24654389
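
    Two of the standard performance descriptors used in this kind of dispersion-model evaluation, fractional bias (FB) and the fraction of predictions within a factor of two of the observations (FAC2), can be computed as in the sketch below (assuming strictly positive observed concentrations).

    ```python
    # Standard dispersion-model evaluation metrics: fractional bias (FB) and the
    # fraction of predictions within a factor of two of the observations (FAC2).
    import numpy as np

    def fractional_bias(obs, pred):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

    def fac2(obs, pred):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        ratio = pred / obs                       # assumes obs > 0
        return np.mean((ratio >= 0.5) & (ratio <= 2.0))
    ```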

  6. Description of a practice model for pharmacist medication review in a general practice setting

    PubMed Central

    Brandt, Mette; Hallas, Jesper; Graabæk, Trine; Pottegård, Anton

    2014-01-01

    Background Practical descriptions of procedures used for pharmacists’ medication reviews are sparse. Objective To describe a model for medication review by pharmacists tailored to a general practice setting. Methods A stepwise model is described. The model is based on data from the medical chart and clinical or laboratory data. The medication review focuses on the diagnoses of the patient instead of the individual drugs. Patient interviews are not part of the model. The model was tested in a pilot study by conducting medical reviews on 50 polypharmacy patients (i.e. receiving 7 or more drugs for regular use). Results The model contained seven main steps. Information about the patient and current treatment was collected in the first three steps, followed by identification of possible interventions related to either diagnoses or drugs in the fourth and fifth step. The sixth and seventh step concerned the reporting of interventions and the considerations of the GPs. 208 interventions were proposed among the 50 patients. The acceptance rate among the GPs was 82%. The most common interventions were lack of clinical or laboratory data (n=57, 27%) and drugs that should be discontinued as they had no indication (n=47, 23%). Most interventions were aimed at cardiovascular drugs. Conclusion We have provided a detailed description of a practical approach to pharmacists’ medication review in a GP setting. The model was tested and found to be usable, and to deliver a medication review with high acceptance rates. PMID:25243030

  7. A Model Plan for the Supervision and Evaluation of Therapy Services in Educational Settings. TIES: Therapy in Educational Settings.

    ERIC Educational Resources Information Center

    Reed, Penny; And Others

    The manual serves as a model for school districts developing procedures for supervising and evaluating their therapy services. The narrative is addressed to therapists rather than supervisors so that school districts can photocopy or adapt sections of the manual and assemble customized manuals for therapists in their programs. The first chapter,…

  8. ImSET 3.1: Impact of Sector Energy Technologies Model Description and User's Guide

    SciTech Connect

    Scott, Michael J.; Livingston, Olga V.; Balducci, Patrick J.; Roop, Joseph M.; Schultz, Robert W.

    2009-05-22

    This 3.1 version of the Impact of Sector Energy Technologies (ImSET) model represents the next generation of the previously-built ImSET model (ImSET 2.0) that was developed in 2005 to estimate the macroeconomic impacts of energy-efficient technology in buildings. In particular, a special-purpose version of the Benchmark National Input-Output (I-O) model was designed specifically to estimate the national employment and income effects of the deployment of Office of Energy Efficiency and Renewable Energy (EERE)–developed energy-saving technologies. In comparison with the previous versions of the model, this version features the use of the U.S. Bureau of Economic Analysis 2002 national input-output table and the central processing code has been moved from the FORTRAN legacy operating environment to a modern C++ code. ImSET is also easier to use than extant macroeconomic simulation models and incorporates information developed by each of the EERE offices as part of the requirements of the Government Performance and Results Act. While it does not include the ability to model certain dynamic features of markets for labor and other factors of production featured in the more complex models, for most purposes these excluded features are not critical. The analysis is credible as long as the assumption is made that relative prices in the economy would not be substantially affected by energy efficiency investments. In most cases, the expected scale of these investments is small enough that neither labor markets nor production cost relationships should seriously affect national prices as the investments are made. The exact timing of impacts on gross product, employment, and national wage income from energy efficiency investments is not well-enough understood that much special insight can be gained from the additional dynamic sophistication of a macroeconomic simulation model. Thus, we believe that this version of ImSET is a cost-effective solution to estimating the economic
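
    The macroeconomic core of input-output models of this kind is the open Leontief calculation: a change in final demand is propagated through the inter-industry structure to give changes in gross output, and employment and income follow from sectoral coefficients. The sketch below shows only that generic calculation with made-up three-sector numbers; it is not ImSET code or ImSET data.

    ```python
    # Generic open Leontief input-output impact calculation of the kind that
    # underlies I-O impact models such as ImSET. All numbers are illustrative.
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],          # inter-industry coefficients
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.15]])
    emp_per_output = np.array([8.0, 5.0, 12.0])     # jobs per $M of output
    wage_per_output = np.array([0.35, 0.30, 0.40])  # wage income per $ of output

    # Spending shifted by an efficiency investment (sums to zero across sectors).
    delta_final_demand = np.array([-2.0, 0.5, 1.5])  # $M

    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    delta_output = leontief_inverse @ delta_final_demand
    print("output change ($M):      ", delta_output.round(3))
    print("employment change (jobs):", (emp_per_output * delta_output).round(1))
    print("income change ($M):      ", (wage_per_output * delta_output).round(3))
    ```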

  9. A paradigm for human body finite element model integration from a set of regional models.

    PubMed

    Thompson, A B; Gayzik, F S; Moreno, D P; Rhyne, A C; Vavalle, N A; Stitzel, J D

    2012-01-01

    Computational modeling offers versatility, scalability, and cost advantages to researchers in the trauma and injury biomechanics communities. The Global Human Body Models Consortium (GHBMC) is a group of government, industry, and academic researchers developing human body models (HBMs) that aim to become the standard tool to meet this growing research need. The objective of this study is to present the methods used to develop the average seated male occupant model (M50, weight = 78 kg, height = 175 cm) from five separately validated body region models (BRMs). BRMs include the head, neck, thorax, abdomen, and a combined pelvis and lower extremity model. Modeling domains were split at the atlanto-occipital joint, C7-T1 boundary, diaphragm, abdominal cavity (peritoneum/retroperitoneum), and the acetabulum respectively. BRM meshes are based on a custom CAD model of the seated male built from a multi-modality imaging protocol of a volunteer subject found in literature.[1] Various meshing techniques were used to integrate the full body model (FBM) including 1-D beam and discrete element connections (e.g. ligamentous structures), 2D shell nodal connections (e.g. inferior vena cava to right atrium), 3D hexahedral nodal connections (e.g. soft tissue envelope connections between regions), and contact definitions varying from tied (muscle insertions) to sliding (liver and diaphragm contact). The model was developed in a general-purpose finite element code, LS-Dyna (LTSC, Livermore, CA) R4.2.1., and consists of 1.95 million elements and 1.3 million nodes. The element breakdown by type is 41% hexahedral, 33.7% tetrahedral, 19.5% quad shells and 5% tria shell. The integration methodology presented highlights the viability of using a collaborative development paradigm for the construction of HBMs, and will be used as template for expanding the suite of GHBMC models. PMID:22846315

  10. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  11. SAF - Sets and Fields parallel I/O and scientific data modeling system

    Energy Science and Technology Software Center (ESTSC)

    2005-07-01

    SAF is being developed as part of the Data Models and Formats (DMF) component of the Accelerated Strategic Computing Initiative (ASCI). SAF represents a revolutionary approach to interoperation of high performance, scientific computing applications based upon rigorous, math oriented data modeling principles. Previous technologies have required all applications to use the same data structures and/or mesh objects to represent scientific data or lead to an ever expanding set of incrementally different data structures and/or mesh objects. SAF addresses this problem by providing a small set of mathematical building blocks, sets, relations and fields, out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. Sets and fields building blocks are at once, both primitive and abstract: * They are primitive enough to model a wide variety of scientific data. * They are abstract enough to model the data in terms of what it represents in a mathematical or physical sense independent of how it is represented in an implementation sense. For example, while there are many ways to represent the airflow over the wing of a supersonic aircraft in a computer program, there is only one mathematical/physical interpretation: a field of 3D velocity vectors over a 2D surface. This latter description is immutable. It is independent of any particular representation or implementation choices. Understanding this what versus how relationship, that is what is represented versus how it is represented, is key to developing a solution for large scale integration of scientific software.

  12. SAF - Sets and Fields parallel I/O and scientific data modeling system

    SciTech Connect

    Matzke, Robb; Illescas, Eric; Espen, Peter; Jones, Jake S.; Sjaardema, Gregory; Miller, Mark C.; Schoof, Larry A.; Reus, James F.; Arrighi, William; Hitt, Ray T.; O'Brien, Matthew J.

    2005-07-01

    SAF is being developed as part of the Data Models and Formats (DMF) component of the Accelerated Strategic Computing Initiative (ASCI). SAF represents a revolutionary approach to interoperation of high performance, scientific computing applications based upon rigorous, math oriented data modeling principles. Previous technologies have required all applications to use the same data structures and/or mesh objects to represent scientific data or lead to an ever expanding set of incrementally different data structures and/or mesh objects. SAF addresses this problem by providing a small set of mathematical building blocks, sets, relations and fields, out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. Sets and fields building blocks are at once, both primitive and abstract: * They are primitive enough to model a wide variety of scientific data. * They are abstract enough to model the data in terms of what it represents in a mathematical or physical sense independent of how it is represented in an implementation sense. For example, while there are many ways to represent the airflow over the wing of a supersonic aircraft in a computer program, there is only one mathematical/physical interpretation: a field of 3D velocity vectors over a 2D surface. This latter description is immutable. It is independent of any particular representation or implementation choices. Understanding this what versus how relationship, that is what is represented versus how it is represented, is key to developing a solution for large scale integration of scientific software.

  13. On Numerical Aspects of Bayesian Model Selection in High and Ultrahigh-dimensional Settings

    PubMed Central

    Johnson, Valen E.

    2014-01-01

    This article examines the convergence properties of a Bayesian model selection procedure based on a non-local prior density in ultrahigh-dimensional settings. The performance of the model selection procedure is also compared to popular penalized likelihood methods. Coupling diagnostics are used to bound the total variation distance between iterates in a Markov chain Monte Carlo (MCMC) algorithm and the posterior distribution on the model space. In several simulation scenarios in which the number of observations exceeds 100, rapid convergence and high accuracy of the Bayesian procedure are demonstrated. Conversely, the coupling diagnostics are successful in diagnosing lack of convergence in several scenarios for which the number of observations is less than 100. The accuracy of the Bayesian model selection procedure in identifying high probability models is shown to be comparable to commonly used penalized likelihood methods, including extensions of smoothly clipped absolute deviations (SCAD) and least absolute shrinkage and selection operator (LASSO) procedures. PMID:24683431

  14. Non-Rigid Object Contour Tracking via a Novel Supervised Level Set Model.

    PubMed

    Sun, Xin; Yao, Hongxun; Zhang, Shengping; Li, Dong

    2015-11-01

    We present a novel approach to non-rigid object contour tracking in this paper based on a supervised level set model (SLSM). In contrast to most existing trackers that use a bounding box to specify the tracked target, the proposed method extracts the accurate contours of the target as tracking output, which achieves a better description of non-rigid objects while reducing background pollution of the target model. Moreover, conventional level set models only emphasize regional intensity consistency and consider no priors. Differently, the curve evolution of the proposed SLSM is object-oriented and supervised by specific knowledge of the targets we want to track. Therefore, the SLSM can ensure a more accurate convergence to the exact targets in tracking applications. In particular, we first construct the appearance model for the target in an online boosting manner due to its strong discriminative power between the object and the background. Then, the learnt target model is incorporated to model the probabilities of the level set contour in a Bayesian manner, leading the curve to converge to the candidate region with maximum likelihood of being the target. Finally, the accurate target region qualifies the samples fed to the boosting procedure as well as the target model prepared for the next time step. We first describe the proposed mechanism of the two-phase SLSM for single-target tracking, then give its generalized multi-phase version for dealing with multi-target tracking cases. A positive decrease rate is used to adjust the learning pace over time, enabling tracking to continue under partial and total occlusion. Experimental results on a number of challenging sequences validate the effectiveness of the proposed method. PMID:26099142

  15. Impact of CAMEX-4 Data Sets for Hurricane Forecasts using a Global Model

    NASA Technical Reports Server (NTRS)

    Kamineni, Rupa; Krishnamurti, T. N.; Pattnaik, S.; Browell, Edward V.; Ismail, Syed; Ferrare, Richard A.

    2005-01-01

    This study explores the impact on hurricane data assimilation and forecasts from the use of dropsondes and remote-sensed moisture profiles from the airborne Lidar Atmospheric Sensing Experiment (LASE) system. We show that the use of these additional data sets, above those from the conventional world weather watch, has a positive impact on hurricane predictions. The forecast tracks and intensity from the experiments show a marked improvement compared to the control experiment where such data sets were excluded. A study of the moisture budget in these hurricanes showed enhanced evaporation and precipitation over the storm area. This resulted in these data sets making a large impact on the estimate of mass convergence and moisture fluxes, which were much smaller in the control runs. Overall this study points to the importance of high vertical resolution humidity data sets for improved model results. We note that the forecast impact from the moisture profiling data sets for some of the storms is even larger than the impact from the use of dropwindsonde based winds.

  16. Quantile regression model for a diverse set of chemicals: application to acute toxicity for green algae.

    PubMed

    Villain, Jonathan; Lozano, Sylvain; Halm-Lemeille, Marie-Pierre; Durrieu, Gilles; Bureau, Ronan

    2014-12-01

    The potential of quantile regression (QR) and quantile support vector machine regression (QSVMR) was analyzed for defining quantitative structure-activity relationship (QSAR) models associated with a diverse set of chemicals toward a particular endpoint. This study focused on a specific sensitive endpoint (acute toxicity to algae) for which even a narcosis QSAR model is not actually clear. An initial dataset including more than 401 ecotoxicological data points for one species of algae (Selenastrum capricornutum) was defined. This set corresponds to a large sample of chemicals ranging from classical organic chemicals to pesticides. From this original data set, the selection of the different subsets was made in terms of the notion of toxic ratio (TR), a parameter based on the ratio between predicted and experimental values. The robustness of QR and QSVMR to outliers was clearly observed, demonstrating that this approach is of major interest for QSAR associated with a diverse set of chemicals. We focused particularly on descriptors related to molecular surface properties. PMID:25431186
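
    A minimal quantile-regression fit of the kind discussed above can be set up with statsmodels, regressing log toxicity on molecular descriptors at a chosen quantile so the fitted relation tracks the most sensitive chemicals rather than the conditional mean. The descriptor and response column names are hypothetical placeholders.

    ```python
    # Minimal quantile-regression QSAR sketch with statsmodels.
    # Column names (log_EC50, logP, molecular_surface_area) are placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_quantile_qsar(data: pd.DataFrame, tau: float = 0.95):
        """Fit a quantile regression of log toxicity on molecular descriptors."""
        model = smf.quantreg("log_EC50 ~ logP + molecular_surface_area", data)
        return model.fit(q=tau)

    # Usage (with a suitable DataFrame df):
    # res = fit_quantile_qsar(df, tau=0.95); print(res.summary())
    ```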

  17. Regionalisation of statistical model outputs creating gridded data sets for Germany

    NASA Astrophysics Data System (ADS)

    Höpp, Simona Andrea; Rauthe, Monika; Deutschländer, Thomas

    2016-04-01

    The goal of the German research program ReKliEs-De (regional climate projection ensembles for Germany, http://reklies.hlug.de) is to distribute robust information about the range and the extremes of future climate for Germany and its neighbouring river catchment areas. This joint research project is supported by the German Federal Ministry of Education and Research (BMBF) and was initiated by the German Federal States. The project results are meant to support the development of adaptation strategies to mitigate the impacts of future climate change. The aim of our part of the project is to adapt and transfer the regionalisation methods of the gridded hydrological data set (HYRAS) from daily station data to the station-based statistical regional climate model output of WETTREG (regionalisation method based on weather patterns). The WETTREG model output covers the period of 1951 to 2100 with a daily temporal resolution. For this, we generate a gridded data set of the WETTREG output for precipitation, air temperature and relative humidity with a spatial resolution of 12.5 km x 12.5 km, which is common for regional climate models. Thus, this regionalisation allows comparing statistical to dynamical climate model outputs. The HYRAS data set was developed by the German Meteorological Service within the German research program KLIWAS (www.kliwas.de) and consists of daily gridded data for Germany and its neighbouring river catchment areas. It has a spatial resolution of 5 km x 5 km for the entire domain for the hydro-meteorological elements precipitation, air temperature and relative humidity and covers the period of 1951 to 2006. After conservative remapping, the HYRAS data set is also suitable for the validation of climate models. The presentation consists of two parts presenting the current state of the adaptation of the HYRAS regionalisation methods to the statistical regional climate model WETTREG: First, an overview of the HYRAS data set and the regionalisation

  18. Heuristic method for searches on large data-sets organised using network models

    NASA Astrophysics Data System (ADS)

    Ruiz-Fernández, D.; Quintana-Pacheco, Y.

    2016-05-01

    Searches on large data-sets have become an important issue in recent years. An alternative, which has achieved good results, is the use of methods relying on data mining techniques, such as cluster-based retrieval. This paper proposes a heuristic search that is based on an organisational model that reflects similarity relationships among data elements. The search is guided by using quality estimators of model nodes, which are obtained by the progressive evaluation of the given target function for the elements associated with each node. The results of the experiments confirm the effectiveness of the proposed algorithm. High-quality solutions are obtained evaluating a relatively small percentage of elements in the data-sets.

  19. Are cosmological data sets consistent with each other within the Λ cold dark matter model?

    NASA Astrophysics Data System (ADS)

    Raveri, Marco

    2016-02-01

    We use a complete and rigorous statistical indicator to measure the level of concordance between cosmological data sets, without relying on the inspection of the marginal posterior distribution of some selected parameters. We apply this test to state of the art cosmological data sets, to assess their agreement within the Λ cold dark matter model. We find that there is a good level of concordance between all the experiments with one noticeable exception. There is substantial evidence of tension between the cosmic microwave background temperature and polarization measurements of the Planck satellite and the data from the CFHTLenS weak lensing survey even when applying ultraconservative cuts. These results robustly point toward the possibility of having unaccounted systematic effects in the data, an incomplete modeling of the cosmological predictions or hints toward new physical phenomena.

  20. Effects of a Process-Oriented Goal Setting Model on Swimmer’s Performance

    PubMed Central

    Simões, Paulo; Vasconcelos-Raposo, José; Silva, António; Fernandes, Helder M.

    2012-01-01

    The aim of this work was to study the impact of the implementation of a mental training program on swimmers’ chronometric performance, with national and international Portuguese swimmers, based on the goal setting model proposed by Vasconcelos-Raposo (2001). This longitudinal study comprised a sample of nine swimmers (four male and five female) aged between fourteen and twenty, with five to eleven years of competitive experience. All swimmers were submitted to an evaluation system during two years. The first season involved the implementation of the goal setting model, and the second season was only evaluation, totaling seven assessments over the two years. The main results showed a significant improvement in chronometric performance during psychological intervention, followed by a reduction in swimmers’ performance in the second season, when there was no interference from the investigators (follow-up). PMID:23486284

  1. A set of exactly solvable Ising models with half-odd-integer spin

    NASA Astrophysics Data System (ADS)

    Rojas, Onofre; de Souza, S. M.

    2009-03-01

    We present a set of exactly solvable Ising models, with half-odd-integer spin-S on a square-type lattice including a quartic interaction term in the Hamiltonian. The particular properties of the mixed lattice, associated with mixed half-odd-integer spin-(S,1/2) and only nearest-neighbor interaction, allow us to map this system either onto a purely spin-1/2 lattice or onto a purely spin-S lattice. By imposing the condition that the mixed half-odd-integer spin-(S,1/2) lattice must have an exact solution, we found a set of exact solutions that satisfy the free-fermion condition of the eight-vertex model. The number of solutions for a general half-odd-integer spin-S is given by S+1/2. Therefore we conclude that this transformation is equivalent to a simple spin transformation which is independent of the coordination number.
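
    For reference, the free-fermion condition of the eight-vertex model mentioned above can be written in the standard textbook form, with the eight Boltzmann vertex weights (this is the conventional statement of the condition, not the paper's spin-S parametrization):

    ```latex
    \[
      \omega_1\omega_2 + \omega_3\omega_4 \;=\; \omega_5\omega_6 + \omega_7\omega_8 ,
      \qquad \text{equivalently} \qquad
      a\,a' + b\,b' \;=\; c\,c' + d\,d' ,
    \]
    ```

    where a, a', b, b', c, c', d, d' (or omega_1, ..., omega_8) denote the Boltzmann weights of the eight allowed vertex configurations.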

  2. A 300 kyr Record of Geomagnetic Excursions and Paleointensity From the Irminger Basin: Candidates for Mono Lake, Laschamp, Iceland Basin, Jamaica and Pringle Falls?

    NASA Astrophysics Data System (ADS)

    Channell, J. E.

    2004-12-01

    Sediments recovered at ODP Site 919, off east Greenland, record geomagnetic directional excursions at 33 ka and 40 ka (Mono Lake and Laschamp), and at 187 ka (Iceland Basin), 208 ka (Jamaica?) and at 220 ka (Pringle Falls). U-channel records are augmented by 1-cm discrete samples collected back-to-back alongside the u-channel troughs. Deconvolution of the u-channel records yields records that can be closely matched to the discrete sample data. The age-model based on planktic oxygen isotope data (St. John et al., Marine Geology, in press) is consistent with the relative paleointensity record and the recognition of Ash Layer 2 (55 ka). The results indicate that the Mono Lake and Laschamp excursions, and the Iceland Basin and Pringle Falls (and perhaps also Jamaica), are distinct excursions, rarely recorded together in individual stratigraphic sections. Why are they recorded at ODP Site 919? Mean sedimentation rates are 22 cm/kyr in MIS 3 where Mono Lake/Laschamp are recorded, but sedimentation rates do not appear to be especially high in MIS 7 (13 cm/kyr) where candidates for Iceland Basin/Jamaica/Pringle Falls are recorded.

  3. Analysis of Atmospheric Aerosol Data Sets and Application of Radiative Transfer Models to Compute Aerosol Effects

    NASA Technical Reports Server (NTRS)

    Schmid, Beat; Bergstrom, Robert W.; Redemann, Jens

    2002-01-01

    This report is the final report for "Analysis of Atmospheric Aerosol Data Sets and Application of Radiative Transfer Models to Compute Aerosol Effects". It is a bibliographic compilation of 29 peer-reviewed publications (published, in press or submitted) produced under this Cooperative Agreement and 30 first-authored conference presentations. The tasks outlined in the various proposals are listed below with a brief comment as to the research performed. Copies of title/abstract pages of peer-reviewed publications are attached.

  4. Multiple data sets and modelling choices in a comparative LCA of disposable beverage cups.

    PubMed

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2014-10-01

    This study used multiple data sets and modelling choices in an environmental life cycle assessment (LCA) to compare typical disposable beverage cups made from polystyrene (PS), polylactic acid (PLA; bioplastic) and paper lined with bioplastic (biopaper). Incineration and recycling were considered as waste processing options, and for the PLA and biopaper cup also composting and anaerobic digestion. Multiple data sets and modelling choices were systematically used to calculate average results and the spread in results for each disposable cup in eleven impact categories. The LCA results of all combinations of data sets and modelling choices consistently identify three processes that dominate the environmental impact: (1) production of the cup's basic material (PS, PLA, biopaper), (2) cup manufacturing, and (3) waste processing. The large spread in results for impact categories strongly overlaps among the cups, however, and therefore does not allow a preference for one type of cup material. Comparison of the individual waste treatment options suggests some cautious preferences. The average waste treatment results indicate that recycling is the preferred option for PLA cups, followed by anaerobic digestion and incineration. Recycling is slightly preferred over incineration for the biopaper cups. There is no preferred waste treatment option for the PS cups. Taking into account the spread in waste treatment results for all cups, however, none of these preferences for waste processing options can be justified. The only exception is composting, which is least preferred for both PLA and biopaper cups. Our study illustrates that using multiple data sets and modelling choices can lead to considerable spread in LCA results. This makes comparing products more complex, but the outcomes more robust. PMID:25037049

  5. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
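
    A minimal example of this kind of HMM segmentation, using the hmmlearn package rather than the authors' implementation: a Gaussian HMM is fitted to a univariate geophysical time series and the most likely hidden-state (mode) sequence is recovered. The number of states and the data layout are illustrative assumptions.

    ```python
    # Minimal HMM segmentation of a univariate time series with hmmlearn.
    # The number of states is an illustrative assumption, not a tuned value.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def segment_series(series, n_states=3):
        X = np.asarray(series, float).reshape(-1, 1)   # (n_samples, n_features)
        model = GaussianHMM(n_components=n_states, covariance_type="full",
                            n_iter=200, random_state=0)
        model.fit(X)
        states = model.predict(X)        # most likely hidden state per sample
        return model, states
    ```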

  6. Software tools that facilitate kinetic modelling with large data sets: an example using growth modelling in sugarcane.

    PubMed

    Uys, L; Hofmeyr, J H S; Snoep, J L; Rohwer, J M

    2006-09-01

    A solution to manage cumbersome data sets associated with large modelling projects is described. A kinetic model of sucrose accumulation in sugarcane is used to predict changes in sucrose metabolism with sugarcane internode maturity. This results in large amounts of output data to be analysed. Growth is simulated by reassigning maximal activity values, specific to each internode of the sugarcane plant, to parameter attributes of a model object. From a programming perspective, only one model definition file is required for the simulation software used; however, the amount of input data increases with each extra interrnode that is modelled, and likewise the amount of output data that is generated also increases. To store, manipulate and analyse these data, the modelling was performed from within a spreadsheet. This was made possible by the scripting language Python and the modelling software PySCeS through an embedded Python interpreter available in the Gnumeric spreadsheet program. PMID:16986323

  7. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era

    PubMed Central

    2015-01-01

    Background The foundation of best practice in health promotion is a robust theoretical base that informs design, implementation, and evaluation of interventions that promote the public’s health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. Objective The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. Methods To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The Twitter final dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. Results The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptions. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. The influence

  8. Testing and Application in Mission Critical Settings and Transmission, Siting, and Metrics Models Research

    SciTech Connect

    Rich Sedano-Regulatory Assistance Project; Mariana Uhrlaub

    2006-10-31

    The Distributed Generation: Testing and Application in Mission Critical Settings and Transmission, Siting, and Metrics Models Research grant has been in place for several years and has accomplished all the objectives and deliverables that were originally set forth in the proposal. The National Association of State Energy Officials (NASEO), the City of Portland, OR, Bureau of Environmental Services and the Regulatory Assistance Project (RAP) have been able to successfully monitor and evaluate DG applications in a wastewater treatment plant environment, develop a metrics model for new voluntary DG guidelines that could be used as a prototype, and through outreach and education venues provide the results of these projects to state, professional, and national organizations and their members addressing similar issues. This project had three specific tasks associated with it: (1) Field Research and Testing; (2) Metrics/Verification Model for DG Guidelines; and (3) Northeastern Transmission/Siting Data Research. Each task had its own set of challenges and lessons learned but overall there were many successes that will serve as learning opportunities in these technology areas. Continuing to share the outcomes of this project with a wider audience will be beneficial for all those involved in distributed generation and combined heat and power projects.

  9. Adaptive global training set selection for spectral estimation of printed inks using reflectance modeling.

    PubMed

    Eckhard, Timo; Valero, Eva M; Hernández-Andrés, Javier; Schnitzlein, Markus

    2014-02-01

    The performance of learning-based spectral estimation is greatly influenced by the set of training samples selected to create the reconstruction model. Training sample selection schemes can be categorized into global and local approaches. Most of the previously proposed global training schemes aim to reduce the number of training samples, or a selection of representative samples, to maintain the generality of the training dataset. This work relates to printed ink reflectance estimation for quality assessment in in-line print inspection. We propose what we believe is a novel global training scheme that models a large population of realistic printable ink reflectances. Based on this dataset, we used a recursive top-down algorithm to reject clusters of training samples that do not enhance the performance of a linear least-square regression (pseudoinverse-based estimation) process. A set of experiments with real camera response data of a 12-channel multispectral camera system illustrate the advantages of this selection scheme over some other state-of-the-art algorithms. For our data, our method of global training sample selection outperforms other methods in terms of estimation quality and, more importantly, can quickly handle large datasets. Furthermore, we show that reflectance modeling is a reasonable, convenient tool to generate large training sets for print inspection applications. PMID:24514188
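
    The pseudoinverse-based estimation underlying the selection scheme can be sketched as follows: a linear least-squares operator is learned from training pairs of camera responses and reflectances, and any candidate training subset can be scored by the reconstruction error it yields on held-out samples. The recursive top-down cluster rejection of the paper is only indicated by the scoring function, not reproduced.

    ```python
    # Pseudoinverse (linear least-squares) reflectance estimation plus a simple
    # scoring function for candidate training subsets. The paper's recursive
    # cluster-rejection logic is not reproduced here.
    import numpy as np

    def learn_estimator(train_responses, train_reflectances):
        """train_responses: (n, n_channels); train_reflectances: (n, n_wavelengths)."""
        W, *_ = np.linalg.lstsq(train_responses, train_reflectances, rcond=None)
        return W                                   # (n_channels, n_wavelengths)

    def estimate(W, responses):
        return responses @ W                       # reconstructed reflectances

    def subset_score(train_idx, responses, reflectances, val_idx):
        """Mean RMSE on a validation split when training only on train_idx."""
        W = learn_estimator(responses[train_idx], reflectances[train_idx])
        err = estimate(W, responses[val_idx]) - reflectances[val_idx]
        return np.sqrt((err ** 2).mean())
    ```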

  10. Northern Russian chironomid-based modern summer temperature data set and inference models

    NASA Astrophysics Data System (ADS)

    Nazarova, Larisa; Self, Angela E.; Brooks, Stephen J.; van Hardenbroek, Maarten; Herzschuh, Ulrike; Diekmann, Bernhard

    2015-11-01

    West and East Siberian data sets and 55 new sites were merged based on the high taxonomic similarity, and the strong relationship between mean July air temperature and the distribution of chironomid taxa in both data sets compared with other environmental parameters. Multivariate statistical analysis of chironomid and environmental data from the combined data set consisting of 268 lakes, located in northern Russia, suggests that mean July air temperature explains the greatest amount of variance in chironomid distribution compared with other measured variables (latitude, longitude, altitude, water depth, lake surface area, pH, conductivity, mean January air temperature, mean July air temperature, and continentality). We established two robust inference models to reconstruct mean summer air temperatures from subfossil chironomids based on ecological and geographical approaches. The North Russian 2-component WA-PLS model (RMSEP_jack = 1.35 °C, r²_jack = 0.87) can be recommended for application in palaeoclimatic studies in northern Russia. Based on the distinctive chironomid fauna and climatic regime of Kamchatka, the Far East 2-component WA-PLS model (RMSEP_jack = 1.3 °C, r²_jack = 0.81) has potentially better applicability there.
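
    The jackknifed skill scores quoted above (RMSEP_jack and r²_jack) are typically obtained by leave-one-out cross-validation of the transfer function. The sketch below shows that computation with scikit-learn's PLSRegression standing in for the WA-PLS model actually used in the study.

    ```python
    # Leave-one-out (jackknife) RMSEP and r^2 for a taxa-temperature transfer
    # function. PLSRegression is a stand-in for WA-PLS, not the study's model.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def jackknife_skill(taxa_abundances, july_temperature, n_components=2):
        X = np.asarray(taxa_abundances, float)     # lakes x taxa (e.g. % abundances)
        y = np.asarray(july_temperature, float)    # observed mean July air T (deg C)
        model = PLSRegression(n_components=n_components)
        y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut()).ravel()
        rmsep = np.sqrt(np.mean((y - y_pred) ** 2))
        r2 = np.corrcoef(y, y_pred)[0, 1] ** 2
        return rmsep, r2
    ```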

  11. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R_n(r) is the mean-square magnetic induction represented by degree-n spherical harmonic coefficients of the internal scalar potential, averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core-surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree-1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 × 10^22 A m^2 rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ^2 with 2n+1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤ 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
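
    In standard notation, the two relations behind this abstract are the spatial (Lowes-Mauersberger) power spectrum computed from the Gauss coefficients and the McLeod's Rule form assumed for the core-surface spectrum, with K a constant calibrated against the non-dipole field. This is a reconstruction in conventional geomagnetism notation, not a quotation of the paper's equations:

    ```latex
    \[
      R_n(r) \;=\; (n+1)\left(\frac{a}{r}\right)^{2n+4}
               \sum_{m=0}^{n}\Big[(g_n^m)^2 + (h_n^m)^2\Big],
      \qquad
      \langle R_{nc}(c)\rangle \;\simeq\; \frac{K}{2n+1},
      \quad 1 < n \le N_E ,
    \]
    ```

    where a is Earth's mean radius, c the core radius, and g_n^m, h_n^m the Gauss coefficients of the internal field model.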

  12. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    NASA Astrophysics Data System (ADS)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    A Circulating Fluidized Bed Boiler (CFB) is a device for generating steam by burning fossil fuels in a furnace operating under a special hydrodynamic condition. The autocatalytic set approach has provided a graphical model of the chemical reactions that occur during the combustion process in a CFB. Eight important chemical substances, known as species, are represented as nodes, and catalytic relationships between nodes are represented by the edges in the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also exist during the process. The catalytic relationships among the species in the model are discussed. The result reveals that the modified model is able to give a fuller explanation of the relationships among the species during the process at initial time t.
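
    The graphical idea can be reproduced in a few lines with networkx: species become nodes, a directed edge u -> v records that u catalyses (or feeds) a reaction producing v, and directed cycles mark candidate autocatalytic sub-sets. The species list and edges below are illustrative placeholders, not the reaction set of the paper.

    ```python
    # Illustrative graphical autocatalytic-set model: species as nodes, catalytic
    # relationships as directed edges. Species and edges are placeholders, not the
    # paper's eight-species CFB reaction set.
    import networkx as nx

    G = nx.DiGraph()
    species = ["C", "O2", "CO", "CO2", "H2O", "SO2", "CaO", "CaSO4"]
    G.add_nodes_from(species)
    G.add_edges_from([
        ("C", "CO"), ("CO", "CO2"), ("C", "CO2"), ("O2", "CO2"),
        ("O2", "SO2"), ("CaO", "CaSO4"), ("SO2", "CaSO4"), ("O2", "H2O"),
        ("CO2", "CO"),   # Boudouard-type feedback: CO2 + C -> 2 CO over hot char
    ])

    # Strongly connected components with more than one node indicate closed
    # catalytic loops; simple cycles are candidate autocatalytic sub-sets.
    loops = [c for c in nx.strongly_connected_components(G) if len(c) > 1]
    print("catalytic loops:", loops)
    print("cycles:", list(nx.simple_cycles(G)))
    ```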

  13. Terminator field-aligned current system: A new finding from model-assimilated data set (MADS)

    NASA Astrophysics Data System (ADS)

    Zhu, L.; Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Gardner, L. C.; Eccles, J. V.; Rice, D.

    2013-12-01

    Physics-based data assimilation models have been recognized by the space science community as the most accurate approach to specify and forecast the space weather of the solar-terrestrial environment. The model-assimilated data sets (MADS) produced by these models constitute an internally consistent time series of global three-dimensional fields whose accuracy can be estimated. Because of its internal consistency of physics and its complete description of the status of global systems, the MADS has also been a powerful tool to identify systematic errors in measurements, reveal missing physics in physical models, and discover important dynamical physical processes that are inadequately observed or missed by measurements due to observational limitations. In the past years, we developed a data assimilation model for high-latitude ionospheric plasma dynamics and electrodynamics. With a set of physical models, an ensemble Kalman filter, and the ingestion of data from multiple observations, the data assimilation model can produce a self-consistent time series of complete descriptions of the global high-latitude ionosphere, which include the convection electric field, horizontal and field-aligned currents, conductivity, and 3-D plasma densities and temperatures. In this presentation, we will show a new field-aligned current system discovered from the analysis of the MADS produced by our data assimilation model. This new current system appears and develops near the ionospheric terminator. The dynamical features of this current system will be described and its connection to the active role of the ionosphere in the M-I coupling will be discussed.
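
    The analysis step of a stochastic ensemble Kalman filter of the kind mentioned above can be written compactly as below; this is the generic textbook update (perturbed observations, covariances estimated from the ensemble), not the ionospheric assimilation model itself.

    ```python
    # Generic stochastic ensemble Kalman filter analysis step: the ensemble is
    # pulled toward perturbed observations using covariances estimated from the
    # ensemble itself. Textbook form, not the model described above.
    import numpy as np

    def enkf_analysis(ensemble, obs, H, obs_err_std, rng=None):
        """ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
        rng = rng if rng is not None else np.random.default_rng(0)
        n_obs, n_members = len(obs), ensemble.shape[1]
        R = np.diag(np.full(n_obs, obs_err_std ** 2))

        A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
        HX = H @ ensemble
        HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies

        P_xy = A @ HA.T / (n_members - 1)                     # cross covariance
        P_yy = HA @ HA.T / (n_members - 1) + R
        K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain

        # Perturbed observations (stochastic EnKF variant)
        Y = obs[:, None] + rng.normal(0.0, obs_err_std, size=(n_obs, n_members))
        return ensemble + K @ (Y - HX)
    ```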

  14. Data, models, and views: towards integration of diverse numerical model components and data sets for scientific and public dissemination

    NASA Astrophysics Data System (ADS)

    Hofmeister, Richard; Lemmen, Carsten; Nasermoaddeli, Hassan; Klingbeil, Knut; Wirtz, Kai

    2015-04-01

    Data and models for describing coastal systems span a diversity of disciplines, communities, ecosystems, regions and techniques. Previous attempts at unifying data exchange, coupling interfaces, or metadata information have not been successful. We introduce the new Modular System for Shelves and Coasts (MOSSCO, http://www.mossco.de), a novel coupling framework that enables the integration of a diverse array of models and data from different disciplines relating to coastal research. In the MOSSCO concept, the integrating framework imposes very few restrictions on contributed data or models; in fact, no distinction is made between data and models. The few requirements are: (1) coupleability in principle, i.e. access to I/O and timing information in submodels, which has recently been referred to as the Basic Model Interface (BMI); (2) open source/open data access and licencing; and (3) communication of metadata, such as spatiotemporal information, naming conventions, and physical units. These requirements suffice to integrate different models and data sets into the MOSSCO infrastructure and subsequently build a modular integrated modeling tool that can span a diversity of processes and domains. We demonstrate how diverse coastal system constituents were integrated into this modular framework and how we deal with the diverging development of constituent data sets and models at external institutions. Finally, we show results from simulations with the fully coupled system using OGC WebServices in the WiMo geoportal (http://kofserver3.hzg.de/wimo), from where stakeholders can view the simulation results for further dissemination.
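
    Requirement (1) above amounts to a small, uniform calling contract for every submodel. The sketch below mirrors the spirit of such an interface (timing plus I/O access); it is not the official Basic Model Interface specification, and the method names are illustrative.

```python
# Sketch of a minimal coupling contract of the kind requirement (1) describes:
# a submodel exposes its timing and its input/output fields so a framework can drive it.
from abc import ABC, abstractmethod

class CoupleableModel(ABC):
    @abstractmethod
    def initialize(self, config_file: str) -> None:
        """Read configuration and allocate the model's internal state."""

    @abstractmethod
    def update(self) -> None:
        """Advance the model by one internal time step."""

    @abstractmethod
    def get_current_time(self) -> float:
        """Return the model's current time (units handled via metadata)."""

    @abstractmethod
    def get_value(self, name: str):
        """Return the named output field."""

    @abstractmethod
    def set_value(self, name: str, values) -> None:
        """Set the named input field supplied by another component."""

    @abstractmethod
    def finalize(self) -> None:
        """Release resources at the end of the coupled run."""
```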

  15. Modeling Acquiescence in Measurement Models for Two Balanced Sets of Items.

    ERIC Educational Resources Information Center

    Billiet, Jaak B.; McClendon, McKee J.

    2000-01-01

    Studied the measurement of acquiescence in balanced scales using a structural equation modeling approach with subsamples of 986 and 992 from the same population of Belgian adults interviewed about ethnic prejudice. The strong relation in both populations of the latent style factor with a variable "sum of agreements" supports the idea that is…

  16. Use of an Anatomical Scalar to Control for Sex-Based Size Differences in Measures of Hyoid Excursion during Swallowing

    ERIC Educational Resources Information Center

    Molfenter, Sonja M.; Steele, Catriona M.

    2014-01-01

    Purpose: Traditional methods for measuring hyoid excursion from dynamic videofluoroscopy recordings involve calculating changes in position in absolute units (mm). This method shows a high degree of variability across studies but agreement that greater hyoid excursion occurs in men than in women. Given that men are typically taller than women, the…

  17. Water Quality Modelling - Developing a Data Input Set Based on an Emission Inventory

    NASA Astrophysics Data System (ADS)

    Christoffels, E.

    2009-04-01

    To enable precise characterisation of the immission situation for watercourses, it is first necessary to characterise the emissions in the catchment area. The data required to yield useful information on emissions can be collected via monitoring (e.g. at waste water treatment plant outlets, run-off of surface waters, run-off of soil moisture content) and can also be generated by running a suitable model (e.g. by sewer simulation modelling). The combined approach of monitoring and modelling permits development of an emission inventory. This inventory can be used as a data input set to run a water quality model for rivers which, when used in conjunction with valid methods of river monitoring (routine spot check program, online monitoring network, sediment studies), provides valuable information about the immission situation (immission inventory). We present how the Erftverband, a water management association operating in the Erft river catchment in Germany, has established an emission inventory for the entire Erft basin. This inventory provides essential data input to run the water quality model of the German Water Association, generally known as the DWA Water Quality Model. We demonstrate that, using this inventory, the DWA Water Quality Model, applied to the Erft river basin as the Erft water quality model, constitutes a valuable tool in support of water management planning.

  18. The Carnian (Late Triassic) carbon isotope excursion: new insights from the terrestrial realm

    NASA Astrophysics Data System (ADS)

    Miller, Charlotte; Kürschner, Wolfram; Peterse, Francien; Baranyi, Viktoria; Reichart, Gert-Jan

    2016-04-01

    The geological record contains evidence for numerous pronounced perturbations in the global carbon cycle, some of which are associated with eruptions from large igneous provinces (LIP) and, consequently, ocean acidification and mass extinction. In the Carnian (Late Triassic), evidence from sedimentology and fossil pollen points to a significant change in climate, resulting in biotic turnover, during a period termed the 'Carnian Pluvial Event' (CPE). Additionally, during the Carnian, large volumes of flood basalts were erupted from the Wrangellia LIP (western North America). Evidence from the marine realm suggests a fundamental relationship between the CPE, a global 'wet' period, and the injection of light carbon into the atmosphere from the LIP. Here we provide the first evidence from the terrestrial realm of a significant negative δ13C excursion through the CPE, recorded in the sedimentary archive of the Wiscombe Park Borehole, Devon (UK). Both total organic matter and plant leaf waxes record a gradual carbon isotope excursion of ~-5‰ during this time interval. Our data provide evidence for the global nature of this isotope excursion, supporting the hypothesis that the excursion was likely the result of an injection of light carbon into the atmosphere from the Wrangellia LIP.

  19. On the excursions of drifted Brownian motion and the successive passage times of Brownian motion

    NASA Astrophysics Data System (ADS)

    Abundo, Mario

    2016-09-01

    By using the law of the excursions of Brownian motion with drift, we find the distribution of the nth passage time of Brownian motion through a straight line S(t) = a + bt. In the special case when b = 0, we extend the result to a space-time transformation of Brownian motion.
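
    A brute-force Monte Carlo version of the quantity studied above can help fix ideas: simulate one Brownian path and record its successive up-crossings of the line S(t) = a + bt. This is purely illustrative (the paper derives the law analytically), and the step size, horizon and parameter values are arbitrary choices.

```python
# Monte Carlo sketch: successive passage times of standard Brownian motion
# through the line S(t) = a + b*t.
import numpy as np

def passage_times(a=1.0, b=0.1, dt=1e-3, t_max=50.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    t = np.arange(1, n + 1) * dt
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))   # discretised Brownian path
    above = W >= a + b * t
    # a "passage" is recorded each time the path crosses the line from below
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return t[crossings]

print(passage_times()[:5])   # first few successive passage times of one sample path
```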

  20. Multicultural Group Work on Field Excursions to Promote Student Teachers' Intercultural Competence

    ERIC Educational Resources Information Center

    Brendel, Nina; Aksit, Fisun; Aksit, Selahattin; Schrüfer, Gabriele

    2016-01-01

    As a response to the intercultural challenges of Geography Education, this study seeks to determine factors fostering intercultural competence of student teachers. Based on a one-week multicultural field excursion of eight German and eight Turkish students in Kayseri (Turkey) on Education for Sustainable Development, we used qualitative interviews…

  1. Image reconstructions from super-sampled data sets with resolution modeling in PET imaging

    PubMed Central

    Li, Yusheng; Matej, Samuel; Metzler, Scott D.

    2014-01-01

    Purpose: Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. Methods: The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on this model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Results: Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. The
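
    For readers unfamiliar with MLEM, the update at the heart of such a reconstruction can be written for a generic stacked system y = Ax, where A would combine the tomographic, downsampling and shifting matrices described above. The sketch below is a textbook MLEM iteration, not the authors' implementation.

```python
# Generic MLEM iteration for a super-sampled acquisition, with the shifted
# acquisitions stacked into one linear system y = A x.
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """A: (n_meas, n_vox) stacked system matrix, y: (n_meas,) stacked measured counts."""
    x = np.ones(A.shape[1])                     # uniform initial image
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)      # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```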

  2. Early and middle Matuyama geomagnetic excursions recorded in the Chinese loess-paleosol sediments

    NASA Astrophysics Data System (ADS)

    Yang, Tianshui; Hyodo, Masayuki; Yang, Zhenyu; Ding, Lin; Fu, Jianli; Mishima, Toshiaki

    2007-07-01

    A detailed paleomagnetic and rock-magnetic investigation of the early and middle Matuyama loess-paleosol sediments has been carried out at the Baoji section, Shaanxi province, southern Chinese Loess Plateau. Our new magnetostratigraphy revises the position of the lower Olduvai boundary from L27 to S26. Seven short-lived geomagnetic excursions, tentatively named E1, E2, E3, E4, E5, E6, and E7, have been recognized in L13, S22, L26, L27, S29, and the upper and middle parts of L32, respectively. Results of the anisotropy of low-field magnetic susceptibility (AMS) show that the studied loess-paleosol sediments retain the primary sedimentary fabric. Rock magnetic experiments reveal that the sediments from the excursional and polarity transitional intervals have the same magnetic characteristics as those from the surrounding normal and reversed polarity intervals. Assuming a constant accumulation rate between polarity boundaries, these seven excursions are estimated to date to about 1.11 Ma (E1), 1.58 Ma (E2), 1.92 Ma (E3), 2.11 Ma (E4), 2.25 Ma (E5), 2.35 Ma (E6), and 2.42 Ma (E7). E1 and E2 in the middle Matuyama Chron can be correlated with the Punaruu and Stage 54 (Gilsa) excursions, respectively. E4, E5, and E7 in the early Matuyama Chron can be correlated with the Réunion II, Réunion I, and cryptochron C2r.2r-1 (X-subchron), respectively. E3 in the lower Olduvai subchron and E6 in the early Matuyama Chron have no comparable events. At present they can only be correlated with the anomalous directions observed in the Osaka Bay core (Biswas et al., 1999). Therefore, further investigations are necessary to support their global occurrence. The present result, together with the two late Matuyama excursions dated at about 0.89 Ma and 0.92 Ma (Yang et al., 2004), shows that the Baoji section yields at least nine Matuyama excursions which, along with the results of the study, suggests that eight excursions occur at 0.9-2.2 Ma (Channell et al., 2002), thereby

  3. Engaging students in research learning experiences through hydrology field excursions and short films

    NASA Astrophysics Data System (ADS)

    Ewen, Tracy; Seibert, Jan

    2015-04-01

    One of the best ways to engage students and instill enthusiasm for hydrology is to expose them to hands-on learning. A focus on hydrology field research can be used to develop context-rich and active learning, and help solidify idealized learning where students are introduced to individual processes through textbook examples, often neglecting process interactions and an appreciation for the complexity of the system. We introduced a field course where hydrological measurement techniques are used to study processes such as snow hydrology and runoff generation, while also introducing students to field research and design of their own field project. Additionally, we produced short films of each of these research-based field excursions, with in-house film expertise. These films present a short overview of field methods applied in alpine regions and will be used for our larger introductory hydrology courses, exposing students to field research at an early stage, and for outreach activities, including for potential high school students curious about hydrology. In the field course, students design a low-budget experiment with the aim of going through the different steps of a 'real' scientific project, from formulating the research question to presenting their results. During the field excursions, students make discharge measurements in several alpine streams with a salt tracer to better understand the spatial characteristics of an alpine catchment, where source waters originate and how they contribute to runoff generation. Soil moisture measurements taken by students in this field excursion were used to analyze spatial soil moisture patterns in the alpine catchment and subsequently used in a publication. Another field excursion repeats a published experiment, where preferential soil flow paths are studied using a tracer and compared to previously collected data. For each field excursion, observational data collected by the students is uploaded to an online database we

  4. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    PubMed Central

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-01-01

    Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body’s resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot cope with the dimensionality and vagueness of patients’ data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies. PMID:26151207

  5. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    PubMed

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-01-01

    Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot cope with the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies. PMID:26151207

  6. Modeling the Formation Process of Grouping Stimuli Sets through Cortical Columns and Microcircuits to Feature Neurons

    PubMed Central

    Williamson, Adam

    2013-01-01

    A computational model of a self-structuring neuronal net is presented in which repetitively applied pattern sets induce the formation of cortical columns and microcircuits which decode distinct patterns after a learning phase. In a case study, it is demonstrated how specific neurons in a feature classifier layer become orientation selective if they receive bar patterns of different slopes from an input layer. The input layer is mapped to the feature classifier layer, and intertwined with it, by self-evolving neuronal microcircuits. In this topical overview, several models are discussed which indicate that the net formation converges in its functionality to a mathematical transform which maps the input pattern space to a feature-representing output space. The self-learning of the mathematical transform is discussed and its implications are interpreted. Model assumptions are deduced which serve as a guide for applying model-derived repetitive stimulus pattern sets to in vitro cultures of neuron ensembles to condition them to learn and execute a mathematical transform. PMID:24369455

  7. Beyond Maximum Independent Set: AN Extended Model for Point-Feature Label Placement

    NASA Astrophysics Data System (ADS)

    Haunert, Jan-Henrik; Wolff, Alexander

    2016-06-01

    Map labeling is a classical problem of cartography that has frequently been approached by combinatorial optimization. Given a set of features in the map and for each feature a set of label candidates, a common problem is to select an independent set of labels (that is, a labeling without label-label overlaps) that contains as many labels as possible and at most one label for each feature. To obtain solutions of high cartographic quality, the labels can be weighted and one can maximize the total weight (rather than the number) of the selected labels. We argue, however, that when maximizing the weight of the labeling, interdependences between labels are insufficiently addressed. Furthermore, in a maximum-weight labeling, the labels tend to be densely packed and thus the map background can be occluded too much. We propose extensions of an existing model to overcome these limitations. Since even without our extensions the problem is NP-hard, we cannot hope for an efficient exact algorithm for the problem. Therefore, we present a formalization of our model as an integer linear program (ILP). This allows us to compute optimal solutions in reasonable time, which we demonstrate for randomly generated instances.
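
    The classical weighted model that the paper extends can be stated as a small ILP: maximize total label weight, select at most one label per feature, and never select two overlapping labels together. The sketch below writes that base model with the PuLP library; the data structures are placeholders and none of the paper's extensions are included.

```python
# ILP sketch of the classical weighted point-feature label-placement model.
import pulp

def place_labels(weights, feature_candidates, overlapping_pairs):
    """weights: {candidate: weight}, feature_candidates: {feature: [candidates]},
    overlapping_pairs: iterable of (candidate, candidate) that may not both be chosen."""
    prob = pulp.LpProblem("label_placement", pulp.LpMaximize)
    x = {c: pulp.LpVariable(f"x_{c}", cat="Binary") for c in weights}
    prob += pulp.lpSum(weights[c] * x[c] for c in weights)      # maximise total label weight
    for cands in feature_candidates.values():                    # at most one label per feature
        prob += pulp.lpSum(x[c] for c in cands) <= 1
    for c1, c2 in overlapping_pairs:                              # independence (no overlaps)
        prob += x[c1] + x[c2] <= 1
    prob.solve()
    return [c for c in weights if x[c].value() == 1]
```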

  8. Modeling Neurovascular Coupling from Clustered Parameter Sets for Multimodal EEG-NIRS

    PubMed Central

    Talukdar, M. Tanveer; Frost, H. Robert; Diamond, Solomon G.

    2015-01-01

    Despite significant improvements in neuroimaging technologies and analysis methods, the fundamental relationship between local changes in cerebral hemodynamics and the underlying neural activity remains largely unknown. In this study, a data driven approach is proposed for modeling this neurovascular coupling relationship from simultaneously acquired electroencephalographic (EEG) and near-infrared spectroscopic (NIRS) data. The approach uses gamma transfer functions to map EEG spectral envelopes that reflect time-varying power variations in neural rhythms to hemodynamics measured with NIRS during median nerve stimulation. The approach is evaluated first with simulated EEG-NIRS data and then by applying the method to experimental EEG-NIRS data measured from 3 human subjects. Results from the experimental data indicate that the neurovascular coupling relationship can be modeled using multiple sets of gamma transfer functions. By applying cluster analysis, statistically significant parameter sets were found to predict NIRS hemodynamics from EEG spectral envelopes. All subjects were found to have significant clustered parameters (P < 0.05) for EEG-NIRS data fitted using gamma transfer functions. These results suggest that the use of gamma transfer functions followed by cluster analysis of the resulting parameter sets may provide insights into neurovascular coupling in human neuroimaging data. PMID:26089979
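
    The mapping described above, from an EEG spectral envelope to a predicted hemodynamic response, is essentially a convolution with a gamma-shaped kernel. The sketch below shows that operation with arbitrary placeholder parameters; these are not the fitted or clustered parameter sets from the study.

```python
# Sketch: predict a NIRS-like hemodynamic signal from an EEG band-power envelope
# by convolution with a gamma-shaped transfer function (placeholder parameters).
import numpy as np
from scipy.stats import gamma as gamma_dist

def gamma_transfer(envelope, fs, shape=6.0, scale=0.9, gain=1.0):
    """envelope: EEG band-power time series sampled at fs (Hz)."""
    t = np.arange(0, 25.0, 1.0 / fs)                        # 25 s kernel support
    kernel = gain * gamma_dist.pdf(t, a=shape, scale=scale)
    # discrete approximation of the convolution integral (dt = 1/fs)
    return np.convolve(envelope, kernel, mode="full")[: len(envelope)] / fs

fs = 4.0                                                     # Hz, placeholder sampling rate
envelope = np.random.default_rng(0).random(int(120 * fs))    # synthetic 2-minute envelope
predicted_hemodynamics = gamma_transfer(envelope, fs)
```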

  9. Environmental forcing of terrestrial carbon isotope excursion amplification across five Eocene hyperthermals

    NASA Astrophysics Data System (ADS)

    Bowen, G. J.; Abels, H.

    2015-12-01

    Abrupt changes in the isotope composition of exogenic carbon pools accompany many major episodes of global change in the geologic record. The global expression of this change in substrates that reflect multiple carbon pools provides important evidence that many events reflect persistent, global redistribution of carbon between reduced and oxidized stocks. As the diversity of records documenting any event grows, however, discrepancies in the expression of carbon isotope change among substrates are almost always revealed. These differences in magnitude, pace, and pattern of change can complicate interpretations of global carbon redistribution, but under ideal circumstances can also provide additional information on changes in specific environmental and biogeochemical systems that accompanied the global events. Here we evaluate possible environmental influences on new terrestrial records of the negative carbon isotope excursions (CIEs) associated with multiple hyperthermals of the Early Eocene, which show a common pattern of amplified carbon isotope change in terrestrial paleosol carbonate records relative to that recorded in marine substrates. Scaling relationships between climate and carbon-cycle proxies suggest that the climatic (temperature) impact of each event scaled proportionally with the magnitude of its marine CIE, likely implying that all events involved release of reduced carbon with a similar isotopic composition. Amplification of the terrestrial CIEs, however, does not scale with event magnitude, being proportionally less for the first, largest event (the PETM). We conduct a sensitivity test of a coupled plant-soil carbon isotope model to identify conditions that could account for the observed CIE scaling. At least two possibilities consistent with independent lines of evidence emerge: first, varying effects of pCO2 change on photosynthetic carbon isotope discrimination under changing background pCO2, and second, contrasting changes in regional

  10. Timing of Carbon isotope excursions during the late Triassic and early Jurassic

    NASA Astrophysics Data System (ADS)

    Yager, J. A.; West, A. J.; Corsetti, F. A.; Berelson, W.; Bottjer, D. J.; Rosas, S.

    2015-12-01

    The emplacement of the Central Atlantic Magmatic Province during the late Triassic and early Jurassic is implicated in the end-Triassic mass extinction and is associated with dramatic increases in atmospheric pCO2. Changes in the isotopic composition of CO2 as recorded on land and in the ocean have been observed in many sections worldwide, but the timing and causes of the changes are debated. Recent high-resolution ash bed dating (Schaltegger et al., 2008; Schoene et al., 2010; Guex et al., 2012; Wotzlaw et al., 2014) from a continuous Rhaetian-Hettangian section near Levanto, Peru, provides an opportunity to understand the duration of these carbon cycle disruptions, and ammonite biostratigraphy allows comparison to other sections. We measured % organic carbon and % inorganic carbon along with δ13Corganic and δ13Ccarbonate at the section near Levanto. We find a series of δ13Corganic excursions that are similar to those found in other Triassic-Jurassic successions, both from the Tethyan (St. Audrie's Bay, UK) and Panthalassic oceans (Kennecott Point, CAN), pointing to the global extent of these changes. At Levanto, we can identify a brief, initial positive carbon isotope excursion followed first by a sharp negative excursion that coincides with the last appearance of Triassic ammonites, and then a more extended positive carbon isotope excursion that extends into the initial Jurassic recovery. Using the ash bed dates from Levanto, we are able for the first time to estimate robustly the duration of each carbon isotope excursion across the Triassic-Jurassic interval. These estimates of durations aid in our understanding of the timing and causes of carbon cycle perturbations associated with the emplacement of CAMP and its relation to mass extinction.

  11. Into the deep: the functionality of mesopelagic excursions by an oceanic apex predator.

    PubMed

    Howey, Lucy A; Tolentino, Emily R; Papastamatiou, Yannis P; Brooks, Edward J; Abercrombie, Debra L; Watanabe, Yuuki Y; Williams, Sean; Brooks, Annabelle; Chapman, Demian D; Jordan, Lance K B

    2016-08-01

    Comprehension of ecological processes in marine animals requires information regarding dynamic vertical habitat use. While many pelagic predators primarily associate with epipelagic waters, some species routinely dive beyond the deep scattering layer. Actuation for exploiting these aphotic habitats remains largely unknown. Recent telemetry data from oceanic whitetip sharks (Carcharhinus longimanus) in the Atlantic show a strong association with warm waters (>20°C) less than 200 m. Yet, individuals regularly exhibit excursions into the meso- and bathypelagic zone. In order to examine deep-diving behavior in oceanic whitetip sharks, we physically recovered 16 pop-up satellite archival tags and analyzed the high-resolution depth and temperature data. Diving behavior was evaluated in the context of plausible functional behavior hypotheses including interactive behaviors, energy conservation, thermoregulation, navigation, and foraging. Mesopelagic excursions (n = 610) occurred throughout the entire migratory circuit in all individuals, with no indication of site specificity. Six depth-versus-time descent and ascent profiles were identified. Descent profile shapes showed little association with examined environmental variables. Contrastingly, ascent profile shapes were related to environmental factors and appear to represent unique behavioral responses to abiotic conditions present at the dive apex. However, environmental conditions may not be the sole factors influencing ascents, as ascent mode may be linked to intentional behaviors. While dive functionality remains unconfirmed, our study suggests that mesopelagic excursions relate to active foraging behavior or navigation. Dive timing, prey constituents, and dive shape support foraging as the most viable hypothesis for mesopelagic excursions, indicating that the oceanic whitetip shark may regularly survey extreme environments (deep depths, low temperatures) as a foraging strategy. At the apex of these deep

  12. Radiocarbon variability during the Laschamp excursion (ca. 41 ka) based on Gulf of Mexico sediments

    NASA Astrophysics Data System (ADS)

    Williams, C. C.; Guilderson, T. P.

    2009-12-01

    The Laschamp excursion is a rapid (<2 ka) geomagnetic reversal approximately 41.25 ± 0.8 ka BP (based on the 2005 Greenland Ice Core Chronology (GICC05)) that is present in many terrestrial and marine sediment records. Due to changes in Earth's magnetic field during geomagnetic excursions and reversals, atmospheric 14C production is variable and can subsequently make 14C dating and calendar year calibration difficult. Although paleointensity lows during the Laschamp interval are useful in correlating sediment and ice core records to a common timescale, climate archives that lack these data require radiocarbon dating for temporal constraint. Yet, the 14C response during geomagnetic changes is not well understood. We present a new record of accelerator mass spectrometry (AMS) 14C dates combined with paleointensity data from core MD02-2551 from Orca Basin, Gulf of Mexico. High sedimentation rates (~60 cm/1000 yrs) allow for high-resolution sampling over the duration of the Laschamp interval. In this section of the core, sediments are oxic and massive, consistent with a hemipelagic depositional environment. The comparison of paleogeomagnetic data to 14C ages during the Laschamp excursion allows us to further investigate the geochemical 14C signal during magnetic excursions. Results exhibit a plateau in 14C ages, with high-frequency fluctuations superimposed, during the Laschamp interval, which supports the expectation of increased atmospheric 14C production during this geomagnetic excursion. We compare our results to a similar marine-derived record from the oxic non-laminated interval of Cariaco Basin with a similar albeit slightly lower sedimentation rate (~30 cm/1000 yrs).

  13. Cost accounting models used for price-setting of health services: an international review.

    PubMed

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals. PMID:25082465

  14. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  15. The Chronic Care Model and Diabetes Management in US Primary Care Settings: A Systematic Review

    PubMed Central

    Stellefson, Michael; Stopka, Christine

    2013-01-01

    Introduction The Chronic Care Model (CCM) uses a systematic approach to restructuring medical care to create partnerships between health systems and communities. The objective of this study was to describe how researchers have applied CCM in US primary care settings to provide care for people who have diabetes and to describe outcomes of CCM implementation. Methods We conducted a literature review by using the Cochrane database of systematic reviews, CINAHL, and Health Source: Nursing/Academic Edition and the following search terms: “chronic care model” (and) “diabet*.” We included articles published between January 1999 and October 2011. We summarized details on CCM application and health outcomes for 16 studies. Results The 16 studies included various study designs, including 9 randomized controlled trials, and settings, including academic-affiliated primary care practices and private practices. We found evidence that CCM approaches have been effective in managing diabetes in US primary care settings. Organizational leaders in health care systems initiated system-level reorganizations that improved the coordination of diabetes care. Disease registries and electronic medical records were used to establish patient-centered goals, monitor patient progress, and identify lapses in care. Primary care physicians (PCPs) were trained to deliver evidence-based care, and PCP office–based diabetes self-management education improved patient outcomes. Only 7 studies described strategies for addressing community resources and policies. Conclusion CCM is being used for diabetes care in US primary care settings, and positive outcomes have been reported. Future research on integration of CCM into primary care settings for diabetes management should measure diabetes process indicators, such as self-efficacy for disease management and clinical decision making. PMID:23428085

  16. A universal surface complexation framework for modeling proton binding onto bacterial surfaces in geologic settings

    USGS Publications Warehouse

    Borrok, D.; Turner, B.F.; Fein, J.B.

    2005-01-01

    Adsorption onto bacterial cell walls can significantly affect the speciation and mobility of aqueous metal cations in many geologic settings. However, a unified thermodynamic framework for describing bacterial adsorption reactions does not exist. This problem originates from the numerous approaches that have been chosen for modeling bacterial surface protonation reactions. In this study, we compile all currently available potentiometric titration datasets for individual bacterial species, bacterial consortia, and bacterial cell wall components. Using a consistent, four discrete site, non-electrostatic surface complexation model, we determine total functional group site densities for all suitable datasets, and present an averaged set of 'universal' thermodynamic proton binding and site density parameters for modeling bacterial adsorption reactions in geologic systems. Modeling results demonstrate that the total concentrations of proton-active functional group sites for the 36 bacterial species and consortia tested are remarkably similar, averaging 3.2 ± 1.0 (1σ) x 10(exp -4) moles/wet gram. Examination of the uncertainties involved in the development of proton-binding modeling parameters suggests that ignoring factors such as bacterial species, ionic strength, temperature, and growth conditions introduces relatively small error compared to the unavoidable uncertainty associated with the determination of cell abundances in realistic geologic systems. Hence, we propose that reasonable estimates of the extent of bacterial cell wall deprotonation can be made using averaged thermodynamic modeling parameters from all of the experiments that are considered in this study, regardless of bacterial species used, ionic strength, temperature, or growth condition of the experiment. The average site densities for the four discrete sites are 1.1 ± 0.7 x 10(exp -4), 9.1 ± 3.8 x 10(exp -5), 5.3 ± 2.1 x 10(exp -5), and 6.6 ± 3.0 x 10(exp -5) moles/wet gram bacteria for the sites with pKa values of 3
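
    The four-site, non-electrostatic model described above treats each site as an independent monoprotic acid. The sketch below shows the corresponding deprotonation calculation; the individual pKa values and the split of site densities are placeholders (the list quoted above is truncated), and only the ~3.2e-4 mol per wet gram total is taken from the abstract.

```python
# Sketch of a four-site, non-electrostatic proton-binding calculation:
# each site deprotonates independently according to its own pKa.
import numpy as np

def fraction_deprotonated(pH, pKa):
    """Fraction of a monoprotic site in the deprotonated form at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

pKa_values = np.array([3.1, 4.7, 6.6, 9.0])      # placeholders, not the paper's fitted values
site_densities = np.full(4, 3.2e-4 / 4)          # mol per wet gram, evenly split placeholder

pH = 6.0
deprotonated = np.sum(site_densities * fraction_deprotonated(pH, pKa_values))
print(f"deprotonated sites at pH {pH}: {deprotonated:.2e} mol per wet gram")
```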

  17. Setting up a groundwater recharge model for an arid karst system using time lapse camera data

    NASA Astrophysics Data System (ADS)

    Schulz, Stephan; de Rooij, Gerrit H.; Michelsen, Nils; Rausch, Randolf; Siebert, Christian; Schüth, Christoph; Merz, Ralf

    2015-04-01

    Groundwater is the principal water resource in most dryland areas. Therefore, its replenishment rate is of great importance for water management. The amount of groundwater recharge depends on the climatic conditions, but also on the geological conditions, soil properties and vegetation. In dryland areas, outcrops of karst aquifers often receive enhanced recharge rates compared to other geological settings. Especially in areas with exposed karst features like sinkholes or open shafts, rainfall accumulates in channels and discharges directly into the aquifer. Using the example of the As Sulb plateau in Saudi Arabia, this study introduces a cost-effective and robust method for recharge monitoring and modelling in karst outcrops. The measurement of discharge of a small catchment (4.0 x 10(exp 4) m(exp 2)) into a sinkhole, and hence the direct recharge into the aquifer, was carried out with a time lapse camera observing a v-notch weir. During the monitoring period of two rainy seasons (autumn 2012 to spring 2014), four recharge events were recorded. Afterwards, the recharge data, as well as proxy data on the drying of the sediment cover, were used to set up a conceptual water balance model. This model was run for 17 years (1971 to 1986 and 2012 to 2014). Simulation results show highly variable seasonal recharge-precipitation ratios, which underlines the nonlinearity between recharge and precipitation in dryland areas. Besides the amount of precipitation, this ratio is strongly influenced by the interannual distribution of rainfall events.

  18. The two-component model of memory development, and its potential implications for educational settings.

    PubMed

    Sander, Myriam C; Werkle-Bergner, Markus; Gerjets, Peter; Shing, Yee Lee; Lindenberger, Ulman

    2012-02-15

    We recently introduced a two-component model of the mechanisms underlying age differences in memory functioning across the lifespan. According to this model, memory performance is based on associative and strategic components. The associative component is relatively mature by middle childhood, whereas the strategic component shows a maturational lag and continues to develop until young adulthood. Focusing on work from our own lab, we review studies from the domains of episodic and working memory informed by this model, and discuss their potential implications for educational settings. The episodic memory studies uncover the latent potential of the associative component in childhood by documenting children's ability to greatly improve their memory performance following mnemonic instruction and training. The studies on working memory also point to an immature strategic component in children whose operation is enhanced under supportive conditions. Educational settings may aim at fostering the interplay between associative and strategic components. We explore possible routes towards this goal by linking our findings to recent trends in research on instructional design. PMID:22682913

  19. A New Set of Focal Mechanisms and a Geodynamic Model for the Eastern Tennessee Seismic Zone

    NASA Astrophysics Data System (ADS)

    Cooley, M. T.; Powell, C. A.; Choi, E.

    2014-12-01

    We present a new set of 26 focal mechanisms for the eastern Tennessee seismic zone (ETSZ) and discuss the implications for regional uplift. The mechanisms are for earthquakes with magnitudes 2.5 and greater occurring after 1999. The ETSZ is the second largest seismic zone in the central and eastern US and the seismicity is attributed to reactivation of a major Grenville-age shear zone. P- and S- wave velocity models, the distribution of hypocenters, focal mechanisms, and potential field anomalies suggest the presence of a basement shear zone. The new focal mechanism solutions supplement and are consistent with a previously calculated set of 26 focal mechanisms for the period 1983-1993. Focal mechanisms fall into two groups. The first group shows strike-slip motion on steeply dipping nodal planes striking N-S/E-W and NE-SW/NW-SE. Mechanisms in the second group display primarily dip-slip motion and are constrained geographically to the southern portion of the seismic zone. Events in the second group are among the shallowest in the dataset (8-12 km). We are developing a geodynamic model of the regional structure to examine the stress regime, which may be changing with depth. This model will be used to determine a possible relationship between the localized normal faulting and previously established recent regional uplift.

  20. IO strategies and data services for petascale data sets from a global cloud resolving model

    SciTech Connect

    Schuchardt, Karen L.; Palmer, Bruce J.; Daily, Jeff; Elsethagen, Todd O.; Koontz, Annette S.

    2007-12-01

    Global cloud resolving models at 4km resolutions or less create significant challenges in generation of simulation data, data storage, data management, and post-simulation analysis and visualization. To support efficient model output as well as data analysis, new models for IO and data organization must be evaluated. The model we are supporting, the Global Cloud Resolving Model being developed at Colorado State University, uses a geodesic grid. The non-monotonic nature of the grid's coordinate variables requires enhancements to existing data processing tools and community standards for describing and manipulating grids. The resolution, size and extent of the data suggest the need for parallel analysis tools and allow for the possibility of new techniques in data mining, filtering and comparison to observations. We describe the challenges posed by various aspects of data generation, management, and analysis, our work exploring IO strategies for the model, and a preliminary architecture, web portal, and tool enhancements which, when complete, will enable broad community access to the data sets in a way that is familiar to the community.

  1. Testing Departure from Additivity in Tukey’s Model using Shrinkage: Application to a Longitudinal Setting

    PubMed Central

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A.; Park, Sung Kyun; Kardia, Sharon L.R.; Allison, Matthew A.; Vokonas, Pantel S.; Chen, Jinbo; Diez-Roux, Ana V.

    2014-01-01

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey’s one degree of freedom (df) model for non-additivity treats the interaction term as a scaled product of row and column main effects. Due to the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency and the corresponding test could lead to increased power. Unfortunately, Tukey’s model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey’s and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies — the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis. PMID:25112650
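
    For reference, the two interaction structures discussed above, and the kind of compromise a shrinkage estimator makes between them, can be written schematically as follows (a sketch only; the paper's actual shrinkage weights are not reproduced here).

```latex
% Schematic of the saturated and Tukey interaction models and a shrinkage compromise.
\begin{align*}
\text{saturated:} \quad & y_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij} + \varepsilon_{ij},\\
\text{Tukey (1 df):} \quad & y_{ij} = \mu + \alpha_i + \beta_j + \theta\,\alpha_i \beta_j + \varepsilon_{ij},\\
\text{shrinkage:} \quad & \hat{\gamma}^{\,\mathrm{shr}}_{ij}
   = w\,\hat{\theta}\,\hat{\alpha}_i \hat{\beta}_j + (1 - w)\,\hat{\gamma}^{\,\mathrm{sat}}_{ij},
   \qquad 0 \le w \le 1 .
\end{align*}
```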

  2. Analysis of root growth from a phenotyping data set using a density-based model.

    PubMed

    Kalogiros, Dimitris I; Adu, Michael O; White, Philip J; Broadley, Martin R; Draye, Xavier; Ptashnyk, Mariya; Bengough, A Glyn; Dupuy, Lionel X

    2016-02-01

    Major research efforts are targeting the improved performance of root systems for more efficient use of water and nutrients by crops. However, characterizing root system architecture (RSA) is challenging, because roots are difficult objects to observe and analyse. A model-based analysis of RSA traits from phenotyping image data is presented. The model can successfully back-calculate growth parameters without the need to measure individual roots. The mathematical model uses partial differential equations to describe root system development. Methods based on kernel estimators were used to quantify root density distributions from experimental image data, and different optimization approaches to parameterize the model were tested. The model was tested on root images of a set of 89 Brassica rapa L. individuals of the same genotype grown for 14 d after sowing on blue filter paper. Optimized root growth parameters enabled the final (modelled) length of the main root axes to be matched within 1% of their mean values observed in experiments. Parameterized values for elongation rates were within ±4% of the values measured directly on images. Future work should investigate the time dependency of growth parameters using time-lapse image data. The approach is a potentially powerful quantitative technique for identifying crop genotypes with more efficient root systems, using (even incomplete) data from high-throughput phenotyping systems. PMID:26880747
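
    The density-based step described above, turning digitised root positions into a smooth density distribution, can be illustrated with a standard Gaussian kernel estimator. The coordinates below are synthetic and the bandwidth is scipy's default; the paper couples such densities to a PDE growth model, which is not shown here.

```python
# Sketch: quantify a root density distribution from root coordinates with a
# Gaussian kernel estimator (synthetic data, illustrative only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# synthetic (x, depth) positions of root segments on the imaging plane, in cm
roots_xz = np.vstack([rng.normal(0.0, 2.0, 500), -np.abs(rng.normal(5.0, 3.0, 500))])

density = gaussian_kde(roots_xz)                 # 2-D kernel density estimate
grid_x, grid_z = np.meshgrid(np.linspace(-6, 6, 50), np.linspace(-15, 0, 50))
root_density = density(np.vstack([grid_x.ravel(), grid_z.ravel()])).reshape(grid_x.shape)
```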

  3. a Bayesian Synthesis of Predictions from Different Models for Setting Water Quality Criteria

    NASA Astrophysics Data System (ADS)

    Arhonditsis, G. B.; Ecological Modelling Laboratory

    2011-12-01

    Skeptical views of the scientific value of modelling argue that there is no true model of an ecological system, but rather several adequate descriptions of different conceptual basis and structure. In this regard, rather than picking the single "best-fit" model to predict future system responses, we can use Bayesian model averaging to synthesize the forecasts from different models. Hence, by acknowledging that models from different areas of the complexity spectrum have different strengths and weaknesses, the Bayesian model averaging is an appealing approach to improve the predictive capacity and to overcome the ambiguity surrounding the model selection or the risk of basing ecological forecasts on a single model. Our study addresses this question using a complex ecological model, developed by Ramin et al. (2011; Environ Modell Softw 26, 337-353) to guide the water quality criteria setting process in the Hamilton Harbour (Ontario, Canada), along with a simpler plankton model that considers the interplay among phosphate, detritus, and generic phytoplankton and zooplankton state variables. This simple approach is more easily subjected to detailed sensitivity analysis and also has the advantage of fewer unconstrained parameters. Using Markov Chain Monte Carlo simulations, we calculate the relative mean standard error to assess the posterior support of the two models from the existing data. Predictions from the two models are then combined using the respective standard error estimates as weights in a weighted model average. The model averaging approach is used to examine the robustness of predictive statements made from our earlier work regarding the response of Hamilton Harbour to the different nutrient loading reduction strategies. The two eutrophication models are then used in conjunction with the SPAtially Referenced Regressions On Watershed attributes (SPARROW) watershed model. The Bayesian nature of our work is used: (i) to alleviate problems of spatiotemporal
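
    One plausible reading of the weighting scheme described above (inverse-error weights on each model's predictions) is sketched below with placeholder numbers; the actual weights in the study come from MCMC-based posterior error estimates for the two eutrophication models.

```python
# Sketch: combine predictions from a complex and a simple model with weights
# derived from their relative mean standard errors (all numbers are placeholders).
import numpy as np

pred_complex = np.array([12.1, 10.4, 9.8])   # e.g. predictions under 3 loading scenarios
pred_simple = np.array([13.0, 11.2, 10.1])
rmse_complex, rmse_simple = 1.4, 1.9         # relative mean standard errors (placeholders)

w = np.array([1.0 / rmse_complex, 1.0 / rmse_simple])
w /= w.sum()                                  # normalise the inverse-error weights
pred_averaged = w[0] * pred_complex + w[1] * pred_simple
print(pred_averaged)
```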

  4. Dynamical models for a spacecraft idealized as a set of multi-hinged rigid bodies

    NASA Technical Reports Server (NTRS)

    Larson, V.

    1973-01-01

    A brief description is presented of a canonical set of equations which governs the behavior of an n-body spacecraft. General results are given for the case in which the spacecraft is modeled in terms of n rigid bodies connected by dissipative elastic joints. The final equations are free from constraint torques and involve only r variables (r is the number of degrees of freedom of the system). An advantage which accompanies the elimination of the constraint torques is a decrease in the computer run time (especially when n is large).

  5. Power excursion analysis for BWR's at high burnup

    SciTech Connect

    Diamond, D.J.; Neymoith, L.; Kohut, P.

    1996-03-01

    A study has been undertaken to determine the fuel enthalpy during a rod drop accident and during two thermal-hydraulic transients. The objective was to understand the consequences to high burnup fuel and the sources of uncertainty in the calculations. The analysis was done with RAMONA-4B, a computer code that models the neutron kinetics throughout the core along with the thermal-hydraulics in the core, vessel, and steamline. The results showed that the maximum fuel enthalpy in high burnup fuel will be affected by core design, initial conditions, and modeling assumptions. The important parameters in each of these categories are discussed in the paper.

  6. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1994-01-01

    Envision is an interactive environment that provides researchers in the earth sciences convenient ways to manage, browse, and visualize large observed or model data sets. Its main features are support for the netCDF and HDF file formats, an easy to use X/Motif user interface, a client-server configuration, and portability to many UNIX workstations. The Envision package also provides new ways to view and change metadata in a set of data files. It permits a scientist to conveniently and efficiently manage large data sets consisting of many data files. It also provides links to popular visualization tools so that data can be quickly browsed. Envision is a public domain package, freely available to the scientific community. Envision software (binaries and source code) and documentation can be obtained from either of these servers: ftp://vista.atmos.uiuc.edu/pub/envision/ and ftp://csrp.tamu.edu/pub/envision/. Detailed descriptions of Envision capabilities and operations can be found in the User's Guide and Reference Manuals distributed with Envision software.

  7. A Comparison of Hourly Typhoon Rainfall Forecasting Models Based on Support Vector Machines and Random Forests with Different Predictor Sets

    NASA Astrophysics Data System (ADS)

    Lin, Kun-Hsiang; Tseng, Hung-Wei; Kuo, Chen-Min; Yang, Tao-Chang; Yu, Pao-Shan

    2016-04-01

    Typhoons with heavy rainfall and strong wind often cause severe floods and losses in Taiwan, which motivates the development of rainfall forecasting models as part of an early warning system. Thus, this study aims to develop rainfall forecasting models based on two machine learning methods, support vector machines (SVMs) and random forests (RFs), and to investigate the performance of the models with different predictor sets in order to identify the optimal predictor set for forecasting. Four predictor sets were used: (1) antecedent rainfalls; (2) antecedent rainfalls and typhoon characteristics; (3) antecedent rainfalls and meteorological factors; and (4) antecedent rainfalls, typhoon characteristics and meteorological factors, to construct models for 1- to 6-hour-ahead rainfall forecasting. An application to three rainfall stations in the Yilan River basin, northeastern Taiwan, was conducted. First, the performance of the SVMs-based forecasting model with predictor set #1 was analyzed. The results show that the accuracy of the models for 2- to 6-hour-ahead forecasting decreases rapidly compared with the accuracy of the model for 1-hour-ahead forecasting, which is acceptable. To improve model performance, each predictor set was further examined in the SVMs-based forecasting model. The results reveal that the SVMs-based model using predictor set #4 as input variables performs better than the models using the other sets, and a significant improvement in model performance is found especially for long lead-time forecasting. Lastly, the performance of the SVMs-based model using predictor set #4 as input variables was compared with the performance of the RFs-based model using predictor set #4 as input variables. It is found that the RFs-based model is superior to the SVMs-based model in hourly typhoon rainfall forecasting. Keywords: hourly typhoon rainfall forecasting, predictor selection, support vector machines, random forests
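
    The comparison described above can be set up with standard machine-learning tooling. The sketch below trains a support-vector regressor and a random forest on two of the four predictor sets using synthetic data; feature names, sizes and scores are placeholders, not the study's data or results.

```python
# Sketch: compare SVM- and RF-based t+1 rainfall forecasts for two predictor sets
# (synthetic data, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 500
antecedent_rain = rng.gamma(2.0, 2.0, size=(n, 3))     # predictor set 1: antecedent rainfalls
typhoon_meteo = rng.normal(size=(n, 4))                # extra typhoon/meteorological factors
y = antecedent_rain @ [0.5, 0.3, 0.1] + typhoon_meteo @ [1.0, 0.5, 0.2, 0.1] + rng.normal(0, 0.5, n)

predictor_sets = {
    "set 1 (antecedent rain)": antecedent_rain,
    "set 4 (rain + typhoon + meteo)": np.hstack([antecedent_rain, typhoon_meteo]),
}
for name, X in predictor_sets.items():
    for label, model in [("SVM", SVR()), ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
        score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{label:3s} {name}: mean R^2 = {score:.2f}")
```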

  8. Tree-Ring Proxies of Hydroclimate Variability in the Great Lakes Region during Cold Excursions Back to 15ka

    NASA Astrophysics Data System (ADS)

    Panyushkina, I. P.; Leavitt, S. W.

    2014-12-01

    A decade-long investigation of subfossil wood buried in glacio-fluvial, fluvial and lacustrine deposits from the U.S. Great Lakes region has resulted in a Great Lakes tree-ring network (GLTRN) comprising 47 sites dated from ca. 15 ka to 3 ka. The GLTRN provides high-resolution proxies for exploring local and regional responses to hydroclimate change at inter-annual scales during the transition from the Late Pleistocene to the Holocene. Classification of the radiometric ages of GLTRN wood with a relative cumulative-probability function delineates the intervals and importance of hydrological changes in time and space. The overwhelming majority of wood burial events correlate with generally cold climate excursions. Forest-stand deterioration and tree mortality events at the studied sites are demonstrated to result from flooding, via river aggradation (identifying occurrence of extreme hydrologic events), rise of the water table, or lake inundation. To better evaluate the spatial patterns of hydrological change back to 15 ka, we developed four floating δ13C chronologies from spruce tree rings. The length of these tree-ring proxy series, which capture high-frequency moisture variability in the Great Lakes area, ranges from 120 to 250 years. Our data indicate progressive wet intervals during the cold excursions, precisely dated with 14C tree-ring wiggles at 13.7 ka, 12.1 ka, and 11.3 ka, that fall in the Bølling-Allerød and Pre-Boreal Interstadials and the Younger Dryas Stadial. The inter-annual and decadal variability of the tree-ring moisture proxies is similar across the studied locations and time intervals. Such coherence of the respective proxies may result from either local ecological stability of the spruce communities or a regional response to a common source of moisture at the studied time intervals and locations. This study demonstrates the potential of GLTRN proxies for modeling hydroclimatic changes on the North American continent back to 15 ka.

  9. The carbon star adventure: modelling atmospheres of a set of C-rich AGB stars

    NASA Astrophysics Data System (ADS)

    Rau, Gioia; Paladini, Claudia; Hron, Josef; Aringer, Bernard; Erikssonn, Kjell; Groenewegen, Martin

    2015-08-01

    We study the atmospheres of a set of carbon-rich AGB stars to improve our understanding of the dynamic processes happening in them. For the first time we compare in a systematic way spectrometric, photometric and mid-infrared (VLTI/MIDI) interferometric measurements with different types of model atmospheres: (1) hydrostatic models + MOD-dusty models (Groenewegen, 2012) added a posteriori; (2) self-consistent dynamic model atmospheres (Eriksson et al. 2014). These allow us to interpret in a coherent way the dynamic behavior of gas and dust. In addition, the geometric model fitting tool for interferometric data GEM-FIND is applied to carry out a first interpretation of the structural environment of the stars. The results underline that the joint use of different kinds of observations, such as photometry, spectroscopy and interferometry, is essential for understanding and modeling the atmospheres of pulsating C-rich AGB stars. For our first target, the carbon-rich Mira star RU Vir, the dynamic model atmospheres fit the ISO/SWS spectra well in the wavelength range λ = [2.9, 13.0] μm. However, the object turned out to be “peculiar”: we notice a discrepancy in the visible part of the SED, and in the visibilities. Possible causes are intra/inter-cycle variations in the dynamic model atmospheres, and the possible presence of a companion star and/or disk or clumps in the atmosphere of RU Vir (Rau et al. subm.). Results on further targets will also be presented. The increased sample of C-rich stars of this work provides crucial constraints for the atmospheric structure and the formation of SiC. Moreover, the second-generation VLTI instrument MATISSE will be a perfect tool to detect and study asymmetries, as it will allow interferometric imaging in the L, M, and N bands.

  10. Defining the optimal animal model for translational research using gene set enrichment analysis.

    PubMed

    Weidner, Christopher; Steinfath, Matthias; Opitz, Elisa; Oelgeschläger, Michael; Schönfelder, Gilbert

    2016-01-01

    The mouse is the main model organism used to study the functions of human genes because most biological processes in the mouse are highly conserved in humans. Recent reports that compared identical transcriptomic datasets of human inflammatory diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. To reduce susceptibility to biased interpretation, all genes of interest for the biological question under investigation should be considered. Thus, standardized approaches for systematic data analysis are needed. We analyzed the same datasets using gene set enrichment analysis focusing on pathways assigned to inflammatory processes in either humans or mice. The analyses revealed a moderate overlap between all human and mouse datasets, with average positive and negative predictive values of 48% and 57% for significant correlations. Subgroups of the septic mouse models (i.e., Staphylococcus aureus injection) correlated very well with most human studies. These findings support the applicability of targeted strategies to identify the optimal animal model and protocol to improve the success of translational research. PMID:27311961
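
    A much-simplified way to picture the pathway-level comparison is an over-representation test of an inflammation gene set against responder genes from a human study and a mouse model. This is only a hedged sketch: the paper uses gene set enrichment analysis proper, and the gene lists and background size below are hypothetical placeholders.

```python
# Over-representation sketch (a simplification of gene set enrichment analysis):
# is a hypothetical inflammation pathway enriched among responding genes in a
# human disease dataset and in a mouse model dataset? Gene lists are made up.
from scipy.stats import fisher_exact

pathway = {"TNF", "IL6", "IL1B", "CXCL8", "NFKB1", "STAT3"}
human_responders = {"TNF", "IL6", "STAT3", "ALB", "APOE"}
mouse_responders = {"TNF", "IL6", "IL1B", "TP53", "MYC"}
background = 20000                      # approximate number of protein-coding genes

def enrichment(responders, gene_set, background_size):
    in_both = len(responders & gene_set)
    table = [[in_both, len(gene_set) - in_both],
             [len(responders) - in_both,
              background_size - len(gene_set) - len(responders) + in_both]]
    odds, p = fisher_exact(table, alternative="greater")
    return odds, p

for name, responders in [("human", human_responders), ("mouse", mouse_responders)]:
    odds, p = enrichment(responders, pathway, background)
    print(f"{name}: odds ratio = {odds:.1f}, p = {p:.3g}")
```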

  11. Modeling the climate impact of Southern Hemisphere ozone depletion: The importance of the ozone data set

    NASA Astrophysics Data System (ADS)

    Young, P. J.; Davis, S. M.; Hassler, B.; Solomon, S.; Rosenlof, K. H.

    2014-12-01

    The ozone hole is an important driver of recent Southern Hemisphere (SH) climate change, and capturing these changes is a goal of climate modeling. Most climate models are driven by off-line ozone data sets. Previous studies have shown that there is a substantial range in estimates of SH ozone depletion, but the implications of this range have not been examined systematically. We use a climate model to evaluate the difference between using the ozone forcing (Stratospheric Processes and their Role in Climate (SPARC)) used by many Intergovernmental Panel on Climate Change Fifth Assessment Report (Coupled Model Intercomparison Project) models and one at the upper end of the observed depletion estimates (Binary Database of Profiles (BDBP)). In the stratosphere, we find that austral spring/summer polar cap cooling, geopotential height decreases, and zonal wind increases in the BDBP simulations are all doubled compared to the SPARC simulations, while tropospheric responses are 20-100% larger. These results are important for studies attempting to diagnose the climate fingerprints of ozone depletion.

  12. Data Model for Astronomical DataSet Characterisation Version 1.13

    NASA Astrophysics Data System (ADS)

    Louys, Mireille; Richards, Anita; Bonnarel, François; Micol, Alberto; Chilingarian, Igor; McDowell, Jonathan; IVOA Data Model Working Group

    2008-03-01

    This document defines the high level metadata necessary to describe the physical parameter space of observed or simulated astronomical data sets, such as 2D-images, data cubes, X-ray event lists, IFU data, etc. The Characterisation data model is an abstraction which can be used to derive a structured description of any relevant data and thus to facilitate its discovery and scientific interpretation. The model aims at facilitating the manipulation of heterogeneous data in any VO framework or portal. A VO Characterisation instance can include descriptions of the data axes, the range of coordinates covered by the data, and details of the data sampling and resolution on each axis. These descriptions should be in terms of physical variables, independent of instrumental signatures as far as possible. Implementations of this model have been described in the IVOA Note available at: http://www.ivoa.net/Documents/Notes/ImplemtationCharacDM/ImplementationCharacterisation-20070813.pdf Utypes derived from this version of the UML model are listed and commented in the following IVOA Note: http://www.ivoa.net/Documents/Notes/UTypeListCharacterisationDM/UtypeListCharacterisationDM-20070625.pdf An XML schema has been built from the UML model and is available at: http://www.ivoa.net/xml/Characterisation/Characterisation-v1.11.xsd
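
    To make the abstract notions of axes, coverage, sampling and resolution concrete, the sketch below models them as a plain data structure. This is a loose illustration only; it does not reproduce the IVOA UML model, XML schema or utypes linked above, and the axis names and values are invented.

```python
# Loose illustrative sketch of the concepts in the Characterisation data model
# (axis, coverage, sampling, resolution). Not the IVOA schema; values are made up.
from dataclasses import dataclass

@dataclass
class CharacterisationAxis:
    name: str            # e.g. "spatial", "spectral", "time"
    unit: str
    coverage: tuple      # (lower bound, upper bound) in physical units
    sample_size: float   # typical sampling step along the axis
    resolution: float    # physical resolution element

image_axes = [
    CharacterisationAxis("spectral", "angstrom", (4000.0, 7000.0), 1.2, 2.5),
    CharacterisationAxis("time", "mjd", (55000.0, 55001.0), 0.01, 0.02),
]
for ax in image_axes:
    print(f"{ax.name}: coverage {ax.coverage} {ax.unit}, resolution {ax.resolution}")
```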

  13. European air quality modelled by CAMx including the volatility basis set scheme

    NASA Astrophysics Data System (ADS)

    Ciarelli, G.; Aksoyoglu, S.; Crippa, M.; Jimenez, J. L.; Nemitz, E.; Sellegri, K.; Äijälä, M.; Carbone, S.; Mohr, C.; O'Dowd, C.; Poulain, L.; Baltensperger, U.; Prévôt, A. S. H.

    2015-12-01

    Four periods of EMEP (European Monitoring and Evaluation Programme) intensive measurement campaigns (June 2006, January 2007, September-October 2008 and February-March 2009) were modelled using the regional air quality model CAMx with the VBS (Volatility Basis Set) approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the periods of February-March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosols (OA). Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database Airbase. Sulfur dioxide (SO2) and ozone (O3) were found to be overestimated for all four periods, with O3 having the largest mean bias during the June 2006 and January-February 2007 periods (8.93 and 12.30 ppb mean biases, respectively). In contrast, nitrogen dioxide (NO2) and carbon monoxide (CO) were found to be underestimated for all four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 very well for all four periods with average biases ranging from -2.13 to 1.04 μg m-3. Comparisons with AMS (Aerosol Mass Spectrometer) measurements at different sites in Europe during February-March 2009 showed that in general the model over-predicts the inorganic aerosol fraction and under-predicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of volatility basis set scheme (VBS) on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurements data were performed. For February-March 2009 the chamber case reduced the total OA concentrations by about 43 % on average. On the other hand, a test based on ambient measurement data increased OA concentrations by about 47 % for the same period bringing model

  14. A moist Boussinesq shallow water equations set for testing atmospheric models

    SciTech Connect

    Zerroukat, M. Allen, T.

    2015-06-01

    The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models with the test suite of Williamson et al. being used extensively for validating new schemes and assessing their accuracy. However the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from a three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allow the density to vary slightly with temperature. This results in extra (or buoyancy) terms for the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean-flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics–physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies and the results suggest quite realistic interaction between the dynamics and physics and in particular the generation of cloud and rain. - Highlights: • Novel shallow water equations which retain moist processes are derived from the three-dimensional hydrostatic Boussinesq equations. • The new shallow water set can be seen as a more general one, where the classical equations are a special case of these equations. • This moist shallow water system naturally allows a feedback mechanism from the moist physics increments to the momentum via buoyancy. • Like full models, temperature and moisture are advected as tracers that interact
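
    For reference, the classical single-layer shallow water equations that the paper generalises can be written as below; the moist Boussinesq set described in the abstract adds buoyancy terms to the momentum equation and carries temperature and moisture as tracers of the generic form shown in the last line. The exact published form of those extra terms is not reproduced here.

```latex
% Classical rotating shallow water equations (the reference point), plus a
% generic tracer equation of the kind used for temperature and moisture.
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    + f\,\hat{\mathbf{k}}\times\mathbf{u} &= -g\,\nabla h, \\
  \frac{\partial h}{\partial t} + \nabla\cdot(h\,\mathbf{u}) &= 0, \\
  \frac{\partial q}{\partial t} + \mathbf{u}\cdot\nabla q &= S_q
  \quad \text{(generic tracer with source/sink } S_q\text{)}.
\end{align}
```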

  15. Modeling and predicting optimal treatment scheduling between the antiangiogenic drug sunitinib and irinotecan in preclinical settings.

    PubMed

    Wilson, S; Tod, M; Ouerdani, A; Emde, A; Yarden, Y; Adda Berkane, A; Kassour, S; Wei, M X; Freyer, G; You, B; Grenier, E; Ribba, B

    2015-12-01

    We present a system of nonlinear ordinary differential equations used to quantify the complex dynamics of the interactions between tumor growth, vasculature generation, and antiangiogenic treatment. The primary dataset consists of longitudinal tumor size measurements (1,371 total observations) in 105 colorectal tumor-bearing mice. Mice received single or combination administration of sunitinib, an antiangiogenic agent, and/or irinotecan, a cytotoxic agent. Depending on the dataset, parameter estimation was performed either using a mixed-effect approach or by nonlinear least squares. Through a log-likelihood ratio test, we conclude that there is a potential synergistic interaction between sunitinib and irinotecan when administered in combination in preclinical settings. Model simulations were then compared to data from a follow-up preclinical experiment. We conclude that the model has predictive value in identifying the therapeutic window in which the timing between the administrations of these two drugs is most effective. PMID:26904386
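
    The sketch below shows the general flavour of such a tumor/vasculature ODE system with a cytotoxic and an antiangiogenic input, integrated with scipy. The equations, dosing windows and parameter values are hypothetical illustrations and are not the published model.

```python
# Hypothetical tumor/vasculature ODE sketch with a cytotoxic (irinotecan-like)
# and an antiangiogenic (sunitinib-like) input; structure and parameters are
# illustrative only, not the published model.
from scipy.integrate import solve_ivp

def doses(t):
    """On/off dosing windows in days (assumed schedule)."""
    cyto = 1.0 if 10.0 <= t <= 12.0 else 0.0      # cytotoxic pulse
    anti = 1.0 if 5.0 <= t <= 25.0 else 0.0       # antiangiogenic course
    return cyto, anti

def rhs(t, y, k_grow, k_kill, k_angio, k_anti):
    tumor, vasc = y
    cyto, anti = doses(t)
    d_tumor = k_grow * tumor * vasc - k_kill * cyto * tumor   # vasculature-limited growth
    d_vasc = k_angio * tumor - k_anti * anti * vasc           # angiogenesis vs. inhibition
    return [d_tumor, d_vasc]

sol = solve_ivp(rhs, (0.0, 40.0), [50.0, 1.0],
                args=(0.02, 0.5, 0.002, 0.3), max_step=0.1)
print(f"tumor burden at day 40: {sol.y[0, -1]:.1f} (arbitrary units)")
```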

  16. Generating extreme weather event sets from very large ensembles of regional climate models

    NASA Astrophysics Data System (ADS)

    Massey, Neil; Guillod, Benoit; Otto, Friederike; Allen, Myles; Jones, Richard; Hall, Jim

    2015-04-01

    Generating extreme weather event sets from very large ensembles of regional climate models Neil Massey, Benoit P. Guillod, Friederike E. L. Otto, Myles R. Allen, Richard Jones, Jim W. Hall Environmental Change Institute, University of Oxford, Oxford, UK Extreme events can have large impacts on societies and are therefore being increasingly studied. In particular, climate change is expected to impact the frequency and intensity of these events. However, a major limitation when investigating extreme weather events is that, by definition, only few events are present in observations. A way to overcome this issue is to use large ensembles of model simulations. Using the volunteer distributed computing (VDC) infrastructure of weather@home [1], we run a very large number (10'000s) of RCM simulations over the European domain at a resolution of 25km, with an improved land-surface scheme, nested within a free-running GCM. Using VDC allows many thousands of climate model runs to be computed. Using observations for the GCM boundary forcings we can run historical "hindcast" simulations over the past 100 to 150 years. This allows us, due to the chaotic variability of the atmosphere, to ascertain how likely an extreme event was, given the boundary forcings, and to derive synthetic event sets. The events in these sets did not actually occur in the observed record but could have occurred given the boundary forcings, with an associated probability. The event sets contain time-series of fields of meteorological variables that allow impact modellers to assess the loss the event would incur. Projections of events into the future are achieved by modelling projections of the sea-surface temperature (SST) and sea-ice boundary forcings, by combining the variability of the SST in the observed record with a range of warming signals derived from the varying responses of SSTs in the CMIP5 ensemble to elevated greenhouse gas (GHG) emissions in three RCP scenarios. Simulating the future with a

  17. A comparison of foetal SAR in three sets of pregnant female models.

    PubMed

    Dimbylow, Peter J; Nagaoka, Tomoaki; Xu, X George

    2009-05-01

    This paper compares the foetal SAR in the HPA hybrid mathematical phantoms with the 26-week foetal model developed at the National Institute of Information and Communications Technology, Tokyo, and the set of 13-, 26- and 38-week boundary representation models produced at Rensselaer Polytechnic Institute. FDTD calculations are performed at a resolution of 2 mm for a plane wave with a vertically aligned electric field incident upon the body from the front, back and two sides from 20 MHz to 3 GHz under isolated conditions. The external electric field values required to produce the ICNIRP public exposure localized restriction of 2 W kg(-1) when averaged over 10 g of the foetus are compared with the ICNIRP reference levels. PMID:19369706

  18. Developing Staffing Models to Support Population Health Management And Quality Outcomes in Ambulatory Care Settings.

    PubMed

    Haas, Sheila A; Vlasses, Frances; Havey, Julia

    2016-01-01

    There are multiple demands and challenges inherent in establishing staffing models in ambulatory health care settings today. If health care administrators establish a supportive physical and interpersonal health care environment, and develop high-performing interprofessional teams and staffing models and electronic documentation systems that track performance, patients will have more opportunities to receive safe, high-quality evidence-based care that encourages patient participation in decision making, as well as provision of their care. The health care organization must be aligned and responsive to the community within which it resides, fully invested in population health management, and continuously scanning the environment for competitive, regulatory, and external environmental risks. All of these challenges require highly competent providers willing to change attitudes and culture such as movement toward collaborative practice among the interprofessional team including the patient. PMID:27439249

  19. The Impacts of Different Meteorology Data Sets on Nitrogen Fate and Transport in the SWAT Watershed Model

    EPA Science Inventory

    In this study, we investigated how different meteorology data sets impacts nitrogen fate and transport responses in the Soil and Water Assessment Tool (SWAT) model. We used two meteorology data sets: National Climatic Data Center (observed) and Mesoscale Model 5/Weather Research ...

  20. Implementing the Career Domain of the American School Counselor Association's National Model into the Virtual Setting

    ERIC Educational Resources Information Center

    Terry, Laura Robin

    2012-01-01

    The implementation of the American School Counselor Association (ASCA) national model has not been studied in nontraditional settings such as in virtual schools. The purpose of this quantitative research study was to examine the implementation of the career domain of the ASCA national model into the virtual high school setting. Social cognitive…

  1. Development of a large-sample watershed-scale hydrometeorological data set for the contiguous USA: data set characteristics and assessment of regional variability in hydrologic model performance

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Clark, M. P.; Sampson, K.; Wood, A.; Hay, L. E.; Bock, A.; Viger, R. J.; Blodgett, D.; Brekke, L.; Arnold, J. R.; Hopson, T.; Duan, Q.

    2015-01-01

    We present a community data set of daily forcing and hydrologic response data for 671 small- to medium-sized basins across the contiguous United States (median basin size of 336 km2) that spans a very wide range of hydroclimatic conditions. Area-averaged forcing data for the period 1980-2010 was generated for three basin spatial configurations - basin mean, hydrologic response units (HRUs) and elevation bands - by mapping daily, gridded meteorological data sets to the subbasin (Daymet) and basin polygons (Daymet, Maurer and NLDAS). Daily streamflow data was compiled from the United States Geological Survey National Water Information System. The focus of this paper is to (1) present the data set for community use and (2) provide a model performance benchmark using the coupled Snow-17 snow model and the Sacramento Soil Moisture Accounting Model, calibrated using the shuffled complex evolution global optimization routine. After optimization minimizing daily root mean squared error, 90% of the basins have Nash-Sutcliffe efficiency scores ≥0.55 for the calibration period and 34% ≥ 0.8. This benchmark provides a reference level of hydrologic model performance for a commonly used model and calibration system, and highlights some regional variations in model performance. For example, basins with a more pronounced seasonal cycle generally have a negative low flow bias, while basins with a smaller seasonal cycle have a positive low flow bias. Finally, we find that data points with extreme error (defined as individual days with a high fraction of total error) are more common in arid basins with limited snow and, for a given aridity, fewer extreme error days are present as the basin snow water equivalent increases.
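
    The skill score quoted above is the Nash-Sutcliffe efficiency; the short sketch below spells out its definition, NSE = 1 - Σ(sim - obs)² / Σ(obs - mean(obs))², on placeholder streamflow values (a value of 1 is a perfect fit and 0 means no better than the observed mean).

```python
# Nash-Sutcliffe efficiency on placeholder daily streamflow values.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.2, 3.4, 2.8, 5.1, 0.9])   # observed flow (placeholder)
sim = np.array([1.0, 3.0, 3.1, 4.6, 1.1])   # simulated flow (placeholder)
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```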

  2. An intercomparison of model ozone deficits in the upper stratosphere and mesosphere from two data sets

    NASA Astrophysics Data System (ADS)

    Siskind, David E.; Connor, Brian J.; Eckman, Richard S.; Remsberg, Ellis E.; Tsou, J. J.; Parrish, Alan

    1995-06-01

    We have compared a diurnal photochemical model of ozone with nighttime data from the limb infrared monitor of the stratosphere (LIMS) and ground-based microwave observations. Consistent with previous studies, the model underpredicts the observations by about 10-30%. This agreement is strong confirmation that the model ozone deficit is not simply an artifact of observational error since it is unlikely to occur for two completely different ozone data sets. We have also examined the seasonal, altitudinal, and diurnal morphology of the ozone deficit. Both comparisons show a deficit that peaks in the upper stratosphere (2-3 mbar) and goes through a minimum in the lower mesosphere from 1.0 to 0.4 mbar. At lower pressures (<0.2 mbar) the deficit appears to increase again. The seasonal variation of the deficit is less consistent. The deficit with respect to the LIMS data is least in winter while with respect to the microwave data, the deficit shows little seasonal variation. Finally, the night-to-day ratio in our model is in generally good agreement with that seen in the microwave experiment. Increasing the rate coefficient for the reaction O + O2 + M → O3 + M improves the fit, while a very large (50%) decrease in the HOx catalytic cycle is not consistent with our observations. Increasing the atomic oxygen recombination rate also improves the overall agreement with both data sets; however, a residual discrepancy still remains. There appears to be no single chemical parameter which, when modified, can simultaneously resolve both the stratospheric and mesospheric ozone deficits.

  3. Modelling Gravimetric Fluctuations due to Hydrological Processes in Active Volcanic Settings

    NASA Astrophysics Data System (ADS)

    Hemmings, B.; Gottsmann, J.; Whitaker, F.

    2014-12-01

    Both static and dynamic gravimetric surveys are widely used to monitor magmatic processes in active volcanic settings. However, attributing residual gravimetric signals solely to magma movement can result in misdiagnosis of a volcano's pre-eruptive state and incorrect assessment of hazard. The relative contribution of magmatic and aqueous fluids to integrated gravimetric and geodetic data has become an important topic for debate, particularly in restless caldera systems. Groundwater migration driven by volcanically-induced pressure changes, and groundwater mass fluctuations associated with seasonal and inter-annual variations in recharge may also contribute to measured gravity changes. Here we use numerical models to explore potential gravimetric signals associated with fundamental hydrological processes, focusing on variations in recharge and hydrogeological properties. TOUGH2 simulations demonstrate the significance of groundwater storage within a thick unsaturated zone (up to 100 m). Changes are dominantly in response to inter-annual recharge variations and can produce measurable absolute gravity variations of several 10s of μgal. Vadose zone storage and the rate of response to recharge changes depend on the hydrological properties. Porosity, relative and absolute permeability and capillary pressure conditions all affect the amplitude and frequency of modelled gravity time series. Spatial variations in hydrologic properties and importantly, hydrological recharge, can significantly affect the phase and amplitude of recorded gravity signals. Our models demonstrate the potential for an appreciable hydrological component within gravimetric measurements on volcanic islands. Characterisation of hydrological processes within a survey area may be necessary to robustly interpret gravity signals in settings with significant recharge fluctuations, a thick vadose zone and spatially variable hydrological properties. Such modelling enables further exploration of feedbacks
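
    A rough sense of the magnitudes involved can be had from the infinite Bouguer slab approximation, independent of the TOUGH2 simulations used in the study; the porosity and water-level change in the sketch below are assumed values chosen only to illustrate that several tens of microgal are plausible.

```python
# Back-of-envelope check (not the paper's TOUGH2 workflow): gravity change from
# a change in stored groundwater via the infinite Bouguer slab formula,
# delta_g = 2*pi*G*rho*phi*delta_h. Porosity and water-level change are assumed.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho_water = 1000.0     # kg m^-3
porosity = 0.2         # assumed effective porosity of the vadose zone
delta_h = 5.0          # assumed change in stored water column, m

delta_g = 2 * math.pi * G * rho_water * porosity * delta_h   # m s^-2
print(f"delta_g ~ {delta_g / 1e-8:.0f} microGal")            # 1 microGal = 1e-8 m s^-2
```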

  4. Comparisons of Transport and Dispersion Model Predictions of the Mock Urban Setting Test Field Experiment

    NASA Astrophysics Data System (ADS)

    Warner, Steve; Platt, Nathan; Heagy, James F.; Jordan, Jason E.; Bieberbach, George

    2006-10-01

    The potential effects of a terrorist attack involving the atmospheric release of chemical, biological, radiological, nuclear, or other hazardous materials continue to be of concern to the United States. The Defense Threat Reduction Agency has developed a Hazard Prediction Assessment Capability (HPAC) that includes initial features to address hazardous releases within an urban environment. Improved characterization and understanding of urban transport and dispersion are required to allow for more robust modeling. In 2001, a scaled urban setting was created in the desert of Utah using shipping containers, and tracer gases were released. This atmospheric tracer and meteorological study is known as the Mock Urban Setting Test (MUST). This paper describes the creation of sets of HPAC predictions and comparisons with the MUST field experiment. Strong consistency between the conclusions of this study and a previously reported HPAC evaluation that relied on urban tracer observations within the downtown area of Salt Lake City was found. For example, in both cases, improved predictions were associated with the inclusion of a simple empirically based urban dispersion model within HPAC, whereas improvements associated with the inclusion of a more computationally intensive wind field module were not found. The use of meteorological observations closest to the array and well above the obstacle array—the sonic anemometer measurements 16 m above ground level—resulted in predictions with the best fit to the observed tracer concentrations. The authors speculate that including meteorological observations or vertical wind profiles above or upwind of an urban region might be a sufficient input to create reasonable HPAC hazard-area predictions.

  5. Setting the agenda: Different strategies of a Mass Media in a model of cultural dissemination

    NASA Astrophysics Data System (ADS)

    Pinto, Sebastián; Balenzuela, Pablo; Dorso, Claudio O.

    2016-09-01

    Day by day, people exchange opinions about the news with relatives, friends, and coworkers. In most cases, they get informed about a given issue by reading newspapers, listening to the radio, or watching TV, i.e., through a Mass Media (MM). However, the Media can boost the perceived importance of a given news item by assigning it newspaper pages or TV air time. In this sense, we say that the Media has the power to "set the agenda", i.e., it decides which news items are important and which are not. On the other hand, the Media can learn about people's concerns through, for instance, websites or blogs where they express their opinions, and can then use this information to become more appealing to an increasing number of people. In this work, we study different scenarios in an agent-based model of cultural dissemination, in which a given Mass Media has a specific purpose: to set a particular topic of discussion and impose its point of view on as many social agents as it can. We model this by giving the Media a fixed feature, representing its point of view on the topic of discussion, while it tries to attract new consumers by taking advantage of feedback mechanisms, represented by adaptive features. We explore different strategies that the Media can adopt to increase its affinity with potential consumers and thus the probability of being successful in imposing this particular topic.
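
    A minimal Axelrod-style sketch with an external media agent, in the spirit of the model described above, is shown below. The interaction rule, the topic-adoption probability and all parameter values are illustrative assumptions and do not reproduce the paper's feedback strategies.

```python
# Minimal Axelrod-style sketch with a mass-media agent (illustrative only).
# The media's value on feature 0 (the "topic") is fixed; agents sometimes
# interact with the media instead of a neighbour and copy one differing
# feature with probability equal to their cultural overlap.
import random

F, Q, N, B, STEPS = 5, 10, 100, 0.2, 200_000   # features, traits, agents, media prob., steps
agents = [[random.randrange(Q) for _ in range(F)] for _ in range(N)]
media = [0] + [random.randrange(Q) for _ in range(F - 1)]   # feature 0 is fixed

def interact(agent, source):
    shared = [f for f in range(F) if agent[f] == source[f]]
    differ = [f for f in range(F) if agent[f] != source[f]]
    if differ and random.random() < len(shared) / F:
        f = random.choice(differ)
        agent[f] = source[f]

for _ in range(STEPS):
    i = random.randrange(N)
    if random.random() < B:                      # talk to the media
        interact(agents[i], media)
    else:                                        # talk to a ring neighbour
        j = (i + random.choice((-1, 1))) % N
        interact(agents[i], agents[j])

adopters = sum(a[0] == media[0] for a in agents)
print(f"agents sharing the media's topic value: {adopters}/{N}")
```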

  6. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity

    PubMed Central

    Malinin, Laura H.

    2016-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person’s interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  7. Building predictive models for mechanism-of-action classification from phenotypic assay data sets.

    PubMed

    Berg, Ellen L; Yang, Jian; Polokoff, Mark A

    2013-12-01

    Compound mechanism-of-action information can be critical for drug development decisions but is often challenging for phenotypic drug discovery programs. One concern is that compounds selected by phenotypic screening will have a previously known but undesirable target mechanism. Here we describe a useful method for assigning mechanism class to compounds and bioactive agents using an 84-feature signature from a panel of primary human cell systems (BioMAP systems). For this approach, a reference data set of well-characterized compounds was used to develop predictive models for 28 mechanism classes using support vector machines. These mechanism classes encompass safety and efficacy-related mechanisms, include both target-specific and pathway-based classes, and cover the most common mechanisms identified in phenotypic screens, such as inhibitors of mitochondrial and microtubule function, histone deacetylase, and cAMP elevators. Here we describe the performance and the application of these predictive models in a decision scheme for triaging phenotypic screening hits using a previously published data set of 309 environmental chemicals tested as part of the Environmental Protection Agency's ToxCast program. By providing quantified membership in specific mechanism classes, this approach is suitable for identification of off-target toxicity mechanisms as well as enabling target deconvolution of phenotypic drug discovery hits. PMID:24088371
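
    A hedged sketch of the classification step is shown below: a multi-class support vector machine trained on 84-feature reference profiles that returns quantified class-membership probabilities for new hits. The random features, labels and hyperparameters are placeholders, not the BioMAP reference data or pipeline.

```python
# Illustrative sketch (not the BioMAP pipeline): multi-class SVM trained on
# 84-feature profiles of reference compounds, then used to report quantified
# mechanism-class membership probabilities for new phenotypic hits.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_reference, n_features, n_classes = 300, 84, 28
X_ref = rng.normal(size=(n_reference, n_features))       # placeholder profiles
y_ref = rng.integers(0, n_classes, size=n_reference)     # placeholder class labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_ref, y_ref)

X_hits = rng.normal(size=(5, n_features))                 # new phenotypic hits
proba = model.predict_proba(X_hits)
for i, p in enumerate(proba):
    top = np.argsort(p)[-3:][::-1]                        # three most likely classes
    print(f"hit {i}: top classes {top.tolist()} with probs {p[top].round(2).tolist()}")
```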

  8. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity.

    PubMed

    Malinin, Laura H

    2015-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person's interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  9. Paleomagnetic chronology of Arctic Ocean sediment cores; reversals and excursions the conundrum

    NASA Astrophysics Data System (ADS)

    Løvlie, R.; Jakobsson, M.; Backman, J.

    2003-04-01

    Chronologies of Arctic Ocean sediment cores are mainly based on interpretation of paleomagnetic inclination records. The first paleomagnetic chronology assigned zones with negative inclinations to polarity reversals (Steuerwald et al, 1968) because geomagnetic excursions at that time were a novel observation and had only been reported from lavas. Arctic Ocean sedimentation rates were thus established to be in the mm/ka-range. A general recognition of excursions as real features of the geomagnetic field emerged more than three decades later, and presently there is still no consensus regarding the number (or name), duration and age of globally synchronous excursions within the Brunhes Chron. Assigning inclination records to polarity reversals or excursions is an ambiguous exercise without independent age information. Based on independently derived time frames, 11 negative inclination intervals in core 96/12-1pc from the Lomonosov Ridge were assigned to reported excursions resulting in cm/ka deposition rates (Jakobsson et al, 2000). However, the detail of the "excursional stratigraphy" in this core is problematic. The absence of two (three?) excursions in the upper 2 m of core (base MIS5) was tentatively suggested to reflect pDRM-erasing in this sandy part of the core, while the short extent of the inferred pre-Brunhes Matuyama Chron remains unaccounted for. We have recently retrieved a relative paleointensity record from a parallel core (96/B6-1pc) for alternative dating control and assessment of stratigraphic completeness and uniformity of deposition. This study indicated the presence of a hiatus of the order of 200 ka (Løvlie et al 2002). We present a paleointensity record from core 96/12-1pc and will address identification of depositional hiatuses and their significance in understanding the paleomagnetic record in Arctic Ocean cores. Steuerwald B.A., Clark D.L. and Andrew J.A., 1968. Magnetic stratigraphy and faunal patterns in Arctic Ocean sediments. Earth and

  10. An analysis of a joint shear model for jointed media with orthogonal joint sets; Yucca Mountain Site Characterization Project

    SciTech Connect

    Koteras, J.R.

    1991-10-01

    This report describes a joint shear model used in conjunction with a computational model for jointed media with orthogonal joint sets. The joint shear model allows nonlinear behavior for both joint sets. Because nonlinear behavior is allowed for both joint sets, a great many cases must be considered to fully describe the joint shear behavior of the jointed medium. An extensive set of equations is required to describe the joint shear stress and slip displacements that can occur for all the various cases. This report examines possible methods for simplifying this set of equations so that the model can be implemented efficiently from a computational standpoint. The shear model must be examined carefully to obtain a computationally efficient implementation that does not lead to numerical problems. The application to fractures in rock is discussed. 5 refs., 4 figs.

  11. Interactive Visual Analytics Approach for Exploration of Geochemical Model Simulations with Different Parameter Sets

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2015-04-01

    Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach by two examples for gas storage applications. For the first example our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered at local thermodynamical equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction

  12. Experimentally Verified Parameter Sets for Modelling Heterogeneous Neocortical Pyramidal-Cell Populations

    PubMed Central

    Harrison, Paul M.; Badel, Laurent; Wall, Mark J.; Richardson, Magnus J. E.

    2015-01-01

    Models of neocortical networks are increasingly including the diversity of excitatory and inhibitory neuronal classes. Significant variability in cellular properties is also seen within a nominal neuronal class and this heterogeneity can be expected to influence the population response and information processing in networks. Recent studies have examined the population and network effects of variability in a particular neuronal parameter with some plausibly chosen distribution. However, the empirical variability and covariance seen across multiple parameters are rarely included, partly due to the lack of data on parameter correlations in forms convenient for model construction. To address this we quantify the heterogeneity within and between the neocortical pyramidal-cell classes in layers 2/3, 4, and the slender-tufted and thick-tufted pyramidal cells of layer 5 using a combination of intracellular recordings, single-neuron modelling and statistical analyses. From the response to both square-pulse and naturalistic fluctuating stimuli, we examined the class-dependent variance and covariance of electrophysiological parameters and identify the role of the h current in generating parameter correlations. A byproduct of the dynamic I-V method we employed is the straightforward extraction of reduced neuron models from experiment. Empirically these models took the refractory exponential integrate-and-fire form and provide an accurate fit to the perisomatic voltage responses of the diverse pyramidal-cell populations when the class-dependent statistics of the model parameters were respected. By quantifying the parameter statistics we obtained an algorithm which generates populations of model neurons, for each of the four pyramidal-cell classes, that adhere to experimentally observed marginal distributions and parameter correlations. As well as providing this tool, which we hope will be of use for exploring the effects of heterogeneity in neocortical networks, we also provide
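
    The reduced neuron model mentioned above, the refractory exponential integrate-and-fire, can be simulated in a few lines; the sketch below uses generic textbook parameter values rather than the class-dependent statistics extracted in the paper.

```python
# Minimal refractory exponential integrate-and-fire simulation. Parameter
# values are generic, not the class-dependent statistics from the paper.
import numpy as np

C, gL, EL = 200e-12, 10e-9, -70e-3      # capacitance (F), leak (S), rest (V)
VT, DT = -50e-3, 2e-3                   # threshold and slope factor (V)
V_reset, t_ref = -65e-3, 4e-3           # reset potential (V), refractory period (s)
dt, T, I = 0.1e-3, 1.0, 0.35e-9         # time step (s), duration (s), input current (A)

V, refractory, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    t = step * dt
    if refractory > 0:                  # hold the neuron during the refractory period
        refractory -= dt
        continue
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) + I) / C
    V += dt * dV
    if V > 0.0:                         # numerical spike criterion
        spikes.append(t)
        V, refractory = V_reset, t_ref
print(f"{len(spikes)} spikes in {T:.1f} s -> rate ~ {len(spikes) / T:.1f} Hz")
```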

  13. Paleoclimate variability during the Blake geomagnetic excursion (MIS 5d) deduced from a speleothem record

    NASA Astrophysics Data System (ADS)

    Rossi, Carlos; Mertz-Kraus, Regina; Osete, María-Luisa

    2014-10-01

    To evaluate possible connections between climate and the Earth's magnetic field, we examine paleoclimate proxies in a stalagmite (PA-8) recording the Blake excursion (˜112-˜116.4 ka) from Cobre cave (N Spain). Trace element, δ13C, δ18O, δ234U, fluorescent lamination, growth rate, and paleomagnetic records were synchronized using a floating lamina-counted chronology constrained by U-Th dates, providing a high-resolution multi-proxy paleoclimate record for MIS 5d. The alpine cave setting and the combination of proxies contributed to improve the confidence of the paleoclimatic interpretation. Periods of relatively warm and humid climate likely favored forest development and resulted in high speleothem growth rates, arguably annual fluorescent laminae, low δ13C and [Mg], and increased [Sr] and [Ba]. Colder periods limited soil activity and drip water availability, leading to reduced speleothem growth, poor development of fluorescent lamination, enhanced water-rock interaction leading to increased [Mg], δ13C, and δ234U, and episodic flooding. In the coldest and driest period recorded, evaporation caused simultaneous 18O and 13C enrichments and perturbed the trace element patterns. The Blake excursion took place in a relatively warm interstadial at the inception of the Last Glacial period, but during a global cooling trend recorded in PA-8 by an overall decrease of δ18O and growth rate and increasing [Mg]. That trend culminated in the cessation of growth between ˜112 and ˜101 ka likely due to the onset of local glaciation correlated with Greenland stadial 25. That trend is consistent with a link between low geomagnetic intensity and climate cooling, but it does not prove it. Shorter term changes in relative paleointensity (RPI) relate to climate changes recorded in PA-8, particularly a prominent RPI low from ˜114.5 to ˜113 ka coincident with a significant cooling indicated by all proxy records, suggesting a link between geomagnetic intensity and climate at millennial

  14. "no snow - no skiing excursion - consequences of climatic change?"

    NASA Astrophysics Data System (ADS)

    Neunzig, Thilo

    2014-05-01

    Climatology and climate change have become central topics in Geography at our school. Because of that we set up a climatological station at our school. The data are an important basis for observing sudden changes in the weather. The present winter (2013/2014) shows the relevance of climate change in Alzey / Germany. In winter many students think of the yearly skiing trip to Schwaz / Austria which is part of our school programme. Due to that the following questions arise: Will skiing still be possible if climate change accelerates? How are the skiing regions in the Alps going to change? What will happen in about 20 years? How does artificial snow change the landscape and the skiing sport? Students have to be aware of the ecological damage of skiing trips. Each class has to come up with a concept for how these trips can be as environmentally friendly as possible: - the trip is for a restricted number of students only (year 8 only) - a small skiing region is chosen which is not overcrowded - snow has to be guaranteed in the ski area to avoid the production of artificial snow (avoidance of high water consumption) - the bus arrives with a class and returns with the one that had been there before. These are but a few ideas of the students to make their trip as environmentally friendly as possible. What remains open is what is going to happen in the future. What will be the effect of climate change on skiing regions in the low mountain ranges? How is the average winter temperature going to develop? Are there possibilities for summer tourism (e.g. hiking) instead of skiing in winter? The students are going to try to find answers to these questions, which will be presented on a poster at the GIFT Workshop in Vienna.

  15. Modeling the Behavior of an Underwater Acoustic Relative Positioning System Based on Complementary Set of Sequences

    PubMed Central

    Aparicio, Joaquín; Jiménez, Ana; Álvarez, Fernando J.; Ureña, Jesús; De Marziani, Carlos; Diego, Cristina

    2011-01-01

    The great variability usually found in underwater media makes modeling a challenging task, but helpful for better understanding or predicting the performance of future deployed systems. In this work, an underwater acoustic propagation model is presented. This model obtains the multipath structure by means of the ray tracing technique. Using this model, the behavior of a relative positioning system is presented. One of the main advantages of relative positioning systems is that only the distances between all the buoys are needed to obtain their positions. In order to obtain the distances, the propagation times of acoustic signals coded by Complementary Set of Sequences (CSS) are used. In this case, the arrival instants are obtained by means of correlation processes. The distances are then used to obtain the position of the buoys by means of the Multidimensional Scaling Technique (MDS). As an early example of an application using this relative positioning system, a tracking of the position of the buoys at different times is performed. With this tracking, the surface current of a particular region could be studied. The performance of the system is evaluated in terms of the distance from the real position to the estimated one. PMID:22247661
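
    The last step described above, turning inter-buoy distances into relative positions, can be illustrated with classical multidimensional scaling; in the sketch below the distances come from made-up coordinates plus noise rather than from CSS-coded acoustic travel times.

```python
# Sketch of the positioning step: recover relative buoy positions from a matrix
# of inter-buoy distances with classical MDS. Distances here come from made-up
# positions plus noise, not from acoustic travel-time measurements.
import numpy as np

rng = np.random.default_rng(1)
true_xy = np.array([[0.0, 0.0], [80.0, 10.0], [40.0, 60.0], [-30.0, 45.0]])
D = np.linalg.norm(true_xy[:, None, :] - true_xy[None, :, :], axis=-1)
D += rng.normal(scale=0.5, size=D.shape)      # simulated ranging noise
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

def classical_mds(D, dim=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]           # largest eigenvalues first
    return V[:, idx] * np.sqrt(w[idx])

xy = classical_mds(D)
print(np.round(xy, 1))   # positions up to rotation, reflection and translation
```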

  16. Collaborative effects-based planning using adversary models and target set optimization

    NASA Astrophysics Data System (ADS)

    Pioch, Nicholas J.; Daniels, Troy; Pielech, Bradford

    2004-08-01

    The Strategy Development Tool (SDT), sponsored by AFRL-IFS, supports effects-based planning at multiple levels of war through three core capabilities: plan authoring, center of gravity (COG) modeling and analysis, and target system analysis. This paper describes recent extensions to all three of these capabilities. The extended plan authoring subsystem supports collaborative planning in which a user delegates elaboration of objectives to other registered users. A suite of collaboration tools allows planners to assign planning tasks, submit plan fragments, and review submitted plans, while a collaboration server transparently handles message routing and persistence. The COG modeling subsystem now includes an enhanced adversary modeling tool that provides a lightweight ontology for building temporal causal models relating enemy goals, beliefs, actions, and resources across multiple types of COGs. Users may overlay friendly interventions, analyze their impact on enemy COGs, and automatically incorporate the causal chains stemming from the best interventions into the current plan. Finally, the target system analysis subsystem has been extended with option generation tools that use network-based optimization algorithms to select candidate target set options to achieve specified effects.

  17. A moist Boussinesq shallow water equations set for testing atmospheric models

    NASA Astrophysics Data System (ADS)

    Zerroukat, M.; Allen, T.

    2015-06-01

    The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models with the test suite of Williamson et al. [1] being used extensively for validating new schemes and assessing their accuracy. However the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allow the density to vary slightly with temperature. This results in extra (or buoyancy) terms for the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean-flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics-physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies and the results suggest quite realistic interaction between the dynamics and physics and in particular the generation of cloud and rain.

  18. Data Model for Astronomical DataSet Characterisation Version 1.12

    NASA Astrophysics Data System (ADS)

    Louys, Mireille; Richards, Anita; Bonnarel, François; Micol, Alberto; Chilingarian, Igor; McDowell, Jonathan; IVOA Data Model Working Group

    2007-11-01

    This document defines the high level metadata necessary to describe the physical parameter space of observed or simulated astronomical data sets, such as 2D-images, data cubes, X-ray event lists, IFU data, etc. The Characterisation data model is an abstraction which can be used to derive a structured description of any relevant data and thus to facilitate its discovery and scientific interpretation. The model aims at facilitating the manipulation of heterogeneous data in any VO framework or portal. A VO Characterisation instance can include descriptions of the data axes, the range of coordinates covered by the data, and details of the data sampling and resolution on each axis. These descriptions should be in terms of physical variables, independent of instrumental signatures as far as possible. Implementations of this model have been described in the IVOA Note available at: http://www.ivoa.net/Documents/Notes/ImplemtationCharacDM/ImplementationCharacterisation-20070813.pdf Utypes derived from the UML model are listed and commented in the following IVOA Note: http://www.ivoa.net/Documents/latest/UtypeListCharacterisationDM.html

  19. Modeling the behavior of an underwater acoustic relative positioning system based on complementary set of sequences.

    PubMed

    Aparicio, Joaquín; Jiménez, Ana; Alvarez, Fernando J; Ureña, Jesús; De Marziani, Carlos; Diego, Cristina

    2011-01-01

    The great variability usually found in underwater media makes modeling a challenging task, but helpful for better understanding or predicting the performance of future deployed systems. In this work, an underwater acoustic propagation model is presented. This model obtains the multipath structure by means of the ray tracing technique. Using this model, the behavior of a relative positioning system is presented. One of the main advantages of relative positioning systems is that only the distances between all the buoys are needed to obtain their positions. In order to obtain the distances, the propagation times of acoustic signals coded by Complementary Set of Sequences (CSS) are used. In this case, the arrival instants are obtained by means of correlation processes. The distances are then used to obtain the position of the buoys by means of the Multidimensional Scaling Technique (MDS). As an early example of an application using this relative positioning system, a tracking of the position of the buoys at different times is performed. With this tracking, the surface current of a particular region could be studied. The performance of the system is evaluated in terms of the distance from the real position to the estimated one. PMID:22247661

  20. The Moho in extensional tectonic settings: insights from thermo-mechanical models

    NASA Astrophysics Data System (ADS)

    Cloetingh, Sierd; Burov, Evgenii; Liviu, Matenco

    2013-04-01

    We review consequences for the crustal and lithospheric configuration of different models for the thermo-mechanical evolution of continental lithosphere in extensional tectonic settings. The lithospheric memory is key for the interplay of lithospheric stresses and rheological structure of the extending lithosphere and for its later tectonic reactivation. Other important factors are the temporal and spatial migration of extension and the interplay of rifting and surface processes. The mode of extension and the duration of the rifting phase required to lead to continental break-up is to a large extent controlled by the interaction of the extending plate with slab dynamics. We compare predictions from numerical models with observational constraints from a number of rifted back-arc basin settings and intraplate domains at large distance from convergent plate boundaries. We discuss the record of vertical motions during and after rifting in the context of stretching models developed to quantify rifted basin formation. The finite strength of the lithosphere has an important effect on the formation of extensional basins. This applies both to the geometry of the basin shape as well as to the record of vertical motions during and after rifting. We demonstrate a strong connection between the bulk rheological properties of Europe's lithosphere and the evolution of some of Europe's main rifts and back-arc system. The thermomechanical structure of the lithosphere has a major impact on continental breakup and associated basin migration processes, with direct relationships between rift duration and extension velocities, thermal evolution, and the role of mantle plumes. Compressional reactivation has important consequences for post-rift inversion, borderland uplift, and denudation, as illustrated by polyphase deformation of extensional back-arc basins in the Black Sea and the Pannonian Basin.

  1. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  2. Generating 3D anatomically detailed models of the retina from OCT data sets: implications for computational modelling

    NASA Astrophysics Data System (ADS)

    Shalbaf, Farzaneh; Dokos, Socrates; Lovell, Nigel H.; Turuwhenua, Jason; Vaghefi, Ehsan

    2015-12-01

    Retinal prostheses have been proposed to restore vision for those suffering from retinal pathologies that mainly affect the photoreceptor layer but leave the inner retina intact. Prior to costly and risky experimental studies, computational modelling of the retina can help to optimize the device parameters and enhance the outcomes. Here, we developed an anatomically detailed computational model of the retina based on OCT data sets. Consecutive OCT images of an individual were segmented to provide a 3D representation of the retina in the form of finite elements. Thereafter, the electrical properties of the retina were modelled by solving partial differential equations on the 3D mesh. Different electrode configurations, namely bipolar and hexapolar configurations, were implemented and the results were compared with previous computational and experimental studies. Furthermore, the possible effects of the curvature of the retinal layers on current steering through the retina were proposed and linked to clinical observations.

  3. Mono Lake Excursion as a Chronologic Marker in the U.S. Great Basin

    NASA Astrophysics Data System (ADS)

    Liddicoat, J. C.; Coe, R. S.; Knott, J. R.

    2008-05-01

    Nevada, Utah, and California east of the Sierra Nevada are in the Great Basin physiographic province of western North America. During periods of the Pleistocene, Lake Bonneville and Lake Lahontan covered valleys in Utah and Nevada, respectively, and other lakes such as Lake Russell in east-central California did likewise (Feth, 1964). Now dry except for its remnant, Mono Lake, Lake Russell provides an opportunity to study behavior of Earth's past magnetic field in lacustrine sediments that are exposed in natural outcrops. The sediments record at least 30,000 years of paleomagnetic secular variation (Liddicoat, 1976; Zimmerman et al., 2006) and have been of particular interest since the discovery of the Mono Lake Excursion (MLE) by Denham and Cox (1971) because the field behavior can be documented at numerous sites around Mono Lake (Liddicoat and Coe, 1979; Liddicoat, 1992; Coe and Liddicoat, 1994) and on Paoha Island in the lake. Moreover, there have been recent attempts to date the excursion (Kent et al., 2002; Benson et al., 2003) more accurately and use the age and relative field intensity in paleoclimate research (Zimmerman et al., 2006). It has been proposed that the excursion in the Mono Basin might be older than originally believed (Denham and Cox, 1971; Liddicoat and Coe, 1979) and instead be the Laschamp Excursion (LE), ~ 40,000 yrs B.P. (Guillou et al., 2004), on the basis of 14C and 40Ar/39Ar dates (Kent et al., 2002) and the relative paleointensity record (Zimmerman et al., 2006) for the excursion in the Mono Basin. On the contrary, we favor a younger age for the excursion, ~ 32,000 yrs B.P., using the relative paleointensity at the Mono and Lahontan basins and 14C dates from the Lahontan Basin (Benson et al., 2003). The age of ~ 32,000 yrs B.P. is in accord with the age (32,000-34,000 yrs B.P.) reported by Channell (2006) for the MLE at Ocean Drilling Program (ODP) Site 919 in the Irminger Basin in the North Atlantic Ocean, which contains as well an

  4. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems.

    PubMed

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have changed modern warfare. The military forces of many countries have partially replaced real training drills with training simulation systems to achieve combat readiness. However, many types of training simulation systems are used in military settings. In addition, differences in system set-up time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address these problems, this study integrated the analytic hierarchy process (AHP), soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance-performance analysis was adopted to examine the influence of cost savings and training safety of training simulation systems. The findings of this study are expected to facilitate the application of military training simulation systems, avoid wasting resources (e.g., low utility and idle time), and provide data for subsequent applications and analysis. To verify the proposed method, numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance rankings, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system. PMID:27598390
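
    The analytic hierarchy process ingredient of such an evaluation can be illustrated with a small sketch. The criteria names echo the abstract's list, but the pairwise judgments and the geometric-mean prioritization below are generic illustrative assumptions, not the integrated soft-set/fuzzy-linguistic procedure of the study.

      # Hedged sketch: derive AHP criterion weights and a consistency ratio from a
      # hypothetical pairwise comparison matrix. All judgment values are illustrative.
      import numpy as np

      criteria = ["set-up time", "functions", "environment", "operator competency"]
      # A[i, j] = how much more important criterion i is than criterion j (Saaty 1-9 scale).
      A = np.array([
          [1.0, 1/3, 2.0, 1/2],
          [3.0, 1.0, 4.0, 2.0],
          [1/2, 1/4, 1.0, 1/3],
          [2.0, 1/2, 3.0, 1.0],
      ])

      # Geometric-mean prioritization: weight_i is proportional to the row geometric mean.
      gm = A.prod(axis=1) ** (1.0 / A.shape[1])
      weights = gm / gm.sum()

      # Consistency check via the principal eigenvalue (lambda_max).
      lam_max = np.real(np.linalg.eigvals(A)).max()
      n = A.shape[0]
      CI = (lam_max - n) / (n - 1)
      RI = 0.90                      # commonly tabulated random index for n = 4
      CR = CI / RI                   # values below ~0.1 are usually deemed consistent

      for name, w in zip(criteria, weights):
          print(f"{name:22s} weight = {w:.3f}")
      print(f"consistency ratio = {CR:.3f}")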

  5. Cerium anomaly across the mid-Tournaisian carbon isotope excursion (TICE)

    NASA Astrophysics Data System (ADS)

    Jiang, G.; Morales, D. C.; Maharjan, D. K.

    2015-12-01

    The Early Mississippian (ca. 359-345 Ma) represents one of the most important greenhouse-icehouse climate transitions in Earth history. Closely associated with this critical transition is a prominent positive carbon isotope excursion (δ13C ≥ +5‰) that has been documented from numerous stratigraphic successions across the globe. This δ13C excursion, informally referred to as the TICE (mid-Tournaisian carbon isotope excursion) event, has been interpreted as resulting from enhanced organic carbon burial, with anticipated outcomes including the lowering of atmospheric CO2 and global cooling, the growth of continental ice sheets and sea-level fall, and an increase in ocean oxygenation and ocean redox changes. The causal relationship between these events has been addressed from various perspectives but not yet clearly demonstrated. To document the potential redox change associated with this perturbation of the carbon cycle, we have analyzed rare earth elements (REE) and trace elements across the TICE in two sections along a shallow-to-deep water transect in the southern Great Basin (Utah and Nevada), USA. In both sections, the REE data show a significant positive cerium (Ce) anomaly (Ce/Ce* = Ce/(0.5La + 0.5Pr)). Prior to the positive δ13C shift, most Ce/Ce* values are around 0.3 (between 0.2 and 0.4). Across the δ13C peak, Ce/Ce* values increase up to 0.87, followed by a decrease back to 0.2-0.3 after the δ13C excursion (Figure 1). The positive Ce anomaly is best interpreted as recording expansion of the oxygen minimum zone and anoxia resulting from increased primary production. This is consistent with a significant increase in nitrogen isotopes (δ15N) across the δ13C peak. Integration of the carbon, nitrogen, and REE data demonstrates a coupled Earth-system response to the perturbation of the Early Mississippian carbon cycle.
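
    The cerium anomaly defined above can be computed directly from normalized La, Ce and Pr concentrations. The sketch below uses invented sample concentrations and approximate shale (PAAS) normalization constants; it only illustrates the Ce/Ce* formula quoted in the abstract, not the authors' full REE workflow.

      # Hedged sketch: cerium anomaly Ce/Ce* = Ce_N / (0.5*La_N + 0.5*Pr_N),
      # where _N denotes shale-normalized concentrations. Sample values are invented.
      PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83}   # approximate PAAS values (ppm); assumption

      def ce_anomaly(la_ppm, ce_ppm, pr_ppm):
          la_n = la_ppm / PAAS["La"]
          ce_n = ce_ppm / PAAS["Ce"]
          pr_n = pr_ppm / PAAS["Pr"]
          return ce_n / (0.5 * la_n + 0.5 * pr_n)

      # Hypothetical samples bracketing the values quoted in the abstract:
      samples = {
          "pre-excursion sample": (12.0, 8.0, 2.9),    # yields Ce/Ce* near 0.31
          "delta13C peak sample": (10.0, 18.0, 2.4),   # yields Ce/Ce* near 0.85
      }
      for name, (la, ce, pr) in samples.items():
          print(f"{name}: Ce/Ce* = {ce_anomaly(la, ce, pr):.2f}")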

  6. Diaphragm dome surface segmentation in CT data sets: a 3D active appearance model approach

    NASA Astrophysics Data System (ADS)

    Beichel, Reinhard; Gotschuli, Georg; Sorantin, Erich; Leberl, Franz W.; Sonka, Milan

    2002-05-01

    Knowledge about the location of the diaphragm dome surface, which separates the lungs and the heart from the abdominal cavity, is of vital importance for applications like automated segmentation of adjacent organs (e.g., liver) or functional analysis of the respiratory cycle. We present a new 3D Active Appearance Model (AAM) approach to segmentation of the top layer of the diaphragm dome. The 3D AAM consists of three parts: a 2D closed curve (reference curve), an elevation image and texture layers. The first two parts combined represent 3D shape information and the third part image intensity of the diaphragm dome and the surrounding layers. Differences in height between dome voxels and a reference plane are stored in the elevation image. The reference curve is generated by a parallel projection of the diaphragm dome outline in the axial direction. Landmark point placement is only done on the (2D) reference curve, which can be seen as the bounding curve of the elevation image. Matching is based on a gradient-descent optimization process and uses image intensity appearance around the actual dome shape. Results achieved in 60 computer generated phantom data sets show a high degree of accuracy (positioning error -0.07+/-1.29 mm). Validation using real CT data sets yielded a positioning error of -0.16+/-2.95 mm. Additional training and testing on in-vivo CT image data is ongoing.

  7. Model for the fast estimation of basis set superposition error in biomolecular systems

    PubMed Central

    Faver, John C.; Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    Basis set superposition error (BSSE) is a significant contributor to errors in quantum-based energy functions, especially for large chemical systems with many molecular contacts such as folded proteins and protein-ligand complexes. While the counterpoise method has become a standard procedure for correcting intermolecular BSSE, most current approaches to correcting intramolecular BSSE are simply fragment-based analogues of the counterpoise method which require many (two times the number of fragments) additional quantum calculations in their application. We propose that magnitudes of both forms of BSSE can be quickly estimated by dividing a system into interacting fragments, estimating each fragment's contribution to the overall BSSE with a simple statistical model, and then propagating these errors throughout the entire system. Such a method requires no additional quantum calculations, but rather only an analysis of the system's interacting fragments. The method is described herein and is applied to a protein-ligand system, a small helical protein, and a set of native and decoy protein folds. PMID:22010701
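
    The kind of fragment-based error accumulation described above can be sketched as follows; the per-contact error estimate and the simple additive propagation used here are placeholder assumptions standing in for the authors' fitted statistical model.

      # Hedged sketch: accumulate an estimated BSSE correction for a system from
      # per-fragment-pair contributions. The estimator (a constant error per close
      # heavy-atom contact) is a stand-in assumption, not the paper's fitted model.
      from itertools import combinations
      import numpy as np

      def pair_bsse_estimate(frag_a, frag_b, cutoff=4.0, kcal_per_contact=0.12):
          """Estimate the BSSE (kcal/mol) of one fragment pair by counting close contacts."""
          contacts = 0
          for ra in frag_a:
              for rb in frag_b:
                  if np.linalg.norm(np.asarray(ra) - np.asarray(rb)) < cutoff:
                      contacts += 1
          return kcal_per_contact * contacts

      def total_bsse_estimate(fragments):
          """Propagate pairwise estimates over all interacting fragment pairs."""
          return sum(pair_bsse_estimate(a, b) for a, b in combinations(fragments, 2))

      # Toy system: three fragments given as lists of heavy-atom coordinates (angstroms).
      fragments = [
          [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)],
          [(0.0, 3.0, 0.0), (1.5, 3.0, 0.0)],
          [(8.0, 0.0, 0.0)],                    # far away: contributes no contacts
      ]
      print(f"estimated BSSE ~ {total_bsse_estimate(fragments):.2f} kcal/mol")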

  8. Low contrast detectability in CT for human and model observers in multi-slice data sets

    NASA Astrophysics Data System (ADS)

    Ba, Alexandre; Racine, Damien; Ott, Julien G.; Verdun, Francis R.; Kobbe-Schmidt, Sabine; Eckstein, Miguel P.; Bochud, Francois O.

    2015-03-01

    Task-based medical image quality is often assessed by model observers for single-slice images. The goal of the study was to determine if model observers can predict human detection performance of low contrast signals in CT for clinical multi-slice (ms) images. We collected 24 different data subsets from a low contrast phantom: 3 dose levels (40, 90, 150 mAs), 4 signals (6 and 8 mm diameter; 10 and 20 HU at 120 kV) and 2 reconstruction algorithms (FBP and iterative (IR)). Images were assessed by human and model observers in 4-alternative forced choice (4AFC) experiments with ms data sets in a signal-known-exactly (SKE) paradigm. Model observers with single (msCHOa) and multiple (msCHOb) templates were implemented in a train-and-test analysis with Dense Difference of Gaussian (DDoG) and Gabor spatial channels. For human observers, we found that percent correct increased with dose and was higher for iteratively reconstructed images than FBP in all investigated conditions. All implemented model observers overestimated human performance in every condition except one (6 mm and 10 HU) for msCHOa and msCHOb with Gabor channels. Internal noise could be implemented and good agreement was found, but this necessitates independent fits for each reconstruction method. Generally, msCHOb shows higher detection performance than msCHOa with both types of channels. Gabor channels were less efficient than DDoG in this context. These results allow further developments of 3D analysis techniques for low contrast CT.
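
    The kind of channelized model observer used in such studies can be sketched as follows. The image size, channel parameters, signal, and noise model below are illustrative assumptions, and the single-slice, two-class Hotelling template shown here is a simplification of the multi-slice msCHO variants and 4AFC analysis of the study.

      # Hedged sketch: a single-slice channelized Hotelling observer (CHO) with
      # difference-of-Gaussians channels, applied to synthetic noisy disc images.
      # Parameters and noise are illustrative; this is not the msCHO of the study.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 64
      y, x = np.mgrid[:N, :N] - N / 2
      r = np.hypot(x, y)

      def dog_channel(sigma, ratio=1.67):
          c = np.exp(-r**2 / (2 * sigma**2)) - np.exp(-r**2 / (2 * (ratio * sigma)**2))
          return (c / np.linalg.norm(c)).ravel()

      channels = np.stack([dog_channel(s) for s in (1.5, 3.0, 6.0, 12.0)], axis=1)

      signal = 10.0 * (r < 4)                       # low-contrast disc, 4-pixel radius
      def sample(with_signal, n=200):
          imgs = rng.normal(0.0, 25.0, size=(n, N * N))      # white-noise background
          if with_signal:
              imgs += signal.ravel()
          return imgs @ channels                             # channel responses (n, 4)

      v1, v0 = sample(True), sample(False)
      S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))                # average channel covariance
      w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))        # Hotelling template
      t1, t0 = v1 @ w, v0 @ w
      d_prime = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
      print(f"channelized Hotelling detectability d' = {d_prime:.2f}")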

  9. Review of the recording and age of the Mono Lake Excursion

    NASA Astrophysics Data System (ADS)

    Coe, R.; Liddicoat, J.

    2009-04-01

    Among the brief departures from gradual, long-term behaviour of the palaeomagnetic field in the Brunhes Normal Chron that reached opposite polarity or have a Virtual Geomagnetic Pole deep in the southern hemisphere, the first to be reported is the Laschamp Excursion (LE) in volcanic rocks in the Massif Central in France (Bonhommet and Zahringer, 1969). They originally believed it occurred between about 9,000 to 20,000 years before present, but it is now assigned an age of about 40,000 years B.P. (Guillou et al., 2004). Denham and Cox (1971) unsuccessfully sought the LE in exposed lake sediments that seemed to span that interval in the Mono Basin in the western Great Basin of the U.S., but instead encountered anomalous field behaviour that is called the Mono Lake Excursion (MLE) (Liddicoat and Coe, 1979). As a tribute to Norbert Bonhommet, who assisted us in our initial field work in the Mono Basin and shared a long-standing interest in the LE and MLE, we will review the palaeomagnetic behaviour and age of the MLE in the Mono Basin and elsewhere, for which there are nearly 20 reports of its occurrence globally, and evaluate the recent suggestion that the excursion at Mono Lake and the LE are the same.

  10. The Laschamp geomagnetic excursion featured in nitrate record from EPICA-Dome C ice core

    PubMed Central

    Traversi, R.; Becagli, S.; Poluianov, S.; Severi, M.; Solanki, S. K.; Usoskin, I. G.; Udisti, R.

    2016-01-01

    Here we present the first direct comparison of cosmogenic 10Be and chemical species over the period 38–45.5 kyr BP, spanning the Laschamp geomagnetic excursion, from the EPICA-Dome C ice core. A principal component analysis (PCA) allowed different components to be grouped as a function of the main sources, transport and deposition processes affecting the atmospheric aerosol at Dome C. Moreover, a wavelet analysis highlighted the high coherence and in-phase relationship between 10Be and nitrate at this time. The evident preferential association of 10Be with nitrate rather than with other chemical species was ascribed to the presence of a distinct source, here labelled as “cosmogenic”. Both the PCA and wavelet analyses ruled out a significant role of calcium in driving the 10Be and nitrate relationship, which is particularly relevant for a plateau site such as Dome C, especially in the glacial period during which the Laschamp excursion took place. The evidence that the nitrate record from the EDC ice core is able to capture the Laschamp event hints at the possibility of using this marker for studying galactic cosmic ray flux variations, and thus also major geomagnetic field excursions, at pluri-centennial to millennial time scales, opening up new perspectives in paleoclimatic studies. PMID:26819064
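
    As a generic illustration of the PCA grouping step described above, the sketch below applies a principal component analysis to a small synthetic table of ice-core species concentrations; the species list, correlations, and values are invented and merely stand in for the EPICA-Dome C records.

      # Hedged sketch: PCA on a synthetic ice-core chemistry table to group species
      # that co-vary (e.g., a "cosmogenic" component shared by 10Be and nitrate).
      # All numbers are invented; this is not the EPICA-Dome C data set.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 300                                    # synthetic depth samples
      cosmogenic = rng.normal(size=n)            # hidden driver shared by 10Be and nitrate
      dust = rng.normal(size=n)                  # hidden driver for crustal species

      data = np.column_stack([
          1.0 * cosmogenic + 0.3 * rng.normal(size=n),   # 10Be
          0.9 * cosmogenic + 0.4 * rng.normal(size=n),   # nitrate
          1.0 * dust + 0.3 * rng.normal(size=n),         # calcium
          0.8 * dust + 0.5 * rng.normal(size=n),         # another crustal tracer (illustrative)
      ])
      species = ["10Be", "nitrate", "calcium", "crustal"]

      corr = np.corrcoef(data.T)                 # correlation matrix of the four species
      eigval, eigvec = np.linalg.eigh(corr)
      order = np.argsort(eigval)[::-1]
      loadings = eigvec[:, order[:2]]            # loadings on the two leading components

      for name, load in zip(species, loadings):
          print(f"{name:8s} PC1 = {load[0]:+.2f}  PC2 = {load[1]:+.2f}")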

  11. Comparison of Dynamic Balance in Collegiate Field Hockey and Football Players Using Star Excursion Balance Test

    PubMed Central

    Bhat, Rashi; Moiz, Jamal Ali

    2013-01-01

    Purpose The preliminary study aimed to compare dynamic balance between collegiate athletes competing or training in football and hockey using the star excursion balance test. Methods A total of thirty university-level players, football (n = 15) and field hockey (n = 15), participated in the study. Dynamic balance was assessed using the star excursion balance test. The testing grid consists of 8 lines, each 120 cm in length, extending from a common point at 45° increments. The subjects were instructed to maintain a stable single-leg stance on the test leg with shoes off and to reach for maximal distance with the other leg in each of the 8 directions. A pencil was used to mark and read the distance to which each subject's foot reached. The normalized leg reach distances in each direction were summed for both limbs, and the total sum of the mean of summed normalized distances of both limbs was calculated. Results There was no significant difference in any of the directions of the star excursion balance test scores between the two groups. Additionally, the difference in composite reach distances between the groups was also non-significant (P=0.5). However, the posterior (P=0.05) and lateral (P=0.03) normalized reach distances were significantly greater in field hockey players. Conclusion Field hockey players and football players did not differ in terms of dynamic balance. PMID:24427482

  12. Effects of altered responsibility, cognitive set, and modeling on physical aggression and deindividuation.

    PubMed

    Diener, E; Dineen, J; Endresen, K; Beaman, A L; Fraser, S C

    1975-02-01

    This laboratory investigation using 64 college students as subjects assessed the role of three disinhibiting variables in producing both physical aggression and an internal state of deindividuation. Altered responsibility, cognitive set, and modeling were manipulated in a factorial design, and all three variables significantly increased physical aggression. No interaction produced significant results. The increase due to altered responsibility and varying cognitions supports Zimbardo's theory of deindividuation, which relates certain input variables to wild, impulsive behavior. Questionnaire data indicated that the increase in aggression was not accompanied by internal mediational factors such as reduced self-awareness. It appears that disinhibiting forces may produce increases in antisocial behavior without necessarily producing a deindividuated internal state. PMID:1123716

  13. A two-dimensional volatility basis set - Part 3: Prognostic modeling and NOx dependence

    NASA Astrophysics Data System (ADS)

    Chuang, W. K.; Donahue, N. M.

    2015-06-01

    When NOx is introduced to organic emissions, aerosol production is sometimes, but not always, reduced. Under certain conditions, these interactions will instead increase aerosol concentrations. We expanded the two-dimensional volatility basis set (2-D-VBS) to include the effects of NOx on aerosol formation. This includes the formation of organonitrates, where the addition of a nitrate group contributes to a decrease of 2.5 orders of magnitude in volatility. With this refinement, we model outputs from experimental results, such as the atomic N : C ratio, organonitrate mass, and nitrate fragments in AMS measurements. We also discuss the mathematical methods underlying the implementation of the 2-D-VBS and provide the complete code in the Supplemental material. A developer version is available on Bitbucket, an online community repository.
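
    The organonitrate volatility shift mentioned above can be illustrated with a small piece of basis-set bookkeeping: each product is binned on a log10 saturation-concentration (C*) axis, and adding a nitrate group moves it 2.5 decades toward lower volatility. The bin range and example species below are invented and do not reproduce the 2-D-VBS code released with the paper.

      # Hedged sketch: shift species along a 1-D log10(C*) volatility axis by
      # -2.5 decades per added nitrate group, then snap back onto integer bins.
      # Bin range and example species are illustrative, not the paper's configuration.
      import numpy as np

      bins = np.arange(-2, 7)                 # log10 C* bins (ug m-3), assumed range
      NITRATE_SHIFT = -2.5                    # decades of volatility per nitrate group

      def rebin(log10_cstar):
          """Snap a (possibly shifted) log10 C* value onto the nearest basis-set bin."""
          return int(np.clip(np.rint(log10_cstar), bins[0], bins[-1]))

      species = {                             # name: (log10 C* before, nitrate groups added)
          "first-generation product": (4.0, 1),
          "later-generation product": (1.0, 1),
          "dinitrate (illustrative)": (3.0, 2),
      }
      for name, (c0, n_nitrate) in species.items():
          shifted = c0 + NITRATE_SHIFT * n_nitrate
          print(f"{name:28s} log10 C*: {c0:+.1f} -> {shifted:+.1f} (bin {rebin(shifted)})")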

  14. A two-dimensional volatility basis set - Part 3: Prognostic modeling and NOx dependence

    NASA Astrophysics Data System (ADS)

    Chuang, W. K.; Donahue, N. M.

    2016-01-01

    When NOx is introduced to organic emissions, aerosol production is sometimes, but not always, reduced. Under certain conditions, these interactions will instead increase aerosol concentrations. We expanded the two-dimensional volatility basis set (2D-VBS) to include the effects of NOx on aerosol formation. This includes the formation of organonitrates, where the addition of a nitrate group contributes to a decrease of 2.5 orders of magnitude in volatility. With this refinement, we model outputs from experimental results, such as the atomic N : C ratio, organonitrate mass, and nitrate fragments in Aerosol Mass Spectrometer (AMS) measurements. We also discuss the mathematical methods underlying the implementation of the 2D-VBS and provide the complete code in the Supplement. A developer version is available on Bitbucket, an online community repository.

  15. The simulation of a criticality accident excursion occurring in a simple fast metal system using the coupled neutronic-hydrodynamic method

    SciTech Connect

    Myers, W.L.

    1996-12-31

    Analysis of a criticality accident scenario occurring in a simple fast metal system using the coupled neutronic-hydrodynamic method is demonstrated by examining the last Godiva-I criticality accident. The basic tools and information for creating a coupled neutronic-hydrodynamic code are presented. Simplifying assumptions and approximations for creating an idealized model of the Godiva-I system are discussed. Estimates of the total energy generation and the maximum attainable kinetic energy yield are the most important results obtained from the code. With modifications, the methodology presented in this paper can be extended to analyze criticality accident excursions in other kinds of nuclear systems.
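
    A full coupled neutronic-hydrodynamic simulation is well beyond a short example, but the qualitative shape of a fast-metal power excursion can be sketched with point kinetics plus a simple energy-feedback term, used here only as a crude stand-in for the hydrodynamic (thermal-expansion) shutdown mechanism. Every parameter value below is invented for illustration and is not Godiva-I data.

      # Hedged sketch: point-kinetics power excursion with one delayed-neutron group
      # and a linear energy-feedback term, as a crude stand-in for the coupled
      # neutronic-hydrodynamic treatment. Every parameter value here is illustrative.
      beta   = 0.0065      # delayed-neutron fraction (illustrative)
      Lambda = 1.0e-8      # prompt-neutron generation time, s (fast metal system, assumed)
      lam    = 0.08        # one-group delayed precursor decay constant, 1/s (assumed)
      rho0   = 0.0075      # inserted reactivity, slightly above prompt critical (assumed)
      alpha  = 1.0e-5      # feedback: reactivity lost per unit of released fission energy

      dt, t_end = 1.0e-7, 2.0e-3
      n, c, energy = 1.0, beta / (Lambda * lam), 0.0   # start from equilibrium precursors
      peak_power = 0.0

      for _ in range(int(t_end / dt)):
          rho = rho0 - alpha * energy                  # energy deposition quenches the burst
          dn = ((rho - beta) / Lambda * n + lam * c) * dt
          dc = (beta / Lambda * n - lam * c) * dt
          n, c = n + dn, c + dc
          energy += n * dt                             # relative fission energy release
          peak_power = max(peak_power, n)

      print(f"peak relative power  : {peak_power:.3e}")
      print(f"relative energy yield: {energy:.3e}")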

  16. Modelling adult Aedes aegypti and Aedes albopictus survival at different temperatures in laboratory and field settings

    PubMed Central

    2013-01-01

    Background The survival of adult female Aedes mosquitoes is a critical component of their ability to transmit pathogens such as dengue viruses. One of the principal determinants of Aedes survival is temperature, which has been associated with seasonal changes in Aedes populations and limits their geographical distribution. The effects of temperature and other sources of mortality have been studied in the field, often via mark-release-recapture experiments, and under controlled conditions in the laboratory. Survival results differ and reconciling predictions between the two settings has been hindered by variable measurements from different experimental protocols, lack of precision in measuring survival of free-ranging mosquitoes, and uncertainty about the role of age-dependent mortality in the field. Methods Here we apply generalised additive models to data from 351 published adult Ae. aegypti and Ae. albopictus survival experiments in the laboratory to create survival models for each species across their range of viable temperatures. These models are then adjusted to estimate survival at different temperatures in the field using data from 59 Ae. aegypti and Ae. albopictus field survivorship experiments. The uncertainty at each stage of the modelling process is propagated through to provide confidence intervals around our predictions. Results Our results indicate that adult Ae. albopictus has higher survival than Ae. aegypti in the laboratory and field, however, Ae. aegypti can tolerate a wider range of temperatures. A full breakdown of survival by age and temperature is given for both species. The differences between laboratory and field models also give insight into the relative contributions to mortality from temperature, other environmental factors, and senescence and over what ranges these factors can be important. Conclusions Our results support the importance of producing site-specific mosquito survival estimates. By including fluctuating temperature regimes
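
    As a generic illustration of fitting a smooth survival-temperature relationship, the sketch below fits a smoothing spline to a small synthetic table of laboratory survival times; the numbers are invented, and the spline is a simple stand-in for the generalised additive models used in the study.

      # Hedged sketch: fit a smooth curve of median adult survival time versus
      # temperature to synthetic "laboratory" data, as a simple stand-in for the
      # generalised additive models used in the study. All numbers are invented.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      temp_c   = np.array([10, 14, 18, 22, 26, 30, 34, 38], dtype=float)
      survival = np.array([ 8, 18, 30, 38, 35, 27, 14,  4], dtype=float)  # days, invented

      spline = UnivariateSpline(temp_c, survival, k=3, s=20.0)   # smoothing cubic spline

      for t in np.linspace(temp_c.min(), temp_c.max(), 8):
          print(f"{t:4.1f} C -> predicted survival ~ {spline(t):5.1f} days")

      # A field adjustment (as in the paper) would then rescale this laboratory curve
      # using mark-release-recapture estimates; that step is omitted here.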

  17. Computer Modeling of Electrostatic Aggregation of Granular Materials in Planetary and Astrophysical Settings

    NASA Technical Reports Server (NTRS)

    Marshall, J.; Sauke, T.

    1999-01-01

    Electrostatic forces strongly influence the behavior of granular materials in both dispersed (cloud) systems and semi-packed systems. These forces can cause aggregation or dispersion of particles and are important in a variety of astrophysical and planetary settings. There are also many industrial and commercial settings where granular matter and electrostatics become partners for both good and bad. This partnership is important for human exploration on Mars where dust adheres to suits, machines, and habitats. Long-range Coulombic (electrostatic) forces, as opposed to contact-induced dipoles and van der Waals attractions, are generally regarded as resulting from net charge. We have proposed that in addition to net charge interactions, randomly distributed charge carriers on grains will result in a dipole moment regardless of any net charge. If grains are unconfined, or fluidized, they will rotate so that the dipole always induces attraction between grains. Aggregates are readily formed, and Coulombic polarity resulting from the dipole produces end-to-end stacking of grains to form filamentary aggregates. This has been demonstrated in USML experiments on Space Shuttle where microgravity facilitated the unmasking of static forces. It has also been demonstrated in a computer model using grains with charge carriers of both sign. Model results very closely resembled micro-g results with actual sand grains. Further computer modeling of the aggregation process has been conducted to improve our understanding of the aggregation process, and to provide a predictive tool for microgravity experiments slated for Space Station. These experiments will attempt to prove the dipole concept as outlined above. We have considerably enhanced the original computer model: refinements to the algorithm have improved the fidelity of grain behavior during grain contact, special attention has been paid to simulation time steps to enable establishment of a meaningful, quantitative time axis
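
    The claim that randomly distributed charge carriers give a grain a dipole moment even at zero net charge can be checked with a few lines of code; the grain radius, carrier count, and charge placement below are arbitrary illustrative choices.

      # Hedged sketch: a spherical grain carrying equal numbers of + and - charge
      # carriers at random surface sites has zero net charge but, in general, a
      # non-zero dipole moment p = sum(q_i * r_i). All parameter values are arbitrary.
      import numpy as np

      rng = np.random.default_rng(7)
      radius = 100e-6                      # grain radius, m (illustrative)
      n_pairs = 500                        # number of +/- carrier pairs
      e = 1.602e-19                        # elementary charge, C

      def random_surface_points(n):
          v = rng.normal(size=(n, 3))
          return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

      pos = random_surface_points(n_pairs)          # sites of + carriers
      neg = random_surface_points(n_pairs)          # sites of - carriers
      charges = np.concatenate([np.full(n_pairs, +e), np.full(n_pairs, -e)])
      sites = np.vstack([pos, neg])

      net_charge = charges.sum()
      dipole = (charges[:, None] * sites).sum(axis=0)

      print(f"net charge        : {net_charge:+.2e} C (cancels by construction)")
      print(f"dipole moment |p| : {np.linalg.norm(dipole):.2e} C m (non-zero in general)")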

  18. Experimental modelling of tectonics-erosion-sedimentation interactions in compressional, extensional, and strike-slip settings

    NASA Astrophysics Data System (ADS)

    Graveleau, Fabien; Strak, Vincent; Dominguez, Stéphane; Malavieille, Jacques; Chatton, Marina; Manighetti, Isabelle; Petit, Carole

    2015-09-01

    Tectonically controlled landforms develop morphologic features that provide useful markers to investigate crustal deformation and relief growth dynamics. In this paper, we present results of morphotectonic experiments obtained with an innovative approach combining tectonic and surface processes (erosion, transport, and sedimentation), coupled with accurate model monitoring techniques. This approach allows for a qualitative and quantitative analysis of landscape evolution in response to active deformation in the three end-member geological settings: compression, extension, and strike-slip. Experimental results outline first that experimental morphologies evolve significantly at a short time scale. Numerous morphologic markers form continuously, but their lifetime is generally short because erosion and sedimentation processes tend to destroy or bury them. For the compressional setting, the formation of terraces above an active thrust appears mainly controlled by narrowing and incision of the main channel through the uplifting hanging-wall and by avulsion of deposits on fan-like bodies. Terrace formation is irregular even under steady tectonic rates and erosional conditions. Terrace deformation analysis allows retrieving the growth history of the structure and the fault slip rate evolution. For the extensional setting, the dynamics of hanging-wall sedimentary filling appears to control the position of the base level, which in turn controls footwall erosion. Two phases of relief evolution can be evidenced: the first is a phase of relief growth, and the second is a phase of upstream propagation of topographic equilibrium that is reached first in the sedimentary basin. During the phase of relief growth, the formation of triangular facets occurs by degradation of the fault scarp, and their geometry (height) becomes stationary during the phase of upstream propagation of the topographic equilibrium. For the strike-slip setting, the complex morphology of the wrench zone

  19. Global combustion sources of organic aerosols: model comparison with 84 AMS factor-analysis data sets

    NASA Astrophysics Data System (ADS)

    Tsimpidi, Alexandra P.; Karydis, Vlassis A.; Pandis, Spyros N.; Lelieveld, Jos

    2016-07-01

    Emissions of organic compounds from biomass, biofuel, and fossil fuel combustion strongly influence the global atmospheric aerosol load. Some of the organics are directly released as primary organic aerosol (POA). Most are emitted in the gas phase and undergo chemical transformations (i.e., oxidation by hydroxyl radical) and form secondary organic aerosol (SOA). In this work we use the global chemistry climate model ECHAM/MESSy Atmospheric Chemistry (EMAC) with a computationally efficient module for the description of organic aerosol (OA) composition and evolution in the atmosphere (ORACLE). The tropospheric burden of open biomass and anthropogenic (fossil and biofuel) combustion particles is estimated to be 0.59 and 0.63 Tg, respectively, accounting for about 30 and 32 % of the total tropospheric OA load. About 30 % of the open biomass burning and 10 % of the anthropogenic combustion aerosols originate from direct particle emissions, whereas the rest is formed in the atmosphere. A comprehensive data set of aerosol mass spectrometer (AMS) measurements along with factor-analysis results from 84 field campaigns across the Northern Hemisphere are used to evaluate the model results. Both the AMS observations and the model results suggest that over urban areas both POA (25-40 %) and SOA (60-75 %) contribute substantially to the overall OA mass, whereas further downwind and in rural areas the POA concentrations decrease substantially and SOA dominates (80-85 %). EMAC does a reasonable job in reproducing POA and SOA levels during most of the year. However, it tends to underpredict POA and SOA concentrations during winter indicating that the model misses wintertime sources of OA (e.g., residential biofuel use) and SOA formation pathways (e.g., multiphase oxidation).

  20. Potential carbon sequestration of European arable soils estimated by modelling a comprehensive set of management practices.

    PubMed

    Lugato, Emanuele; Bampa, Francesca; Panagos, Panos; Montanarella, Luca; Jones, Arwyn

    2014-11-01

    Bottom-up estimates from long-term field experiments and modelling are the most commonly used approaches to estimate the carbon (C) sequestration potential of the agricultural sector. However, when data are required at European level, important margins of uncertainty still exist due to the representativeness of local data at large scale or different assumptions and information utilized for running models. In this context, a pan-European (EU + Serbia, Bosnia and Herzegovina, Montenegro, Albania, Former Yugoslav Republic of Macedonia and Norway) simulation platform with high spatial resolution and harmonized data sets was developed to provide consistent scenarios in support of possible carbon sequestration policies. Using the CENTURY agroecosystem model, six alternative management practices (AMP) scenarios were assessed as alternatives to the business as usual situation (BAU). These consisted of the conversion of arable land to grassland (and vice versa), straw incorporation, reduced tillage, straw incorporation combined with reduced tillage, ley cropping system and cover crops. The conversion into grassland showed the highest soil organic carbon (SOC) sequestration rates, ranging between 0.4 and 0.8 t C ha⁻¹ yr⁻¹, while the opposite extreme scenario (100% of grassland conversion into arable) gave cumulated losses of up to 2 Gt of C by 2100. Among the other practices, ley cropping systems and cover crops gave better performances than straw incorporation and reduced tillage. The allocation of 12 to 28% of the European arable land to different AMP combinations resulted in a potential SOC sequestration of 101-336 Mt CO2 eq. by 2020 and 549-2141 Mt CO2 eq. by 2100. Modelled carbon sequestration rates compared with values from an ad hoc meta-analysis confirmed the robustness of these estimates. PMID:24789378

  1. Greater volumes of static and dynamic stretching within a warm-up do not impair star excursion balance performance.

    PubMed

    Belkhiria-Turki, L; Chaouachi, A; Turki, O; Hammami, R; Chtara, M; Amri, M; Drinkwater, E J; Behm, D G

    2014-06-01

    Based on the conflicting static stretching (SS) literature and lack of dynamic stretching (DS) literature regarding the effects of differing volumes of stretching on balance, the present study investigated the effects of 4, 8, and 12 sets of SS and DS following a 5 min aerobic running warm-up on the star excursion balance test (SEBT). The objective was to examine an optimal stretch modality and volume to enhance dynamic balance. A randomized, within-subjects experimental design with repeated measures for stretching (SS and DS) versus no-stretching treatment was used to examine the acute effects of 10 (4 sets), 20 (8 sets), and 30 (12 sets) min of 15 s repetitions per muscle of SS and/or DS following a 5 min aerobic warm-up on the performance of the SEBT. Results indicated that a warm-up employing either SS or DS of any volume generally improves SEBT by a "small" amount, with effect sizes ranging from 0.06 to 0.50 (11 of 18 conditions >75% likely to exceed the 1.3-1.9% smallest worthwhile change). Secondly, the difference between static and dynamic stretching in this observed improvement with warm-up was "trivial" to "moderate" (d = 0.04 to 0.57) and generally "unclear" (only two of nine conditions >75% likely to exceed the smallest worthwhile change). Finally, the effect of increasing the volume of warm-up on the observed improvement with a warm-up is "trivial" to "small" (d < 0.40) and generally "unclear" (only three of 12 conditions >75% likely to exceed the smallest worthwhile change). In summary, an aerobic running warm-up with stretching that increases core and muscle temperature, whether it involves SS or DS, may be expected to provide small improvements in the SEBT. PMID:24739290

  2. Mono Lake Excursion in Cored Sediment from the Eastern Tyrrhenian Sea

    NASA Astrophysics Data System (ADS)

    Liddicoat, Joseph; Iorio, Marina; Sagnotti, Leonardo; Incoronato, Alberto

    2013-04-01

    A search for the Laschamp and Mono Lake excursions in cored marine and lacustrine sediment younger than 50,000 years resulted in the discovery of both excursions in the Greenland Sea (73.3˚ N, 351.0˚ E, Nowaczyk and Antanow, 1997), in the North Atlantic Ocean (62.7˚ N, 222.5˚ E, Channell, 2006), in Pyramid Lake in the Lahontan Basin, NV, USA (40.1˚ N, 240.2˚ E, Benson et al., 2008), and in the Black Sea (43.2˚ N, 36.5˚ E, Nowaczyk et al., 2012). The inclination, declination, and relative field intensity during the Mono Lake Excursion (MLE) in the Black Sea sediment closely matches the behaviour of the excursion in the Mono Basin, CA, in that a reduction in inclination during westerly declination is soon followed by steep positive inclination when declination is easterly, and relative field intensity increases after a low at the commencement of the excursion (Liddicoat and Coe, 1979). A large clockwise loop of Virtual Geomagnetic Poles (VGPs) at the Black Sea, when followed from old to young, closely follows the VGP loop formed by the older portion of the MLE in the Mono Basin (Liddicoat and Coe, 1979). We also searched for the MLE in cored sediment from the eastern Tyrrhenian Sea (40.1˚ N, 14.7˚ E) where the age of the sediment is believed to be about 32,000 years when comparing the susceptibility in the core with the susceptibility in a nearby one that is dated by palaeomagnetic secular variation records, Carbon-14, and numerous tephra layers in the Tyrrhenian Sea sediment (Iorio et al., 2011). In the Tyrrhenian Sea core, called C1067, closely spaced samples demagnetized in an alternating field to 100 mT record a shallowing of positive inclination to 48˚ that is followed by steep positive inclination of 82˚ when declination moves rapidly to the southeast. The old to young path of the VGPs in C1067 forms a narrow counter-clockwise loop that reaches 30.3˚ N, 30.8˚ E and that is centered at about 55˚ N, 15˚ E. Although descending to a latitude that is

  3. Timed colored Petri nets and fuzzy-set-based model for decision making

    NASA Astrophysics Data System (ADS)

    Scopel Simoes, Marcos A.; Barretto, Marcos R. P.

    2000-10-01

    This work proposes the use of timed colored Petri nets as a formal basis for a decision-making tool for applications in the planning and programming of industrial production processes. The timed colored Petri net is responsible for the transition of states in the decision process, establishing in time the use of resources and of heuristics that correspond to the most important managerial and operational actions for the planning and programming of the production processes of an industrial plant. To handle the uncertainties involved in the decision process, which in general depend on the knowledge of the specialist responsible for the routines of the production system, we make use of fuzzy set theory to suggest logically consistent decisions that reach a viable solution by following only the viable states of the decision tree, which in this case coincides with the occurrence graph of the Petri net. As an application example of the proposed model, we used a production system characterized by a port plant, whose model and simulation results are described at the end of this work.

  4. Large amplitude carbon isotope excursion during the Late Silurian Lau Event

    NASA Astrophysics Data System (ADS)

    Schoenmaker, N. R.; Reichart, G. J.; Nierop, K. G. J.; Mann, U.; White, T.; Sancay, R. H.

    2010-05-01

    High-magnitude excursions in the stable carbon isotope record reveal that the Silurian greenhouse world (443.7-416.0 Ma) represents a period of globally unstable environmental conditions. Fundamental changes in the global carbon cycle were more frequent and had a larger impact during the Silurian than in any other period of the Phanerozoic [1]. The late Silurian "Lau event" is the largest of four major positive δ13Ccarb excursions. The carbon isotope excursion associated with the "Lau event" is recognized globally and reaches values ranging from +6‰ in the Eastern Baltic, +8.5‰ on Gotland, +11‰ in southern Sweden and even up to +12‰ in Queensland, Australia. This makes the "Lau event" the strongest δ13C excursion of the entire Phanerozoic, comparable in amplitude to Precambrian events. However, the mechanism underlying the Silurian stable isotope excursions is poorly understood. Proposed scenarios include enhanced carbon burial due to anoxic conditions [2] and/or enhanced productivity [3]. Alternative hypotheses range from alternating wet and humid periods influencing global ocean circulation [4] and weathering of carbonates [5] to changes in the primary producer community [6]. Evaluating these different scenarios critically relies on establishing the true magnitude of the isotopic excursions and the rates of change. Existing stable carbon isotope studies of the Lau event were based on analyses of bulk carbonates or bulk organic matter. Both signal carriers are subject to admixing of organic matter or carbonates from various sources. Moreover, preferential preservation of some organic moieties, e.g. lipids, over others potentially offsets isotopic records, since the carbon isotopic signatures of these moieties differ substantially. A stable organic geochemical composition across the isotope events is thus crucial to ensure capturing the true amplitude of the excursion. Here we therefore investigate, using Curie point pyrolysis GC-MS, the composition of the

  5. MUTILS - a set of efficient modeling tools for multi-core CPUs implemented in MEX

    NASA Astrophysics Data System (ADS)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2013-04-01

    The need for computational performance is common in scientific applications, and in particular in numerical simulations, where high-resolution models require efficient processing of large amounts of data. Especially in the context of geological problems, the need to increase model resolution to resolve physical and geometrical complexities seems to have no limits. Alas, the performance of new generations of CPUs no longer improves simply through higher clock speeds. Current industrial trends are to increase the number of computational cores. As a result, parallel implementations are required in order to fully utilize the potential of new processors and to study more complex models. We target simulations on small to medium scale shared memory computers: laptops and desktop PCs with ~8 CPU cores and up to tens of GB of memory, to high-end servers with ~50 CPU cores and hundreds of GB of memory. In this setting, MATLAB is often the environment of choice for scientists who want to implement their own models with little effort. It is a useful general purpose mathematical software package, but due to its versatility some of its functionality is not as efficient as it could be. In particular, the challenges of modern multi-core architectures are not fully addressed. We have developed MILAMIN 2 - an efficient FEM modeling environment written in native MATLAB. Amongst others, MILAMIN provides functions to define model geometry, generate and convert structured and unstructured meshes (also through interfaces to external mesh generators), compute element and system matrices, apply boundary conditions, solve the system of linear equations, address non-linear and transient problems, and perform post-processing. MILAMIN strives to combine ease of code development with computational efficiency. Where possible, the code is optimized and/or parallelized within the MATLAB framework. Native MATLAB is augmented with the MUTILS library - a set of MEX functions that

  6. A Structured Microprogram Set for the SUMC Computer to Emulate the IBM System/360, Model 50

    NASA Technical Reports Server (NTRS)

    Gimenez, Cesar R.

    1975-01-01

    Similarities between regular and structured microprogramming were examined. An explanation of machine branching architecture (particularly in the SUMC computer), required for ease of structured microprogram implementation, is presented. Implementation of a structured microprogram set in the SUMC to emulate the IBM System/360 is described, and a comparison is made between the structured set and a nonstructured set previously written for the SUMC.

  7. Cytotoxicity evaluation of large cyanobacterial strain set using selected human and murine in vitro cell models.

    PubMed

    Hrouzek, Pavel; Kapuścik, Aleksandra; Vacek, Jan; Voráčová, Kateřina; Paichlová, Jindřiška; Kosina, Pavel; Voloshko, Ludmila; Ventura, Stefano; Kopecký, Jiří

    2016-02-01

    The production of cytotoxic molecules interfering with mammalian cells is extensively reported in cyanobacteria. These compounds may have a use in pharmacological applications; however, their potential toxicity needs to be considered. We performed cytotoxicity tests of crude cyanobacterial extracts in six cell models in order to address the frequency of cyanobacterial cytotoxicity to human cells and the level of specificity to a particular cell line. A set of more than 100 cyanobacterial crude extracts isolated from soil habitats (mainly genera Nostoc and Tolypothrix) was tested by MTT test for in vitro toxicity on the hepatic and non-hepatic human cell lines HepG2 and HeLa, and three cell systems of rodent origin: Yac-1, Sp-2 and Balb/c 3T3 fibroblasts. Furthermore, a subset of the extracts was assessed for cytotoxicity against primary cultures of human hepatocytes as a model for evaluating potential hepatotoxicity. Roughly one third of cyanobacterial extracts caused cytotoxic effects (i.e. viability<75%) on human cell lines. Despite the sensitivity differences, high correlation coefficients among the inhibition values were obtained for particular cell systems. This suggests a prevailing general cytotoxic effect of extracts and their constituents. The non-transformed immortalized fibroblasts (Balb/c 3T3) and hepatic cancer line HepG2 exhibited good correlations with primary cultures of human hepatocytes. The presence of cytotoxic fractions in strongly cytotoxic extracts was confirmed by an activity-guided HPLC fractionation, and it was demonstrated that cyanobacterial cytotoxicity is caused by a mixture of components with similar hydrophobic/hydrophilic properties. The data presented here could be used in further research into in vitro testing based on human models for the toxicological monitoring of complex cyanobacterial samples. PMID:26519817

  8. The model of palliative care in the perinatal setting: a review of the literature

    PubMed Central

    2012-01-01

    Background The notion of Palliative Care (PC) in neonatal and perinatal medicine has largely developed in recent decades. Our aim was to systematically review the literature on this topic, summarise the evolution of care and, based on the available data, suggest a current standard for this type of care. Methods Data sources included Medline, the Cochrane Library, CINAHL, and the bibliographies of the papers retrieved. Articles focusing on neonatal/perinatal hospices or PC were included. A qualitative analysis of the content was performed, and data on the lead author, country, year, type of article or design, and direct and indirect subjects were obtained. Results Among the 1558 articles retrieved, we did not find a single quantitative empirical study. To study the evolution of the model of care, we ultimately included 101 studies, most of which were from the USA. Fifty of these were comments/reflections, and only 30 were classifiable as clinical studies (half of these were case reports). The analysis revealed a gradual conceptual evolution of the model, which includes the notions of family-centered care, comprehensive care (including bereavement) and early and integrative care (also including the antenatal period). A subset of 27 articles that made special mention of antenatal aspects showed a similar distribution. In this subset, the results of the four descriptive clinical studies showed that, in the context of specific programmes, a significant number of couples (between 37 and 87%) opted for PC and to continue with the pregnancy when the foetus has been diagnosed with a lethal illness. Conclusions Despite the interest that PC has aroused in perinatal medicine, there are no evidence-based empirical studies to indicate the best model of care for this clinical setting. The very notion of PC has evolved to encompass perinatal PC, which includes, among other things, the idea of comprehensive care, and early and integrative care initiated antenatally. PMID:22409881

  9. Rock Magnetic Cyclostratigraphy of the Ediacaran Doushantuo Formation, South China: Determining the Duration of the Shuram Carbon Isotope Excursion

    NASA Astrophysics Data System (ADS)

    Gong, Z.; Kodama, K. P.; Li, Y. X.

    2015-12-01

    To determine the duration of the Shuram carbon-isotope excursion (SE), we conducted paleomagnetic, rock magnetic, and carbon isotopic studies of the Ediacaran Doushantuo Formation at the Dongdahe-Feidatian section near Chengjiang in South China. Zhu et al. (2007)1 indicate that the SE is 97.4 ± 9.5 m thick at this locality. The SE may record the oxidation of the ocean just before the Cambrian explosion. We collected unoriented samples for rock magnetic cyclostratigraphy at 10 cm intervals for 68 m of the Dongdahe section and 101 oriented cores at 2-3 m intervals for paleomagnetism. Comparing our carbon isotope measurements, made on chips from the cores, to Zhu et al.'s previous work shows that the Dongdahe section records 70% of the excursion. The paleomagnetic samples were alternating field and thermally demagnetized, but were totally remagnetized in the present day geomagnetic field (D=358˚, I=38˚). Multi-taper method spectral analysis of the mass-normalized susceptibility of the 600 unoriented samples revealed six strong spectral peaks that rose above the 95% confidence limits of the robust red noise. The stratigraphic thickness of these cycles is 410, 89.3, 32.5, 27.6, 22.1 and 20.9 cm. A smaller peak with a wavelength of 110 cm was also observed. Based on the ratios of these wavelengths we interpret them to be astronomically forced. If the 410 cm peak is set to long eccentricity (405 kyr), then the other peaks yield near-Milankovitch periods of short eccentricity (109 and 88 kyr), obliquity (32 and 27 kyr), and precession (22 and 21 kyr). A strong peak with a wavelength of 1710 cm was also observed, but is not interpreted to be orbitally forced. The sediment accumulation rate for the Dongdahe section is 1 cm/kyr, making the duration of the SE in South China 9.74 ± 0.95 Myr, in excellent agreement with estimates from Australia and California, thus supporting a primary origin for the SE and possibly its role as a cause of the Cambrian explosion. 1 PPP 254, 7-61.
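
    The duration estimate quoted above follows from simple arithmetic once the 410 cm cycle is anchored to the 405 kyr long-eccentricity period; the short sketch below reproduces that calculation from the thickness and uncertainty quoted in the abstract.

      # Sketch of the duration arithmetic: tune the 410 cm cycle to the 405 kyr
      # long-eccentricity period, then convert the excursion thickness to time.
      tuned_wavelength_cm = 410.0
      long_eccentricity_kyr = 405.0
      sed_rate_tuned = tuned_wavelength_cm / long_eccentricity_kyr   # ~1.01 cm/kyr
      sed_rate_adopted = 1.0                                         # rounded value used in the abstract

      thickness_m, thickness_err_m = 97.4, 9.5          # SE thickness at this locality (Zhu et al. 2007)
      duration_myr = thickness_m * 100.0 / sed_rate_adopted / 1000.0        # cm -> kyr -> Myr
      duration_err_myr = thickness_err_m * 100.0 / sed_rate_adopted / 1000.0

      print(f"astronomically tuned rate : {sed_rate_tuned:.2f} cm/kyr")
      print(f"SE duration (at 1 cm/kyr) : {duration_myr:.2f} +/- {duration_err_myr:.2f} Myr")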

  10. Modeling Primary Breakup: A Three-Dimensional Eulerian Level Set/Vortex Sheet Method for Two-Phase Interface Dynamics

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    This paper is divided into four parts. First, the level set/vortex sheet method for three-dimensional two-phase interface dynamics is presented. Second, the LSS model for the primary breakup of turbulent liquid jets and sheets is outlined and all terms requiring subgrid modeling are identified. Then, preliminary three-dimensional results of the level set/vortex sheet method are presented and discussed. Finally, conclusions are drawn and an outlook to future work is given.
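
    As a generic illustration of the level set ingredient named above (not the paper's coupled level set/vortex sheet scheme), the sketch below advects a circular interface, represented as the zero contour of a signed distance function, through a prescribed uniform velocity field with a first-order upwind update; the grid, velocity, and interface are arbitrary choices.

      # Hedged sketch: advect a 2-D level set function phi (zero contour = interface)
      # in a prescribed uniform velocity field using first-order upwind differencing.
      # This is a generic illustration, not the paper's level set/vortex sheet method.
      import numpy as np

      N, L = 128, 1.0
      dx = L / N
      x = np.linspace(0.0, L, N, endpoint=False)
      X, Y = np.meshgrid(x, x, indexing="ij")

      phi = np.sqrt((X - 0.3) ** 2 + (Y - 0.5) ** 2) - 0.15   # signed distance to a circle
      u, v = 0.5, 0.2                                         # uniform velocity field
      dt = 0.4 * dx / max(abs(u), abs(v))                     # CFL-limited time step

      def upwind_gradient(f, vel, axis):
          fwd = (np.roll(f, -1, axis=axis) - f) / dx          # forward difference
          bwd = (f - np.roll(f, 1, axis=axis)) / dx           # backward difference
          return bwd if vel > 0 else fwd                      # upwind selection

      for _ in range(100):
          phi -= dt * (u * upwind_gradient(phi, u, 0) + v * upwind_gradient(phi, v, 1))

      # Track the interface as the band of near-zero phi and report its centroid.
      band = np.abs(phi) < dx
      print(f"interface centroid after advection: x ~ {X[band].mean():.2f}, y ~ {Y[band].mean():.2f}")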

  11. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, as well as operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, Foyle, 2013 for a description of the guidelines that were generated from the model's results; Gore, Hooey, Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate procedural sets illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top, highest level and decomposes it to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  12. A musculoskeletal model of human locomotion driven by a low dimensional set of impulsive excitation primitives.

    PubMed

    Sartori, Massimo; Gizzi, Leonardo; Lloyd, David G; Farina, Dario

    2013-01-01

    Human locomotion has been described as being generated by an impulsive (burst-like) excitation of groups of musculotendon units, with timing dependent on the biomechanical goal of the task. Despite this view being supported by many experimental observations on specific locomotion tasks, it is still unknown if the same impulsive controller (i.e., a low-dimensional set of time-delayed excitation primitives) can be used as input drive for large musculoskeletal models across different human locomotion tasks. For this purpose, we extracted, with non-negative matrix factorization, five non-negative factors from a large sample of muscle electromyograms in two healthy subjects during four motor tasks. These included walking, running, sidestepping, and crossover cutting maneuvers. The extracted non-negative factors were then averaged and parameterized to obtain task-generic Gaussian-shaped impulsive excitation curves or primitives. These were used to drive a subject-specific musculoskeletal model of the human lower extremity. Results showed that the same set of five impulsive excitation primitives could be used to predict the dynamics of 34 musculotendon units and the resulting hip, knee and ankle joint moments (i.e., NRMSE = 0.18 ± 0.08, and R2 = 0.73 ± 0.22 across all tasks and subjects) without substantial loss of accuracy with respect to using experimental electromyograms (i.e., NRMSE = 0.16 ± 0.07, and R2 = 0.78 ± 0.18 across all tasks and subjects). Results support the hypothesis that biomechanically different motor tasks might share similar neuromuscular control strategies. This might have implications in neurorehabilitation technologies such as human-machine interfaces for the torque-driven, proportional control of powered prostheses and orthoses. In this, device control commands (i.e., predicted joint torque) could be derived without direct experimental data but relying on simple parameterized Gaussian-shaped curves, thus decreasing the input drive
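
    A minimal sketch of the synergy-extraction step (non-negative matrix factorization of muscle excitation envelopes into a small set of time-varying primitives, followed by a Gaussian parameterization) is given below. It uses synthetic envelopes and scikit-learn's NMF; the musculoskeletal-model driving step is omitted, and nothing here reproduces the experimental EMG data of the study.

      # Hedged sketch: extract a low-dimensional set of excitation primitives from
      # synthetic muscle-excitation envelopes with non-negative matrix factorization,
      # then parameterize each primitive with a Gaussian (moment-based fit).
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 1.0, 200)                 # one normalized movement cycle

      def gaussian(center, width):
          return np.exp(-0.5 * ((t - center) / width) ** 2)

      # Build synthetic envelopes for 12 "muscles" as mixtures of 5 hidden bursts.
      true_primitives = np.stack([gaussian(c, 0.06) for c in (0.05, 0.25, 0.5, 0.7, 0.9)])
      mixing = rng.uniform(0.0, 1.0, size=(5, 12))
      envelopes = true_primitives.T @ mixing + 0.02 * rng.random((200, 12))   # (time, muscles)

      nmf = NMF(n_components=5, init="nndsvda", max_iter=1000, random_state=0)
      primitives = nmf.fit_transform(envelopes)       # (time, 5) time-varying primitives
      weights = nmf.components_                       # (5, muscles) per-muscle weightings

      # Schematic Gaussian parameterization of each extracted primitive.
      for k in range(primitives.shape[1]):
          p = primitives[:, k] / primitives[:, k].sum()
          center = (t * p).sum()
          width = np.sqrt(((t - center) ** 2 * p).sum())
          print(f"primitive {k + 1}: peak near {center:.2f} of the cycle, width ~ {width:.2f}")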

  13. Galaxy Evolution Insights from Spectral Modeling of Large Data Sets from the Sloan Digital Sky Survey

    SciTech Connect

    Hoversten, Erik A.

    2007-10-01

    This thesis centers on the use of spectral modeling techniques on data from the Sloan Digital Sky Survey (SDSS) to gain new insights into current questions in galaxy evolution. The SDSS provides a large, uniform, high quality data set which can be exploited in a number of ways. One avenue pursued here is to use the large sample size to measure precisely the mean properties of galaxies of increasingly narrow parameter ranges. The other route taken is to look for rare objects which open up for exploration new areas in galaxy parameter space. The crux of this thesis is revisiting the classical Kennicutt method for inferring the stellar initial mass function (IMF) from the integrated light properties of galaxies. A large data set (~ 10^5 galaxies) from the SDSS DR4 is combined with more in-depth modeling and quantitative statistical analysis to search for systematic IMF variations as a function of galaxy luminosity. Galaxy Hα equivalent widths are compared to a broadband color index to constrain the IMF. It is found that for the sample as a whole the best fitting IMF power law slope above 0.5 M⊙ is Γ = 1.5 ± 0.1 with the error dominated by systematics. Galaxies brighter than around Mr,0.1 = -20 (including galaxies like the Milky Way which has Mr,0.1 ~ -21) are well fit by a universal Γ ~ 1.4 IMF, similar to the classical Salpeter slope, and smooth, exponential star formation histories (SFH). Fainter galaxies prefer steeper IMFs, and the quality of the fits reveals that for these galaxies a universal IMF with smooth SFHs is actually a poor assumption. Related projects are also pursued. A targeted photometric search is conducted for strongly lensed Lyman break galaxies (LBG) similar to MS1512-cB58. The evolution of the photometric selection technique is described as are the results of spectroscopic follow-up of the best targets. The serendipitous discovery of two interesting blue compact dwarf galaxies is reported. These

  14. A musculoskeletal model of human locomotion driven by a low dimensional set of impulsive excitation primitives

    PubMed Central

    Sartori, Massimo; Gizzi, Leonardo; Lloyd, David G.; Farina, Dario

    2013-01-01

    Human locomotion has been described as being generated by an impulsive (burst-like) excitation of groups of musculotendon units, with timing dependent on the biomechanical goal of the task. Despite this view being supported by many experimental observations on specific locomotion tasks, it is still unknown if the same impulsive controller (i.e., a low-dimensional set of time-delayed excitation primitives) can be used as input drive for large musculoskeletal models across different human locomotion tasks. For this purpose, we extracted, with non-negative matrix factorization, five non-negative factors from a large sample of muscle electromyograms in two healthy subjects during four motor tasks. These included walking, running, sidestepping, and crossover cutting maneuvers. The extracted non-negative factors were then averaged and parameterized to obtain task-generic Gaussian-shaped impulsive excitation curves or primitives. These were used to drive a subject-specific musculoskeletal model of the human lower extremity. Results showed that the same set of five impulsive excitation primitives could be used to predict the dynamics of 34 musculotendon units and the resulting hip, knee and ankle joint moments (i.e., NRMSE = 0.18 ± 0.08, and R2 = 0.73 ± 0.22 across all tasks and subjects) without substantial loss of accuracy with respect to using experimental electromyograms (i.e., NRMSE = 0.16 ± 0.07, and R2 = 0.78 ± 0.18 across all tasks and subjects). Results support the hypothesis that biomechanically different motor tasks might share similar neuromuscular control strategies. This might have implications in neurorehabilitation technologies such as human-machine interfaces for the torque-driven, proportional control of powered prostheses and orthoses. In this, device control commands (i.e., predicted joint torque) could be derived without direct experimental data but relying on simple parameterized Gaussian-shaped curves, thus decreasing the input drive complexity

  15. Hierarchical simulation of aquifer heterogeneity: implications of different simulation settings on solute-transport modeling

    NASA Astrophysics Data System (ADS)

    Comunian, Alessandro; De Micheli, Leonardo; Lazzati, Claudio; Felletti, Fabrizio; Giacobbo, Francesca; Giudici, Mauro; Bersezio, Riccardo

    2016-03-01

    The fine-scale heterogeneity of porous media affects the large-scale transport of solutes and contaminants in groundwater and it can be reproduced by means of several geostatistical simulation tools. However, including the available geological information in these tools is often cumbersome. A hierarchical simulation procedure based on a binary tree is proposed and tested on two real-world blocks of alluvial sediments, of a few cubic meters volume, that represent small-scale aquifer analogs. The procedure is implemented using the sequential indicator simulation, but it is so general that it can be adapted to various geostatistical simulation tools, improving their capability to incorporate geological information, i.e., the sedimentological and architectural characterization of heterogeneity. When compared with a standard sequential indicator approach on bi-dimensional simulations, in terms of proportions and connectivity indicators, the proposed procedure yields reliable results, closer to the reference observations. Different ensembles of three-dimensional simulations based on different hierarchical sequences are used to perform numerical experiments of conservative solute transport and to obtain ensembles of equivalent pore velocity and dispersion coefficient at the scale length of the blocks (meter). Their statistics are used to estimate the impact of the variability of the transport properties of the simulated blocks on contaminant transport modeled on bigger domains (hectometer). This is investigated with a one-dimensional transport modeling based on the Kolmogorov-Dmitriev theory of branching stochastic processes. Applying the proposed approach with diverse binary trees and different simulation settings provides a great flexibility, which is revealed by the differences in the breakthrough curves.
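
    The binary-tree idea can be sketched with a toy recursion in which each internal node splits the still-unassigned cells with a binary indicator field and each leaf assigns a facies code. The smoothed-and-thresholded Gaussian field below is only a crude stand-in for sequential indicator simulation, and the tree, proportions, and correlation ranges are hypothetical.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(1)

      def binary_indicator_field(shape, proportion, corr_range):
          """Toy stand-in for a sequential indicator simulation: threshold a
          smoothed Gaussian field so that roughly `proportion` of cells are coded 1."""
          field = gaussian_filter(rng.standard_normal(shape), sigma=corr_range)
          threshold = np.quantile(field, 1.0 - proportion)
          return field >= threshold

      def simulate_tree(node, shape, mask=None):
          """Recursively populate a facies grid following a binary tree.
          Leaves are facies codes; internal nodes are (proportion, range, left, right)."""
          if mask is None:
              mask = np.ones(shape, dtype=bool)
          if not isinstance(node, tuple):              # leaf: assign facies code
              out = np.zeros(shape, dtype=int)
              out[mask] = node
              return out
          proportion, corr_range, left, right = node
          split = binary_indicator_field(shape, proportion, corr_range)
          return (simulate_tree(left, shape, mask & split)
                  + simulate_tree(right, shape, mask & ~split))

      # Hypothetical hierarchy: first separate gravel-dominated cells from sand/mud,
      # then split the second branch into sand and mud.
      tree = (0.3, 6.0, 1, (0.6, 3.0, 2, 3))          # facies codes 1=gravel, 2=sand, 3=mud
      facies = simulate_tree(tree, shape=(128, 128))
      print({code: round(float((facies == code).mean()), 2) for code in (1, 2, 3)})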

  16. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification.

    PubMed

    Zhao, Jing; Lin, Lo-Yi; Lin, Chih-Min

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also conducted to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and point out that it can assist in supporting correct medical diagnoses associated with multiple categories. PMID:27298619
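
    The classification-by-similarity step can be illustrated with a small sketch that scores a patient's intuitionistic fuzzy feature vector against diagnosis prototypes and picks the closest one. The normalized-Hamming-distance similarity used here is one common IFS measure from the literature, not necessarily the one used by the authors, and all membership values are hypothetical.

      import numpy as np

      # Each feature is an intuitionistic fuzzy value (membership mu, non-membership nu),
      # with hesitancy pi = 1 - mu - nu. All values below are hypothetical.
      diagnoses = {
          "viral fever":  np.array([[0.4, 0.0], [0.3, 0.5], [0.1, 0.7], [0.4, 0.3]]),
          "malaria":      np.array([[0.7, 0.0], [0.2, 0.6], [0.0, 0.9], [0.7, 0.0]]),
          "typhoid":      np.array([[0.3, 0.3], [0.6, 0.1], [0.2, 0.7], [0.2, 0.6]]),
      }

      def ifs_similarity(a, b):
          """Similarity based on the normalized Hamming distance between IFSs,
          including the hesitancy term (one of several measures in the literature)."""
          pi_a = 1.0 - a.sum(axis=1)
          pi_b = 1.0 - b.sum(axis=1)
          d = (np.abs(a - b).sum() + np.abs(pi_a - pi_b).sum()) / (2.0 * len(a))
          return 1.0 - d

      patient = np.array([[0.8, 0.1], [0.6, 0.1], [0.2, 0.8], [0.6, 0.1]])  # hypothetical case
      scores = {name: ifs_similarity(patient, proto) for name, proto in diagnoses.items()}
      print(max(scores, key=scores.get), scores)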

  17. An environmental data set for vector-borne disease modeling and epidemiology.

    PubMed

    Chabot-Couture, Guillaume; Nigmatulina, Karima; Eckhoff, Philip

    2014-01-01

    Understanding the environmental conditions of disease transmission is important in the study of vector-borne diseases. Low- and middle-income countries bear a significant portion of the disease burden; but data about weather conditions in those countries can be sparse and difficult to reconstruct. Here, we describe methods to assemble high-resolution gridded time series data sets of air temperature, relative humidity, land temperature, and rainfall for such areas; and we test these methods on the island of Madagascar. Air temperature and relative humidity were constructed using statistical interpolation of weather station measurements; the resulting median 95th percentile absolute errors were 2.75°C and 16.6%. Missing pixels from the MODIS11 remote sensing land temperature product were estimated using Fourier decomposition and time-series analysis; thus providing an alternative to the 8-day and 30-day aggregated products. The RFE 2.0 remote sensing rainfall estimator was characterized by comparing it with multiple interpolated rainfall products, and we observed significant differences in temporal and spatial heterogeneity relevant to vector-borne disease modeling. PMID:24755954
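
    The Fourier-based gap filling of cloud-masked land temperature pixels can be sketched as a harmonic regression fitted to the valid observations of a single pixel and evaluated on the missing days; the synthetic series, the number of harmonics, and the plain least-squares fit are illustrative assumptions rather than the authors' exact procedure.

      import numpy as np

      def harmonic_fill(days, values, n_harmonics=3, period=365.25):
          """Fit an annual harmonic (Fourier) model to the valid observations of one
          pixel's temperature time series and fill the missing days from the fit."""
          valid = ~np.isnan(values)
          t = days[valid]
          cols = [np.ones_like(t)]
          for k in range(1, n_harmonics + 1):
              cols += [np.cos(2 * np.pi * k * t / period), np.sin(2 * np.pi * k * t / period)]
          coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), values[valid], rcond=None)

          cols_all = [np.ones_like(days)]
          for k in range(1, n_harmonics + 1):
              cols_all += [np.cos(2 * np.pi * k * days / period), np.sin(2 * np.pi * k * days / period)]
          fitted = np.column_stack(cols_all) @ coeffs
          return np.where(np.isnan(values), fitted, values)

      # Synthetic daily land temperature with ~30% of days masked (e.g., by clouds).
      rng = np.random.default_rng(2)
      days = np.arange(730, dtype=float)
      truth = 25 + 8 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 1.0, days.size)
      obs = truth.copy()
      obs[rng.random(days.size) < 0.3] = np.nan
      filled = harmonic_fill(days, obs)
      print("RMSE of the gap-filled series:", round(float(np.sqrt(np.mean((filled - truth) ** 2))), 2))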

  18. Blood transcriptomic markers for major depression: from animal models to clinical settings.

    PubMed

    Redei, Eva E; Mehta, Neha S

    2015-05-01

    Depression is a heterogeneous disorder and, similar to other spectrum disorders, its manifestation varies by age of onset, severity, comorbidity, treatment responsiveness, and other factors. A laboratory blood test based on specific biomarkers for major depressive disorder (MDD) and its subgroups could increase diagnostic accuracy and expedite the initiation of treatment. We identified candidate blood biomarkers by examining genome-wide expression differences in the blood of animal models representing both the genetic and environmental/stress etiologies of depression. Human orthologs of the resulting transcript panel were tested in pilot studies. Transcript abundance of 11 blood markers differentiated adolescent subjects with early-onset MDD from adolescents with no disorder (ND). A set of partly overlapping transcripts distinguished adolescent patients who had comorbid anxiety disorders from those with only MDD. In adults, blood levels of nine transcripts discerned subjects with MDD from ND controls. Even though cognitive behavioral therapy (CBT) resulted in remission of some patients, the levels of three transcripts consistently signaled prior MDD status. A coexpression network of transcripts seems to predict responsiveness to CBT. Thus, our approach can be developed into clinically valid diagnostic panels of blood transcripts for different manifestations of MDD, potentially reducing diagnostic heterogeneity and advancing individualized treatment strategies. PMID:25823952

  19. An Environmental Data Set for Vector-Borne Disease Modeling and Epidemiology

    PubMed Central

    Chabot-Couture, Guillaume; Nigmatulina, Karima; Eckhoff, Philip

    2014-01-01

    Understanding the environmental conditions of disease transmission is important in the study of vector-borne diseases. Low- and middle-income countries bear a significant portion of the disease burden; but data about weather conditions in those countries can be sparse and difficult to reconstruct. Here, we describe methods to assemble high-resolution gridded time series data sets of air temperature, relative humidity, land temperature, and rainfall for such areas; and we test these methods on the island of Madagascar. Air temperature and relative humidity were constructed using statistical interpolation of weather station measurements; the resulting median 95th percentile absolute errors were 2.75°C and 16.6%. Missing pixels from the MODIS11 remote sensing land temperature product were estimated using Fourier decomposition and time-series analysis; thus providing an alternative to the 8-day and 30-day aggregated products. The RFE 2.0 remote sensing rainfall estimator was characterized by comparing it with multiple interpolated rainfall products, and we observed significant differences in temporal and spatial heterogeneity relevant to vector-borne disease modeling. PMID:24755954

  20. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification

    PubMed Central

    Zhao, Jing; Lin, Lo-Yi

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also conducted to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and point out that it can assist in supporting correct medical diagnoses associated with multiple categories. PMID:27298619

  1. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    SciTech Connect

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
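
    The volume overlap and surface distance metrics quoted above (VD, DSC, ASSD, RMSSSD, MSSD) can be computed for any pair of binary masks; the sketch below does so with SciPy distance transforms on two toy spheres standing in for reference and automatic tooth segmentations, with an assumed voxel spacing.

      import numpy as np
      from scipy.ndimage import distance_transform_edt, binary_erosion

      def surface_voxels(mask):
          """Boundary voxels of a binary mask."""
          return mask & ~binary_erosion(mask)

      def segmentation_metrics(pred, ref, spacing=(0.25, 0.25, 0.25)):
          """Volume difference (mm^3), Dice (%), and average / RMS / maximum
          symmetric surface distances (mm) between two binary masks."""
          voxel_vol = float(np.prod(spacing))
          vd = abs(int(pred.sum()) - int(ref.sum())) * voxel_vol
          dsc = 200.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

          # Distances from every surface voxel of one mask to the other mask's surface.
          d_to_ref = distance_transform_edt(~surface_voxels(ref), sampling=spacing)
          d_to_pred = distance_transform_edt(~surface_voxels(pred), sampling=spacing)
          all_d = np.concatenate([d_to_ref[surface_voxels(pred)], d_to_pred[surface_voxels(ref)]])
          return {"VD": vd, "DSC": dsc, "ASSD": all_d.mean(),
                  "RMSSSD": np.sqrt((all_d ** 2).mean()), "MSSD": all_d.max()}

      # Toy example: a sphere and a slightly shifted sphere.
      z, y, x = np.ogrid[:48, :48, :48]
      ref = (z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2 <= 12 ** 2
      pred = (z - 24) ** 2 + (y - 25) ** 2 + (x - 25) ** 2 <= 12 ** 2
      print(segmentation_metrics(pred, ref))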

  2. Comparison of Atmospheric Water Vapor in Observational and Model Data Sets

    SciTech Connect

    Boyle, J.S.

    2000-03-01

    The global water vapor distributions for five observationally based data sets and three GCM integrations are compared. The variables considered are the mean and standard deviation values of the precipitable water for the entire atmospheric column and the 500 to 300 hPa layer for January and July. The observationally based sets are the radiosonde data of Ross and Elliott, the ERA and NCEP reanalyses, and the NVAP blend of sonde and satellite data. The three GCM simulations all use the NCAR CCM3 as the atmospheric model. They include: an AMIP-type simulation using observed SSTs for the period 1979 to 1993, the NCAR CSM 300 year coupled ocean-atmosphere integration, and a CSM integration with a 1% CO2 increase per year. The observational data exhibit some serious inconsistencies. There are geographical patterns of differences related to interannual variations and national instrument biases. It is clear that the proper characterization of water vapor is somewhat uncertain. Some conclusions about these data appear to be robust even given the discrepancies. The ERA data are too dry especially in the upper levels. The observational data evince much better agreement in the data rich Northern Hemisphere compared to the Southern. Distinct biases are quite pronounced over the Southern Ocean. The mean values and particularly the standard deviations of the three reanalyses are very dependent upon the GCM used as the assimilation vehicle for the analyses. This is made clear by the much enhanced tropical variability in the NCEP/DOE/AMIP reanalyses compared to the initial NCEP/NCAR Reanalysis. The NCAR CCM3 shows consistent evidence of a dry bias. The 1% CO2 experiment shows a very similar pattern of disagreement with the sonde data as the other integrations, once account is taken of the warming trend. No new modes of difference are evident in the 1% CO2 experiment. All the CCM3 runs indicated too much tropical variability especially in the western tropical Pacific and Southeast Asia

  3. Improvements Needed in the 40Ar/39Ar Study of Geomagnetic Excursion Chronology

    NASA Astrophysics Data System (ADS)

    Champion, D. E.; Turrin, B. D.

    2015-12-01

    Our knowledge of the existence and frequency of brief geomagnetic polarity excursions only increases with time. Precise and accurate 40Ar/39Ar ages will be required to document this, because 25 or more excursions may have occurred within the Brunhes Epoch (780 ky) separated in time by as little as 10 ky. Excursions are and will dominantly be discovered in mafic, low K2O rocks. Improvements in the analytical protocol to 40Ar/39Ar date low K2O, "young", and thus low 40Ar_rad rocks are required. While conventional K/Ar dating "worked", the assumption of perfect atmospheric equilibration is flawed. In particular, using a measured isochron intercept (±2σ) to embrace an atmospheric intercept assumption turns a 40Ar/39Ar diffusive extraction into a series of "K/Ar-lite" experiments. The near ubiquitous excess 40Ar exhibited in final steps of "matrix" or "groundmass" fractions from whole-rock experiments (no glass, crystals) suggests equilibration with the atmosphere is not achieved. Removing magnetic sample splits (glass?) thought subject to poor argon retention, and crystals subject to 40Ar inheritance, is routinely done without documenting different isochrons. Short 15 to 20 minute irradiation times effectively eliminate recoil and dramatically minimize isotopic corrections, and the assumption of equivalence in Ar isotope recoil behavior. Assuming no pressure dependency and constancy of mass discrimination value ignores knowledge from other gas mass spectroscopy (O, H, He, Ne). Dynamic mass spectroscopy in stable isotopic analysis allows routine per mil and 0.1 per mil ratios to be measured. Maintaining more than daily bracketing air pipette measurements at differing pressures, and controlling the range of pressures from each diffusive step, will approximate this dynamic precision. Experiments will be discussed that exhibit aspects of 40Ar/39Ar dating protocols with which precision and accuracy can be improved.
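
    The isochron-intercept check discussed above can be illustrated with a minimal regression of step-heating ratios: the intercept of 40Ar/36Ar versus 39Ar/36Ar estimates the trapped component, which is then compared with the atmospheric value. The step data below are invented, and a real analysis would use an error-weighted (York-type) fit rather than ordinary least squares.

      import numpy as np

      # Hypothetical step-heating results for a low-K2O groundmass split:
      # columns are 39Ar/36Ar and 40Ar/36Ar for each extraction step.
      x = np.array([12.0, 25.0, 40.0, 55.0, 70.0, 90.0])        # 39Ar/36Ar
      y = np.array([300.1, 302.2, 304.9, 307.0, 309.3, 312.5])  # 40Ar/36Ar

      # Ordinary least-squares isochron; slope ~ radiogenic 40Ar*/39Ar, intercept ~ trapped 40Ar/36Ar.
      slope, intercept = np.polyfit(x, y, 1)

      ATM_40_36 = 298.56  # modern atmospheric 40Ar/36Ar (Lee et al., 2006)
      print(f"40Ar*/39Ar (slope)     : {slope:.4f}")
      print(f"trapped 40Ar/36Ar      : {intercept:.1f}")
      print(f"excess over atmosphere : {intercept - ATM_40_36:+.1f}")
      # A trapped intercept resolvably above ~298.6 indicates incomplete atmospheric
      # equilibration (excess 40Ar), which biases ages if an air intercept is assumed.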

  4. The Reliability of an Instrumented Device for Measuring Components of the Star Excursion Balance Test

    PubMed Central

    Gorman, Paul P.; Butler, Robert J.; Kiesel, Kyle B.; Underwood, Frank B.; Elkins, Bryant

    2009-01-01

    Background The Star Excursion Balance Test (SEBT) is a dynamic test that requires strength, flexibility, and proprioception and has been used to assess physical performance, identify chronic ankle instability, and identify athletes at greater risk for lower extremity injury. In order to improve the repeatability in measuring components of the SEBT, the Y Balance Test™ has been developed. Objective The purpose of this paper is to report the development and reliability of the Y Balance Test™. Methods Single limb stance excursion distances were measured using the Y Balance Test™ on a sample of 15 male collegiate soccer players. Intraclass Correlation Coefficients (ICC) were used to determine the reliability of the test. Results The ICC for intrarater reliability ranged from 0.85 to 0.91 and for interrater reliability ranged from 0.99 to 1.00. Composite reach score reliability was 0.91 for intrarater and 0.99 for interrater reliability. Discussion This study demonstrated that the Y Balance Test™ has good to excellent intrarater and interrater reliability. The device and protocol attempted to address the common sources of error and method variation in the SEBT including whether touch down is allowed with the reach foot, where the stance foot is aligned, movement allowed of the stance foot, instantaneous measurement of furthest reach distance, standard reach height from the ground, standard testing order, and well defined pass/fail criteria. Conclusion The Y Balance Test™ is a reliable test for measuring single limb stance excursion distances while performing dynamic balance testing in collegiate soccer players. PMID:21509114
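
    The intraclass correlation coefficients reported above can be reproduced for any subjects-by-raters score matrix from the two-way ANOVA mean squares; the sketch below implements the Shrout and Fleiss ICC(2,1) and ICC(3,1) forms on hypothetical reach-distance data (the numbers are invented, not the study's measurements).

      import numpy as np

      def icc(data, model="ICC(2,1)"):
          """Intraclass correlation from a subjects x raters matrix of scores,
          using the two-way ANOVA mean squares (Shrout & Fleiss conventions)."""
          data = np.asarray(data, dtype=float)
          n, k = data.shape
          grand = data.mean()
          ss_total = ((data - grand) ** 2).sum()
          ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
          ss_raters = n * ((data.mean(axis=0) - grand) ** 2).sum()
          ss_error = ss_total - ss_subjects - ss_raters

          msr = ss_subjects / (n - 1)            # between-subjects mean square
          msc = ss_raters / (k - 1)              # between-raters mean square
          mse = ss_error / ((n - 1) * (k - 1))   # residual mean square

          if model == "ICC(2,1)":                # two-way random, absolute agreement
              return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
          if model == "ICC(3,1)":                # two-way mixed, consistency
              return (msr - mse) / (msr + (k - 1) * mse)
          raise ValueError(model)

      # Hypothetical anterior-reach distances (cm) for 8 athletes scored by 2 raters.
      reach = np.array([[62.1, 61.8], [70.4, 70.9], [65.3, 64.8], [58.7, 59.2],
                        [73.0, 72.5], [66.8, 67.1], [60.2, 60.0], [69.5, 69.9]])
      print("interrater ICC(2,1):", round(icc(reach), 3))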

  5. Relative fascicle excursion effects on dynamic strength generation during gait in children with cerebral palsy.

    PubMed

    Martín Lorenzo, T; Lerma Lara, S; Martínez-Caballero, I; Rocon, E

    2015-10-01

    Evaluation of muscle structure gives us a better understanding of how muscles contribute to force generation which is significantly altered in children with cerebral palsy (CP). While most muscle structure parameters have shown to be significantly correlated to different expressions of strength development in children with CP and typically developing (TD) children, conflicting results are found for muscle fascicle length. Muscle fascicle length determines muscle excursion and velocity, and contrary to what might be expected, correlations of fascicle length to rate of force development have not been found for children with CP. The lack of correlation between muscle fascicle length and rate of force development in children with CP could be due, on the one hand, to the non-optimal joint position adopted for force generation on the isometric strength tests as compared to the position of TD children. On the other hand, the lack of correlation could be due to the erroneous assumption that muscle fascicle length is representative of sarcomere length. Thus, the relationship between muscle architecture parameters reflecting sarcomere length, such as relative fascicle excursions and dynamic power generation, should be assessed. Understanding of the underlying mechanisms of weakness in children with CP is key for individualized prescription and assessment of muscle-targeted interventions. Findings could imply the detection of children operating on the descending limb of the sarcomere length-tension curve, which in turn might be at greater risk of developing crouch gait. Furthermore, relative muscle fascicle excursions could be used as a predictive variable of outcomes related to crouch gait prevention treatments such as strength training. PMID:26138625

  6. Application of a neptune propulsion concept to a manned mars excursion. Master's thesis

    SciTech Connect

    Finley, C.J.

    1993-04-01

    NEPTUNE is a multimegawatt electric propulsion system. It uses a proven compact nuclear thermal rocket, NERVA, in a closed cycle with a magnetohydrodynamic (MHD) generator to power a magnetoplasmadynamic (MPD) thruster. This thesis defines constraints on an externally sourced propulsion system intended to carry out a manned Martian excursion. It assesses NEPTUNE's ability to conform to these constraints. Because an unmodified NEPTUNE system is too large, the thesis develops modifications to the system which reduce its size. The result is a far less proven, but more useful derivative of the unmodified NEPTUNE system.

  7. Application of a NEPTUNE propulsion concept to a manned Mars excursion

    NASA Astrophysics Data System (ADS)

    Finley, Charles J.

    1993-04-01

    NEPTUNE is a multimegawatt electric propulsion system. It uses a proven compact nuclear thermal rocket, NERVA, in a closed cycle with a magnetohydrodynamic (MHD) generator to power a magnetoplasmadynamic (MPD) thruster. This thesis defines constraints on an externally sourced propulsion system intended to carry out a manned Martian excursion. It assesses NEPTUNE's ability to conform to these constraints. Because an unmodified NEPTUNE system is too large, the thesis develops modifications to the system which reduce its size. The result is a far less proven, but more useful derivative of the unmodified NEPTUNE system.

  8. Modeling and monitoring of tooth fillet crack growth in dynamic simulation of spur gear set

    NASA Astrophysics Data System (ADS)

    Guilbault, Raynald; Lalonde, Sébastien; Thomas, Marc

    2015-05-01

    This study integrates a linear elastic fracture mechanics analysis of the tooth fillet crack propagation into a nonlinear dynamic model of spur gear sets. An original formulation establishes the rigidity of sound and damaged teeth. The formula incorporates the contribution of the flexible gear body and real crack trajectories in the fillet zone. The work also develops a KI prediction formula. A validation of the equation estimates shows that the predicted KI are in close agreement with published numerical and experimental values. The representation also relies on the Paris-Erdogan equation completed with crack closure effects. The analysis considers that during dN fatigue cycles, a harmonic mean of ΔK assures optimal evaluations. The paper evaluates the influence of the mesh frequency distance from the resonances of the system. The obtained results indicate that while the dependence may demonstrate obvious nonlinearities, the crack progression rate increases with a mesh frequency augmentation. The study develops a tooth fillet crack propagation detection procedure based on residual signals (RS) prepared in the frequency domain. The proposed approach accepts any gear conditions as reference signature. The standard deviation and mean values of the RS are evaluated as gear condition descriptors. A trend tracking of their responses obtained from a moving linear regression completes the analysis. Globally, the results show that, regardless of the reference signal, both descriptors are sensitive to the tooth fillet crack and sharply react to tooth breakage. On average, the mean value detected the crack propagation after a size increase of 3.69 percent as compared to the reference condition, whereas the standard deviation required crack progressions of 12.24 percent. Moreover, the mean descriptor shows evolutions closer to the crack size progression.
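
    The crack-growth part of the representation, da/dN = C(ΔK)^m, can be illustrated by a simple cycle-block integration of the Paris-Erdogan law with ΔK approximated from an edge-crack expression; the material constants, stress range, and geometry factor below are generic illustrative values, not the gear-steel data or the stress-intensity formula developed in the paper.

      import numpy as np

      # Paris-Erdogan law da/dN = C * (dK)^m with a constant geometry factor;
      # all numbers are illustrative.
      C, m = 6.9e-12, 3.0          # m/cycle and (MPa*sqrt(m))^-m
      Y = 1.12                     # geometry factor (assumed constant here)
      delta_sigma = 250.0          # stress range at the fillet, MPa
      a = 0.2e-3                   # initial crack depth, m
      a_crit = 2.0e-3              # depth treated as tooth failure, m

      n_cycles, dN = 0, 1000       # integrate in blocks of dN cycles
      while a < a_crit:
          dK = Y * delta_sigma * np.sqrt(np.pi * a)   # stress intensity range, MPa*sqrt(m)
          a += C * dK ** m * dN
          n_cycles += dN

      print(f"cycles to reach {a_crit * 1e3:.1f} mm crack depth: {n_cycles:.3e}")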

  9. A generic set of HF antennas for use with spherical model expansions

    NASA Astrophysics Data System (ADS)

    Katal, Nedim

    1990-03-01

    An antenna engineering handbook and database program has been constructed by engineers at the Lawrence Livermore National Laboratory (LLNL) using the Numerical Electromagnetics Code (NEC) antenna modeling program to prepare performance data on tactical field communication antennas used by the Army. It is desirable to have this information installed on a personal computer (PC), using relational database techniques to select antennas based on performance criteria. This thesis obtains and analyses current distributions and radiation pattern data by using NEC for the following set of four (4) high frequency (HF) tactical generic antennas to be used in future spherical mode expansion work: a quarter wavelength basic whip, a one-wavelength horizontal quad loop, a 564-foot longwire, and a sloping vee beam dipole. The results of this study show that the basic whip antenna provides good groundwave communication, but it has poor near vertical incident skywave (NVIS) performance. The current distribution has the characteristics of standing waves. The horizontal quad loop antenna is good for NVIS and medium range skywave communications. The current distribution is sinusoidal and continuous around the loop. The long wire antenna allows short, medium and long range communications and a standing wave current distribution occurs along the antenna axis due to non-termination. The sloping vee beam antenna favors long range communication and the current distribution is mainly that of travelling sinusoidal waves. Because of their well-known efficiency, the basic whip and quad loop can be used as reference standards for the spherical mode expansion work. The longwire and sloping vee beam antenna are unwieldy, but they are effective as base station antennas.

  10. Dynamics of the Earth Magnetic Field during the period of high variability covering the Laschamp and Mono Lake excursions.

    NASA Astrophysics Data System (ADS)

    Laj, Carlo; Guillou, Hervé; Kissel, Catherine

    2014-05-01

    We report on a synthesis of new paleomagnetic data (direction and intensity), conducted together with new K/Ar and 40Ar/39Ar dating over the past few years on 37 lava flows from the Chaîne des Puys (Massif Central, France). New flows emplaced during the Laschamp excursion have been identified and their K/Ar and 40Ar/39Ar dating further improves the precision of the age of this excursion, now established at 41.3 ± 0.6 ka (2σ). Also, transitional flows corresponding to the Mono Lake excursion have been identified for the first time in this region, widening the geographical expression of this excursion. Absolute intensities obtained from 22 flows out of the 35 studied flows indicate that the intensity of the Earth's magnetic field is highly reduced, not only during the Laschamp but also during the Mono Lake excursion (to about 10% of the present-day field value). These two well identified and well dated minima therefore now constitute very precise and accurate tie-points for the chronostratigraphy of this time period. In the 7000 years long interval separating the two excursions, the intensity of the Earth's magnetic field recovers to almost non-transitional values. This rules out the recent suggestion that a long intensity minimum (6000 years) between the two excursions would have resulted in the extinction of the Neandertals, via a strong decrease of the atmospheric ozone and an increase in UVB concentration. Not only the amplitude but also the duration of the observed changes are remarkably consistent in the high resolution records obtained from marine sediments, lavas and cosmogenic isotopes from polar ice. They indicate that the duration of the Laschamp can be estimated at about 1500 years based on the intensity drop and to about 640 years based on the directional change. If an excursion is an aborted polarity state as previously suggested, this would imply a duration of only 320 years for a polarity reversal, far shorter than what is invoked in the

  11. A New 62-sample Record of the Mono Lake Excursion Waveform from the Depocenter Sediments of Summer Lake, OR

    NASA Astrophysics Data System (ADS)

    Negrini, R. M.; McCuan, D. T.; Horton, R. A.; Verosub, K. L.

    2011-12-01

    A new core from Summer Lake, Oregon provides the primary dataset for a composite, 62-sample record of the Mono Lake Excursion (MLE) waveform. The magnetograms and virtual geomagnetic poles (VGPs) are consistent with those associated with the MLE record from Mono Lake (e.g., Liddicoat and Coe, 1979). The added detail from this new record firmly establishes three distinct VGP clusters centered first on easternmost Asia/Siberia, then on Europe, and, finally, on North America. The jumps between clusters involve typically one sample, which represents only a few decades of time. The excursion is bracketed by tephra of known age (the Mount St. Helens Cy 46.0 ± 6 ka and the Wono 27.3 ± 0.3 14C kyr B.P.) and the age of the excursion is ~28 14C kyr B.P. based on an average of five radiocarbon ages from below, within and above the excursion interval. A second waveform that exhibits shallowing inclinations and easterly declination swings upsection is truncated by a prominent unconformity. These PSV features and the associated RPI leading up to this unconformity correlate with those of the onset of the Laschamp Excursion (Lund et al., 2005). Both radiocarbon and PSV correlations support missing sediment from the Summer Lake record between 42.5 and 38 GISP2 ka. This sediment hiatus correlates to unconformities or lowstands in other Great Basin lakes, suggesting a Heinrich 4-induced drought that affected much of western North America.
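
    The virtual geomagnetic poles referred to throughout these excursion records follow from the standard geocentric-dipole transformation of a site's declination and inclination; the sketch below implements that textbook formula for a hypothetical excursional direction at roughly Summer Lake's coordinates.

      import numpy as np

      def vgp(site_lat, site_lon, dec, inc):
          """Virtual geomagnetic pole (lat, lon in degrees) from a site's
          declination/inclination, using the standard geocentric dipole formulas."""
          lam, phi = np.radians(site_lat), np.radians(site_lon)
          D, I = np.radians(dec), np.radians(inc)

          p = np.arctan2(2.0, np.tan(I))                     # magnetic colatitude
          lat_p = np.arcsin(np.sin(lam) * np.cos(p) + np.cos(lam) * np.sin(p) * np.cos(D))
          beta = np.arcsin(np.clip(np.sin(p) * np.sin(D) / np.cos(lat_p), -1.0, 1.0))
          if np.cos(p) >= np.sin(lam) * np.sin(lat_p):
              lon_p = phi + beta
          else:
              lon_p = phi + np.pi - beta
          return np.degrees(lat_p), np.degrees(lon_p) % 360.0

      # Hypothetical excursional direction at roughly Summer Lake, OR (~42.9 N, 239.4 E):
      print(vgp(42.9, 239.4, dec=110.0, inc=-20.0))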

  12. Variable consistency and variable precision models for dominance-based fuzzy rough set analysis of possibilistic information systems

    NASA Astrophysics Data System (ADS)

    Fan, Tuan-Fang; Liau, Churn-Jung; Liu, Duen-Ren

    2013-08-01

    The dominance-based fuzzy rough set approach (DFRSA) is a theoretical framework that can deal with multi-criteria decision analysis of possibilistic information systems. While a set of comprehensive decision rules can be induced from a possibilistic information system by using DFRSA, generation of several intuitively justified rules is sometimes blocked by objects that only partially satisfy the antecedents of the rules. In this paper, we use the variable consistency models and variable precision models of DFRSA to cope with the problem. The models admit rules that are not satisfied by all objects. It is only required that the proportion of objects satisfying the rules must be above a threshold called a consistency level or a precision level. In the presented models, the proportion of objects is represented as a relative cardinality of a fuzzy set with respect to another fuzzy set. We investigate three types of models based on different definitions of fuzzy cardinalities including Σ-counts, possibilistic cardinalities, and probabilistic cardinalities; and the consistency levels or precision levels corresponding to the three types of models are, respectively, scalars, fuzzy numbers, and random variables.
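
    The consistency-level idea can be sketched with the simplest of the three cardinalities, the Σ-count: a rule is admitted when the relative sigma-count of objects supporting both the antecedent and the decision, taken with respect to the antecedent, reaches the chosen level. The membership values and the min t-norm below are illustrative assumptions.

      import numpy as np

      def sigma_count_consistency(antecedent, decision):
          """Rule consistency as a relative sigma-count: the cardinality of the
          fuzzy set 'satisfies the antecedent AND belongs to the decision class'
          divided by the cardinality of 'satisfies the antecedent' (min t-norm)."""
          return np.minimum(antecedent, decision).sum() / antecedent.sum()

      # Hypothetical membership degrees for 8 objects of a possibilistic table.
      antecedent = np.array([1.0, 0.9, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])  # satisfies the rule antecedent
      decision   = np.array([1.0, 1.0, 0.6, 0.9, 0.1, 0.0, 0.8, 0.9])  # belongs to the decision class union

      consistency_level = 0.8
      ratio = sigma_count_consistency(antecedent, decision)
      print(f"consistency ratio = {ratio:.2f}",
            "-> rule admitted" if ratio >= consistency_level else "-> rule rejected")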

  13. A gridded global data set of soil, intact regolith, and sedimentary deposit thicknesses for regional and global land surface modeling

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter; Zeng, Xubin; Troch, Peter A.; Niu, Guo-Yue; Williams, Zachary; Brunke, Michael A.; Gochis, David

    2016-03-01

    Earth's terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (˜1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. We anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models. This article was corrected on 2 FEB 2016. See the end of the full text for details.

  14. Possible recording of the Mono Lake Excursion in cored sediment from Clear Lake, California

    NASA Astrophysics Data System (ADS)

    Liddicoat, Joseph; Verosub, Kenneth

    2010-05-01

    We report the possible recording of the Mono Lake Excursion (MLE) in cored sediment from Clear Lake, CA. The locality (39.0˚N, 237.3˚E) is about 120 km north of San Francisco, CA, and 320 km northwest of the Mono Basin, CA, where the MLE first was discovered in North America (Denham and Cox, 1971). The field behaviour at Clear Lake that might be the MLE is recorded in clay and peaty clay about 50 cm below the top of the lowermost 80-cm core slug of a 21.6-m core. The coring was done by the wire-line method (Sims and Rymer, 1975) and the samples (rectangular solids 21 mm on a side and 15 mm high) were measured in a cryogenic magnetometer after demagnetization in an alternating field to 35 milliTesla (Verosub, 1977). The continuously-spaced samples record negative inclination of nearly 20˚ and northerly declination when unnormalized relative field intensity was reduced by an order of magnitude from the mean value. Those palaeomagnetic directions are followed immediately by positive inclination to about 50˚ and easterly declination of about 60˚ when the field intensity is at a relative high. That pattern of behaviour is recorded at three localities (Wilson Creek, Mill Creek, and Warm Springs) in the Mono Basin at the MLE (Liddicoat and Coe, 1979; Liddicoat, 1992). The path of the Virtual Geomagnetic Poles (VGPs) at Clear Lake forms a clockwise-trending loop that is centered at 65˚N, 20˚E in the hemisphere away from the locality. The VGP that is farthest from the North Geographic Pole is at 29.3˚N, 337.1˚E, which is close to the path formed by the VGPs in the older portion of the MLE (Liddicoat and Coe, 1979; Liddicoat, 1992). The age of the sediment recording the anomalous palaeomagnetic directions in Clear Lake is about 30,000 years B.P. (Verosub, 1977). That age was determined from six (uncalibrated) radiocarbon dates, three of which are from near the base of the core (Sims and Rymer, 1975) where there are the anomalous palaeomagnetic directions, and linear

  15. Modeling Mode Choice Behavior Incorporating Household and Individual Sociodemographics and Travel Attributes Based on Rough Sets Theory

    PubMed Central

    Chen, Xuewu; Wei, Ming; Wu, Jingxian; Hou, Xianyao

    2014-01-01

    Most traditional mode choice models are based on the principle of random utility maximization derived from econometric theory. Alternatively, mode choice modeling can be regarded as a pattern recognition problem reflected from the explanatory variables of determining the choices between alternatives. The paper applies the knowledge discovery technique of rough sets theory to model travel mode choices incorporating household and individual sociodemographics and travel information, and to identify the significance of each attribute. The study uses the detailed travel diary survey data of Changxing county which contains information on both household and individual travel behaviors for model estimation and evaluation. The knowledge is presented in the form of easily understood IF-THEN statements or rules which reveal how each attribute influences mode choice behavior. These rules are then used to predict travel mode choices from information held about previously unseen individuals and the classification performance is assessed. The rough sets model shows high robustness and good predictive ability. The most significant condition attributes identified to determine travel mode choices are gender, distance, household annual income, and occupation. Comparative evaluation with the MNL model also proves that the rough sets model gives superior prediction accuracy and coverage on travel mode choice modeling. PMID:25431585
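
    The flavor of rough-set rule induction can be shown with classical lower and upper approximations on a handful of invented travel-diary records: indiscernibility classes on the condition attributes yield certain IF-THEN rules where every object chooses the target mode and possible rules where at least one does. This is a bare-bones sketch, not the paper's full attribute-significance analysis.

      from collections import defaultdict

      # Hypothetical travel-diary records: (gender, distance_band, income_band) -> chosen mode.
      records = [
          (("male",   "long",  "high"), "car"),
          (("male",   "long",  "high"), "car"),
          (("female", "short", "low"),  "walk"),
          (("female", "short", "low"),  "bus"),   # inconsistent with the previous record
          (("male",   "short", "low"),  "bus"),
          (("female", "long",  "high"), "car"),
      ]

      # Group objects into indiscernibility classes on the condition attributes.
      classes = defaultdict(list)
      for conditions, mode in records:
          classes[conditions].append(mode)

      target = "car"
      lower, upper = [], []
      for conditions, modes in classes.items():
          if all(m == target for m in modes):
              lower.append(conditions)            # certain rule: conditions -> car
          if any(m == target for m in modes):
              upper.append(conditions)            # possible rule: conditions -> car

      print("certain IF-THEN rules :", lower)
      print("possible rules        :", upper)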

  16. Keeping the Purpose in Mind: The Implementation of Instructional Models in Physical Education Settings

    ERIC Educational Resources Information Center

    Gurvitch, Rachel; Metzler, Mike

    2010-01-01

    Models-Based Instruction (MBI) is a comprehensive approach to teaching and learning. In MBI, a teacher becomes familiar with multiple ways (called models) to plan, implement and assess instruction, and then selects the model that can best promote specific kinds of student learning in each unit. By using several models within the same curriculum, a…

  17. Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.

    PubMed

    Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S

    2008-10-01

    Model-based Roentgen Stereophotogrammetric Analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since radiographic projections would show more different shape features of the implants than in uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both tibial and femoral components were measured with both techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up model-based RSA presents a homogeneous distribution of precision for all the translation directions, but an inhomogeneous error for rotations; in particular, internal-external rotation presented higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of the internal-external rotations. For both prosthesis components the precisions of model-based RSA were below 0.2 mm for all the translations, and below 0.3 degrees for rotations about transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up model-based RSA is a valid alternative to traditional marker-based RSA where marking of the prosthesis is an enormous disadvantage. PMID:18635360

  18. Dependence of QSAR models on the selection of trial descriptor sets: a demonstration using nanotoxicity endpoints of decorated nanotubes.

    PubMed

    Shao, Chi-Yu; Chen, Sing-Zuo; Su, Bo-Han; Tseng, Yufeng J; Esposito, Emilio Xavier; Hopfinger, Anton J

    2013-01-28

    Little attention has been given to the selection of trial descriptor sets when designing a QSAR analysis even though a great number of descriptor classes, and often a greater number of descriptors within a given class, are now available. This paper reports an effort to explore interrelationships between QSAR models and descriptor sets. Zhou and co-workers (Zhou et al., Nano Lett. 2008, 8 (3), 859-865) designed, synthesized, and tested a combinatorial library of 80 surface modified, that is decorated, multi-walled carbon nanotubes for their composite nanotoxicity using six endpoints all based on a common 0 to 100 activity scale. Each of the six endpoints for the 29 most nanotoxic decorated nanotubes were incorporated as the training set for this study. The study reported here includes trial descriptor sets for all possible combinations of MOE, VolSurf, and 4D-fingerprints (FP) descriptor classes, as well as including and excluding explicit spatial contributions from the nanotube. Optimized QSAR models were constructed from these multiple trial descriptor sets. It was found that (a) both the form and quality of the best QSAR models for each of the endpoints are distinct and (b) some endpoints are quite dependent upon 4D-FP descriptors of the entire nanotube-decorator complex. However, other endpoints yielded equally good models only using decorator descriptors with and without the decorator-only 4D-FP descriptors. Lastly, and most importantly, the quality, significance, and interpretation of a QSAR model were found to be critically dependent on the trial descriptor sets used within a given QSAR endpoint study. PMID:23252880
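
    The core experimental design, building a model for every combination of trial descriptor classes and comparing their quality, can be sketched as a loop over descriptor-block combinations with a cross-validated regression on each; the synthetic descriptor blocks, the LassoCV learner, and the endpoint below are stand-ins, not the paper's 4D-fingerprint QSAR machinery.

      import numpy as np
      from itertools import combinations
      from sklearn.linear_model import LassoCV
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n = 29  # training-set size used in the study

      # Synthetic stand-ins for three descriptor classes computed on each nanotube.
      blocks = {
          "MOE":     rng.standard_normal((n, 20)),
          "VolSurf": rng.standard_normal((n, 15)),
          "4D-FP":   rng.standard_normal((n, 30)),
      }
      # Synthetic endpoint on a 0-100 nanotoxicity scale, loosely tied to a few columns.
      y = 50 + 10 * blocks["4D-FP"][:, 0] - 8 * blocks["MOE"][:, 2] + rng.normal(0, 5, n)

      results = {}
      for r in range(1, len(blocks) + 1):
          for combo in combinations(blocks, r):
              X = np.hstack([blocks[name] for name in combo])
              model = LassoCV(cv=5, max_iter=50000)
              results["+".join(combo)] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

      for name, score in sorted(results.items(), key=lambda kv: -kv[1]):
          print(f"{name:20s} mean CV r2 = {score:5.2f}")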

  19. Development and Validation of Decision Forest Model for Estrogen Receptor Binding Prediction of Chemicals Using Large Data Sets.

    PubMed

    Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2015-12-21

    Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals. PMID:26524122
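
    The internal/external validation scheme can be illustrated with a generic ensemble classifier evaluated by cross-validation on a training split and by balanced accuracy on a held-out split; note that scikit-learn's random forest is only a stand-in for the FDA decision forest algorithm, and the data are synthetic.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score, train_test_split
      from sklearn.metrics import balanced_accuracy_score
      from sklearn.datasets import make_classification

      # Synthetic stand-in for molecular descriptors and ER-binding labels.
      X, y = make_classification(n_samples=3308, n_features=200, n_informative=25,
                                 weights=[0.8, 0.2], random_state=0)
      X_train, X_ext, y_train, y_ext = train_test_split(X, y, test_size=0.33,
                                                        stratify=y, random_state=0)

      clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)

      # Internal validation: cross-validation on the training chemicals.
      cv_scores = cross_val_score(clf, X_train, y_train, cv=5, scoring="balanced_accuracy")
      print("internal 5-fold balanced accuracy:", cv_scores.mean().round(3))

      # External validation: a held-out set standing in for the ToxCast chemicals.
      clf.fit(X_train, y_train)
      print("external balanced accuracy:",
            round(balanced_accuracy_score(y_ext, clf.predict(X_ext)), 3))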

  20. A discriminant function model as an alternative method to spirometry for COPD screening in primary care settings in China

    PubMed Central

    Cui, Jiangyu; Zhou, Yumin; Tian, Jia; Wang, Xinwang; Zheng, Jingping; Zhong, Nanshan

    2012-01-01

    Objective COPD is often underdiagnosed in a primary care setting where the spirometry is unavailable. This study was aimed to develop a simple, economical and applicable model for COPD screening in those settings. Methods First we established a discriminant function model based on Bayes’ Rule by stepwise discriminant analysis, using the data from 243 COPD patients and 112 non-COPD subjects from our COPD survey in urban and rural communities and local primary care settings in Guangdong Province, China. We then used this model to discriminate COPD in additional 150 subjects (50 non-COPD and 100 COPD ones) who had been recruited by the same methods as used to have established the model. All participants completed pre- and post-bronchodilator spirometry and questionnaires. COPD was diagnosed according to the Global Initiative for Chronic Obstructive Lung Disease criteria. The sensitivity and specificity of the discriminant function model was assessed. Results The established discriminant function model included nine variables: age, gender, smoking index, body mass index, occupational exposure, living environment, wheezing, cough and dyspnoea. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, accuracy and error rate of the function model to discriminate COPD were 89.00%, 82.00%, 4.94, 0.13, 86.66% and 13.34%, respectively. The accuracy and Kappa value of the function model to predict COPD stages were 70% and 0.61 (95% CI, 0.50 to 0.71). Conclusions This discriminant function model may be used for COPD screening in primary care settings in China as an alternative option instead of spirometry. PMID:23205284
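
    The screening-performance figures above (sensitivity, specificity, likelihood ratios) follow directly from the confusion matrix of a fitted discriminant function; the sketch below fits a linear discriminant (a Bayes-rule classifier, standing in for the paper's stepwise discriminant analysis) on synthetic questionnaire data with the same nine variables and reports those statistics.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(4)

      # Synthetic stand-in for the nine questionnaire variables: age, gender,
      # smoking index, BMI, occupational exposure, living environment,
      # wheezing, cough, dyspnoea. COPD cases score higher on age, smoking and symptoms.
      n_copd, n_ctrl = 243, 112
      loc_copd = [65, 0.7, 30, 22, 0.4, 0.5, 0.6, 0.7, 0.6]
      loc_ctrl = [55, 0.5, 10, 24, 0.2, 0.5, 0.1, 0.2, 0.1]
      scale = [8, 0.3, 10, 3, 0.3, 0.3, 0.3, 0.3, 0.3]
      X = np.vstack([rng.normal(loc_copd, scale, size=(n_copd, 9)),
                     rng.normal(loc_ctrl, scale, size=(n_ctrl, 9))])
      y = np.array([1] * n_copd + [0] * n_ctrl)

      clf = LinearDiscriminantAnalysis().fit(X, y)   # Bayes-rule discriminant function
      pred = clf.predict(X)                          # in practice, evaluate on a separate sample

      tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
      sens, spec = tp / (tp + fn), tn / (tn + fp)
      print(f"sensitivity={sens:.2%} specificity={spec:.2%} "
            f"LR+={sens / (1 - spec):.2f} LR-={(1 - sens) / spec:.2f}")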

  1. BEST in CLASS: A Classroom-Based Model for Ameliorating Problem Behavior in Early Childhood Settings

    ERIC Educational Resources Information Center

    Vo, Abigail; Sutherland, Kevin S.; Conroy, Maureen A.

    2012-01-01

    As more young children enter school settings to attend early childhood programs, early childhood teachers and school psychologists have been charged with supporting a growing number of young children with chronic problem behaviors that put them at risk for the development of emotional/behavioral disorders (EBDs). There is a need for effective,…

  2. Achieving Student Success in a Regional Public Alternative School Setting through a Consequence-Based Model

    ERIC Educational Resources Information Center

    Burkholder, Sue M.; Merritt, Marti

    2007-01-01

    Genesis Alternative School is a regional, public alternative school setting for middle and high school students from four participating school divisions. It serves 1 rural county school division and 3 small city school divisions. Students are placed at Genesis for disciplinary reasons. Genesis is unique among alternative schools in Virginia…

  3. Best in Class: A Classroom-Based Model for Ameliorating Problem Behavior in Early Childhood Settings

    ERIC Educational Resources Information Center

    Vo, Abigail K.; Sutherland, Kevin S.; Conroy, Maureen A.

    2012-01-01

    As more young children enter school settings to attend early childhood programs, early childhood teachers and school psychologists have been charged with supporting a growing number of young children with chronic problem behaviors that put them at risk for the development of emotional/behavioral disorders (EBDs). There is a need for effective,…

  4. A model of shield-strata interaction and its implications for active shield setting requirements

    SciTech Connect

    Barczak, T.M.; Oyler, D.C.

    1991-12-01

    This U.S. Bureau of Mines study evaluates factors that influence longwall support and strata interaction. The longwall system is composed of an immediate and main roof structure and three supporting foundations; the main roof is a structure that is generally supported by all three foundations, while the immediate roof acts as a beam that cantilevers from the coal face to the powered support. In most cases, shield loading involves a complex interaction of both main roof and immediate roof behavior and is a combination of loads produced from convergence of the main roof and displacements of the immediate roof caused by deformations of the cantilevered roof beam. Since the shield stiffness remains constant for all leg pressures and main roof convergence is irresistible in terms of shield capacity, the shield must be able to control the behavior of the immediate roof or floor structure for shield loading to be sensitive to setting pressures. If the goal is to minimize total shield loading, any active setting force must be offset by reduced passive shield loading to justify the active setting loads. Field data suggest that the typical reductions in passive loading do not justify the required increases in setting pressure in some applications.

  5. Outcomes of a Behavioral Education Model for Children with Autism in a Mainstream School Setting

    ERIC Educational Resources Information Center

    Grindle, Corinna F.; Hastings, Richard P.; Saville, Maria; Hughes, J. Carl; Huxley, Kathleen; Kovshoff, Hanna; Griffith, Gemma M.; Walker-Jones, Elin; Devonshire, Katherine; Remington, Bob

    2012-01-01

    The authors report 1-year outcomes for 11 children (3-7 years) with autism who attended an "Applied Behavior Analysis (ABA) classroom" educational intervention in a mainstream school setting. The children learned new skills by the end of 1 year and learned additional skills during a 2nd year. Group analysis of standardized test outcomes (IQ and…

  6. Valuing the Adult Learner in E-Learning: A Conceptual Model for Corporate Settings

    ERIC Educational Resources Information Center

    Waight, Consuelo L.; Stewart, Barbara

    2005-01-01

    The framework describes how e-Learning engagement, learning, and transfer within corporate settings can be achieved if antecedents (for example, needs assessment and learner analysis) and moderators (for example, return on investment and learning theories) are adhered to. The realization of antecedents and moderators, however, are…

  7. MODELING STREAMFLOW USING SWAT WITH DIFFERENT SOIL AND LAND COVER GEOSPATIAL DATA SETS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The integration of geographical information systems (GIS) and hydrologic models provides the user the ability to simulate watershed scale processes within a spatially digitized computer based environment. Such model simulations have become increasingly popular within the scientific community for sev...

  8. Numerical predictions of the thermal behaviour and resultant effects of grouting cements while setting prosthetic components in bone.

    PubMed

    Quarini, G L; Learmonth, I D; Gheduzzi, S

    2006-07-01

    Acrylic cements are commonly used to attach prosthetic components in joint replacement surgery. The cements set in short periods of time by a complex polymerization of initially liquid monomer compounds into solid structures with accompanying significant heat release. Two main problems arise from this form of fixation: the first is the potential damage caused by the temperature excursion, and the second is incomplete reaction leaving active monomer compounds, which can potentially be slowly released into the patient. This paper presents a numerical model predicting the temperature-time history in an idealized prosthetic-cement-bone system. Using polymerization kinetics equations from the literature, the degree of polymerization is predicted, which is found to be very dependent on the thermal history of the setting process. Using medical literature, predictions for the degree of thermal bone necrosis are also made. The model is used to identify the critical parameters controlling thermal and unreacted monomer distributions. PMID:16898219
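
    The coupled heat-conduction/polymerization idea can be sketched as a one-dimensional explicit finite-difference model of a bone-cement-prosthesis section with a first-order Arrhenius heat source; every property value, the kinetics, and the crude treatment of variable conductivity below are illustrative assumptions, not the validated model of the paper.

      import numpy as np

      # 1D explicit finite-difference sketch of a bone | cement | prosthesis section.
      # Heat release follows first-order Arrhenius polymerization kinetics; all
      # property values are illustrative, and conduction across the material
      # interfaces is treated crudely (local k times the Laplacian).
      L_bone, L_cem, L_pros, dx = 5e-3, 3e-3, 5e-3, 2e-4          # metres
      n = int((L_bone + L_cem + L_pros) / dx)
      x = np.linspace(0.0, L_bone + L_cem + L_pros, n)
      in_cem = (x > L_bone) & (x <= L_bone + L_cem)

      k = np.where(in_cem, 0.2, np.where(x <= L_bone, 0.4, 14.0))            # W/m/K
      rho_c = np.where(in_cem, 1.1e6, np.where(x <= L_bone, 2.0e6, 4.0e6))   # J/m^3/K

      H = 1.7e8                      # heat of polymerization per unit cement volume, J/m^3
      A, Ea, R = 5e10, 7.5e4, 8.314  # Arrhenius prefactor (1/s), activation energy (J/mol)
      T = np.full(n, 310.0)          # start at body temperature, K
      alpha = np.where(in_cem, 0.0, 1.0)   # degree of conversion (set to 1 outside the cement)

      dt = 0.4 * dx ** 2 * rho_c.min() / k.max()   # conservative explicit stability limit
      peak = T.copy()
      for _ in range(int(300.0 / dt)):             # simulate five minutes of curing
          rate = A * np.exp(-Ea / (R * T)) * (1.0 - alpha) * in_cem
          dalpha = np.clip(rate * dt, 0.0, 1.0 - alpha)
          lap = np.zeros(n)
          lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
          T = T + (dt * k * lap + H * dalpha) / rho_c
          T[0] = T[-1] = 310.0                     # far boundaries held at body temperature
          alpha += dalpha
          peak = np.maximum(peak, T)

      i_interface = np.argmax(in_cem)              # first cement node next to the bone
      print("peak temperature near the cement-bone interface (C):", round(peak[i_interface] - 273.15, 1))
      print("mean final conversion of the cement:", round(alpha[in_cem].mean(), 2))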

  9. Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.

    2011-01-01

    A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…

  10. Estimation of general and set steps of roof caving in rock mass with excavations at mining. Numerical modelling.

    NASA Astrophysics Data System (ADS)

    Eremin, M.; Makarov, P.

    2016-04-01

    The article presents results of 2D modelling of the fracture of rock mass elements. The modelling results are in good agreement with empirical and theoretical estimates of roof caving steps for flat-dipping coal seams when the horizons are not deep (less than 600 m). Estimates of the general and set steps of roof caving are given for longwall face conditions at different lengths of the main roof containing sandstone.

  11. Mass Flux Stability in the Presence of Temperature Excursions and Perturbations in Solid 3He-4He Mixtures

    NASA Astrophysics Data System (ADS)

    Vekhov, Ye.; Hallock, R. B.

    2016-03-01

    The DC superfluid 4He mass flux through a cell filled with solid 4He diluted by ppm amounts of 3He is susceptible to flux changes when perturbations of the solid sample are imposed. We report on the details of the reproducibility of the flux following excursions in temperature and cryostat helium transfer-induced apparatus vibration, particularly including excursions to temperatures above which the flux is immeasurably small. And we report on behavior following an annealing, partial melting, and re-freezing of the sample at temperatures and pressures close to and on the melting curve.

  12. PR-Set7 is degraded in a conditional Cul4A transgenic mouse model of lung cancer

    DOE PAGES Beta

    Wang, Yang; Xu, Zhidong; Mao, Jian -Hua; Hsieh, David; Au, Alfred; Jablons, David M.; Li, Hui; You, Lian

    2015-06-01

    Background and objective. Maintenance of genomic integrity is essential to ensure normal organismal development and to prevent diseases such as cancer. PR-Set7 (also known as Set8) is a cell cycle regulated enzyme that catalyses monomethylation of histone 4 at Lys20 (H4K20me1) to promote chromosome condensation and prevent DNA damage. Recent studies show that CRL4CDT2-mediated ubiquitylation of PR-Set7 leads to its degradation during S phase and after DNA damage. This might occur to ensure appropriate changes in chromosome structure during the cell cycle or to preserve genome integrity after DNA damage. Methods. We developed a new model of lung tumor development in mice harboring a conditionally expressed allele of Cul4A. We have therefore used a mouse model to demonstrate for the first time that Cul4A is oncogenic in vivo. With this model, staining of PR-Set7 in the preneoplastic and tumor lesions in AdenoCre-induced mouse lungs was performed. Meanwhile we identified higher protein level changes of γ-tubulin and pericentrin by IHC. Results. The level of PR-Set7 was down-regulated in the preneoplastic and adenocarcinomous lesions following over-expression of Cul4A. We also identified higher levels of the proteins pericentrin and γ-tubulin in Cul4A mouse lungs induced by AdenoCre. Conclusion. PR-Set7 is a direct target of Cul4A for degradation and is involved in the formation of lung tumors in the conditional Cul4A transgenic mouse model.

  13. Simulation of Heterogeneous Atom Probe Tip Shapes Evolution during Field Evaporation Using a Level Set Method and Different Evaporation Models

    SciTech Connect

    Xu, Zhijie; Li, Dongsheng; Xu, Wei; Devaraj, Arun; Colby, Robert J.; Thevuthasan, Suntharampillai; Geiser, B. P.; Larson, David J.

    2015-04-01

    In atom probe tomography (APT), accurate reconstruction of the spatial positions of field evaporated ions from measured detector patterns depends upon a correct understanding of the dynamic tip shape evolution and evaporation laws of component atoms. Artifacts in APT reconstructions of heterogeneous materials can be attributed to the assumption of homogeneous evaporation of all the elements in the material in addition to the assumption of a steady state hemispherical dynamic tip shape evolution. A level set method based specimen shape evolution model is developed in this study to simulate the evaporation of synthetic layered-structured APT tips. The simulation results of the shape evolution by the level set model qualitatively agree with the finite element method and the literature data using the finite difference method. The asymmetric evolving shape predicted by the level set model demonstrates the complex evaporation behavior of heterogeneous tip and the interface curvature can potentially lead to the artifacts in the APT reconstruction of such materials. Compared with other APT simulation methods, the new method provides smoother interface representation with the aid of the intrinsic sub-grid accuracy. Two evaporation models (linear and exponential evaporation laws) are implemented in the level set simulations and the effect of evaporation laws on the tip shape evolution is also presented.
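
    The level set ingredient can be sketched in two dimensions: the specimen surface is the zero contour of a signed-distance-like function that is made to recede into the solid with a material-dependent normal speed, using a first-order upwind (Godunov-type) update. The layered geometry, speeds, periodic boundaries, and grid below are illustrative assumptions, not the authors' evaporation laws or tip model.

      import numpy as np

      # The tip surface is the zero contour of phi (negative inside the specimen).
      # It recedes into the solid with a material-dependent normal speed standing
      # in for a field-evaporation law.
      nx, ny, h = 200, 200, 1.0
      x, y = np.meshgrid(np.arange(nx) * h, np.arange(ny) * h, indexing="ij")

      phi = np.sqrt((x - 100.0) ** 2 + (y - 40.0) ** 2) - 60.0   # circular cap as the initial tip
      layer = (y // 20.0) % 2 == 0                               # alternating horizontal layers
      speed = np.where(layer, 1.0, 0.4)                          # fast- vs slow-evaporating material

      def upwind_grad(phi, h):
          """One-sided (Godunov-type) |grad phi| for an interface receding into the solid."""
          dxm = (phi - np.roll(phi, 1, axis=0)) / h
          dxp = (np.roll(phi, -1, axis=0) - phi) / h
          dym = (phi - np.roll(phi, 1, axis=1)) / h
          dyp = (np.roll(phi, -1, axis=1) - phi) / h
          return np.sqrt(np.minimum(dxm, 0.0) ** 2 + np.maximum(dxp, 0.0) ** 2 +
                         np.minimum(dym, 0.0) ** 2 + np.maximum(dyp, 0.0) ** 2)

      dt = 0.5 * h / speed.max()                                 # CFL-limited time step
      for _ in range(80):
          phi = phi + dt * speed * upwind_grad(phi, h)           # phi grows, so the solid shrinks

      inside = np.where(phi[100, :] < 0.0)[0]                    # solid nodes along the tip axis
      print("tip apex position along the axis:", int(inside.max()) if inside.size else "fully evaporated")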

  14. Testing Interdisciplinary Models of Dialogue Settings for Improving the Effectiveness of Supervisor's Verbal Behaviors in a Supervisory Conference.

    ERIC Educational Resources Information Center

    Michalak, Daniel A.

    This document reports on a study that (1) investigated and tested interdisciplinary models of dialogue settings akin to the supervisory conference in student teaching, and (2) gathered information about the verbal behavior of university supervisors and supervising teachers during a student teaching conference. Data gathered from 10 pairs of…

  15. A Model of Screening and Goal-Setting in Short-Term Counseling with Sexual Abuse Survivors.

    ERIC Educational Resources Information Center

    Lese, Karen P.

    Although plentiful information is available about the long-term treatment of sexual abuse survivors, a framework for the short-term treatment of this population is lacking in the literature. A preliminary model of screening and goal-setting in short-term therapy for survivors, to be used at university and college counseling centers, is presented…

  16. Possible Recording of the Hilina Pali Excursion in the Mono Basin, California

    NASA Astrophysics Data System (ADS)

    Coe, R.; Liddicoat, J.

    2012-04-01

    Inclination of about negative 40˚ in basalt from Kilauea Volcano, Hawaii (Teanby et al., 2002), that is assigned an age of about 18,000 radiocarbon years (uncorrected) (Coe et al., 1978, after Rubin and Berthold, 1961), and an excursion of similar age at Changbaishan Volcano in northeastern China, dated by 40Ar/39Ar (Singer et al., 2011) and earlier interpreted to be the Blake Subchron (Zhu et al., 2000) using K/Ar (Liu, 1987) and 40Ar/39Ar dates (Lin, 1999), might be recorded as shallow positive inclination in lacustrine siltstone in the bank of Wilson Creek in the Mono Basin, CA. The siltstone was deposited in Pleistocene Lake Russell, of which Mono Lake is the remnant, and was exposed when Wilson Creek was incised as the shoreline of Mono Lake receded (Lajoie, 1968). Basaltic and rhyolitic volcanic ash layers exposed in the bank of the creek are stratigraphic markers that have been important for studies of the Mono Lake Excursion (Denham and Cox, 1971; Liddicoat and Coe, 1979; Liddicoat, 1992; Coe and Liddicoat, 1994) and Pleistocene climate in the U.S. Great Basin (Zimmerman et al., 2006). Those ash layers likewise are useful for locating paleomagnetic directions along strike that might be the negative inclination in Hawaii named the Hilina Pali Excursion (Teanby et al., 2002). The portion of the lacustrine section exposed along Wilson Creek that is of interest records waveform Delta in Lund et al. (1988) in Subunit E of Lajoie (1993), which is bracketed by ash layers 12 and 13; in Lajoie (1968), those ash layers are numbered 8 and 7, respectively. About midway in Subunit E, which has a thickness of 1.1 m, the inclination is about 15˚ in four back-to-back horizons that span 8 cm. The subsamples, each 2 cm thick, were treated by either alternating field or thermal demagnetization. The Virtual Geomagnetic Pole (VGP) for the horizon with the shallowest inclination (14.9˚) is 53.8˚ N, 22.7˚ E (n = 6, Alpha-95 = 2.3˚), and the VGPs within waveform Delta when followed

  17. A minimal set of invariants as a systematic approach to higher order gravity models: physical and cosmological constraints

    SciTech Connect

    Moldenhauer, Jacob; Ishak, Mustapha E-mail: mishak@utdallas.edu

    2009-12-01

    We compare higher order gravity models to observational constraints from magnitude-redshift supernova data, the distance to the last scattering surface of the CMB, and Baryon Acoustic Oscillations. We follow a recently proposed systematic approach to higher order gravity models based on minimal sets of curvature invariants, and select models that pass some physical acceptability conditions (free of ghost instabilities, real and positive propagation speeds, and free of separatrices). Models that satisfy these physical and observational constraints are found in this analysis and provide fits to the data that are very close to those of the LCDM concordance model. However, we find that the limitation of the models considered here comes from the presence of superluminal mode propagations for the constrained parameter space of the models.

  18. The search for stable prognostic models in multiple imputed data sets

    PubMed Central

    2010-01-01

    Background In prognostic studies, model instability and missing data can be troubling factors. Proposed methods for handling these situations are bootstrapping (B) and multiple imputation (MI). The authors examined the influence of these methods on model composition. Methods Models were constructed using a cohort of 587 patients consulting between January 2001 and January 2003 with a shoulder problem in general practice in the Netherlands (the Dutch Shoulder Study). Outcome measures were persistent shoulder disability and persistent shoulder pain. Potential predictors included socio-demographic variables, characteristics of the pain problem, physical activity and psychosocial factors. Model composition and performance (calibration and discrimination) were assessed for models using a complete case analysis, MI, bootstrapping, or both MI and bootstrapping. Results Results showed that model composition varied between models as a result of how missing data were handled, and that bootstrapping provided additional information on the stability of the selected prognostic model. Conclusion In prognostic modeling, missing data need to be handled by MI, and bootstrap model selection is advised in order to provide information on model stability. PMID:20846460
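
    The following sketch illustrates one way to combine multiple imputation with bootstrap-based variable selection in the spirit of the design described above; the imputer, the forward-selection rule and all variable handling are scikit-learn-based assumptions for illustration, not the procedure or variables of the Dutch Shoulder Study.

```python
# Hedged sketch: inclusion frequency of predictors across multiply imputed
# data sets and bootstrap resamples (illustrative, not the study's method).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector

def bootstrap_selection_frequency(df, outcome, n_imputations=5, n_boot=50):
    """Return how often each predictor is selected across MI x bootstrap runs."""
    predictors = [c for c in df.columns if c != outcome]
    counts = pd.Series(0.0, index=predictors)
    for m in range(n_imputations):
        imputed = pd.DataFrame(
            IterativeImputer(random_state=m).fit_transform(df),
            columns=df.columns)
        for b in range(n_boot):
            sample = imputed.sample(frac=1.0, replace=True, random_state=b)
            X, y = sample[predictors], (sample[outcome] > 0.5).astype(int)
            selector = SequentialFeatureSelector(
                LogisticRegression(max_iter=1000),
                n_features_to_select=3, direction="forward")
            selector.fit(X, y)
            counts[selector.get_support()] += 1   # tally selected predictors
    return counts / (n_imputations * n_boot)      # inclusion frequencies
```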

  19. A new adaptation of linear reservoir models in parallel sets to assess actual hydrological events

    NASA Astrophysics Data System (ADS)

    Mateo Lázaro, Jesús; Sánchez Navarro, José Ángel; García Gil, Alejandro; Edo Romero, Vanesa

    2015-05-01

    A methodology based on Parallel Linear Reservoir (PLR) models is presented. To carry it out, the methodology has been implemented within the SHEE software (Simulation of Hydrological Extreme Events), a tool for the analysis of hydrological processes in catchments with the management and display of DEMs and datasets. The algorithms of the models traverse the cells and the drainage network by means of the Watershed Traversal Algorithm (WTA), which runs the entire drainage network of a basin in both directions, upwards and downwards, and is ideal for incorporating the models of the hydrological processes of the basins into its structure. The WTA methodology is combined with another one based on models of Parallel Linear Reservoirs (PLR) whose main qualities include: (1) the models are defined by observing the recession curves of actual hydrographs, i.e., the watershed's actual responses; (2) the models serve as a way to simulate the routing through the watershed and its different reservoirs; and (3) the models allow calculating the water balance, which is essential to the study of actual events in the watershed. A complete hydrometeorological event needs the combination of several models, each of which represents a hydrological process. The PLR model is a routing model, but it also contributes to the adjustment of other models (e.g., the rainfall-runoff model) and allows establishing a distributed model of effective rainfall for an actual event that occurred in a basin. In addition, the proposed formulation solves the problem of distributing the rainfall among the reservoirs in the reservoir combination models.
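
    A minimal sketch of routing through parallel linear reservoirs is given below; the number of reservoirs, storage constants and input split fractions are illustrative assumptions, not values calibrated in the paper.

```python
# Minimal sketch: parallel linear reservoirs, each obeying dS/dt = f*I(t) - S/k.
import numpy as np

def parallel_linear_reservoirs(inflow, k_values, fractions, dt=1.0):
    """Return the total outflow hydrograph from reservoirs acting in parallel."""
    inflow = np.asarray(inflow, dtype=float)
    storages = np.zeros(len(k_values))
    outflow = np.zeros_like(inflow)
    for t, i_t in enumerate(inflow):
        for j, (k, f) in enumerate(zip(k_values, fractions)):
            q = storages[j] / k                   # linear reservoir: Q = S / k
            storages[j] += (f * i_t - q) * dt     # explicit water balance update
            outflow[t] += q
    return outflow

# Example: a fast and a slow reservoir sharing the effective rainfall
rain = [0, 5, 12, 8, 3, 1, 0, 0, 0, 0]
q = parallel_linear_reservoirs(rain, k_values=[2.0, 10.0], fractions=[0.6, 0.4])
```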

  20. Simulation skill of APCC set of global climate models for Asian summer monsoon rainfall variability

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Singh, G. P.; Singh, Vikas

    2015-04-01

    The performance of 11 Asia-Pacific Economic Cooperation Climate Center (APCC) global climate models (both coupled and uncoupled) in simulating the seasonal summer (June-August) monsoon rainfall variability over Asia (especially over India and East Asia) has been evaluated in detail using hind-cast data (3 months in advance) generated from APCC, which provides regional climate information product services based on multi-model ensemble dynamical seasonal prediction systems. The skill of each global climate model over Asia was tested separately in detail for a period of 21 years (1983-2003), and the simulated Asian summer monsoon rainfall (ASMR) was verified using various statistical measures for the Indian and East Asian land masses separately. The analysis found a large variation in the spatial ASMR simulated by the uncoupled models compared to the coupled models (such as the Predictive Ocean Atmosphere Model for Australia, National Centers for Environmental Prediction and Japan Meteorological Agency models). The simulated ASMR in the coupled models was closer to the Climate Prediction Centre Merged Analysis of Precipitation (CMAP) than in the uncoupled models, although the amount of ASMR was underestimated by both. The analysis also found a high spread in simulated ASMR among the ensemble members (suggesting that a model's performance is highly dependent on its initial conditions). The correlation analysis between sea surface temperature (SST) and ASMR shows that SST in the coupled models is more strongly associated with ASMR than in the uncoupled models (suggesting that air-sea interaction is well captured in the coupled models). The analysis of rainfall using various statistical measures suggests that the multi-model ensemble (MME) performed better than individual models, and that treating the Indian and East Asian land masses separately is more useful than considering Asian monsoon rainfall as a whole. The results of various statistical measures like skill of multi-model ensemble, large spread

  1. Performance of a Mathematical Model to Forecast Lives Saved from HIV Treatment Expansion in Resource-Limited Settings

    PubMed Central

    Kimmel, April D.; Fitzgerald, Daniel W.; Pape, Jean W.; Schackman, Bruce R.

    2014-01-01

    Background International guidelines recommend HIV treatment expansion in resource-limited settings, but funding availability is uncertain. We evaluated performance of a model that forecasts lives saved through continued HIV treatment expansion in Haiti. Methods We developed a computer-based, mathematical model of HIV disease and used incidence density analysis of patient-level Haitian data to derive model parameters for HIV disease progression. We assessed internal validity of model predictions and internally calibrated model inputs when model predictions did not fit the patient-level data. We then derived uncertain model inputs related to diagnosis and linkage to care, pre-treatment retention, and enrollment on HIV treatment through an external calibration process that selected input values by comparing model predictions to Haitian population-level data. Model performance was measured by fit to event-free survival (patient-level) and number receiving HIV treatment over time (population-level). Results For a cohort of newly HIV-infected individuals with no access to HIV treatment, the model predicts median AIDS-free survival of 9.0 years pre-calibration and 6.6 years post-calibration versus 5.8 years (95% CI 5.1, 7.0) from the patient-level data. After internal validation and calibration, 16 of 17 event-free survival measures (94%) had a mean percentage deviation between model predictions and the empiric data of <6%. After external calibration, the percentage deviation between model predictions and population-level data on the number on HIV treatment was <1% over time. Conclusions Validation and calibration resulted in a good-fitting model appropriate for health policy decision making. Using local data in a policy model-building process is feasible in resource-limited settings. PMID:25331914

  2. A Stochastic Flowering Model Describing an Asynchronically Flowering Set of Trees

    PubMed Central

    NORMAND, F.; HABIB, R.; CHADŒUF, J.

    2002-01-01

    A general stochastic model is presented that simulates the time course of flowering of individual trees and populations, integrating the synchronization of flowering both between and within trees. Making some hypotheses, a simplified expression of the model, called the ‘shoot’ model, is proposed, in which the synchronization of flowering both between and within trees is characterized by specific parameters. Two derived models, the ‘tree’ model and the ‘population’ model, are presented. They neglect the asynchrony of flowering, respectively, within trees, and between and within trees. Models were fitted and tested using data on flowering of Psidium cattleianum observed at study sites at elevations of 200, 520 and 890 m in Réunion Island. The ‘shoot’ model fitted the data best and reproduced the strong irregularities in flowering shown by empirical data. The asynchrony of flowering in P. cattleianum was more pronounced within than between trees. Simulations showed that various flowering patterns can be reproduced by the ‘shoot’ model. The use of different levels of organization of the general model is discussed. PMID:12234153

  3. Robust set-point regulation for ecological models with multiple management goals.

    PubMed

    Guiver, Chris; Mueller, Markus; Hodgson, Dave; Townley, Stuart

    2016-05-01

    Population managers will often have to deal with problems of meeting multiple goals, for example, keeping at specific levels both the total population and population abundances in given stage-classes of a stratified population. In control engineering, such set-point regulation problems are commonly tackled using multi-input, multi-output proportional and integral (PI) feedback controllers. Building on our recent results for population management with single goals, we develop a PI control approach in a context of multi-objective population management. We show that robust set-point regulation is achieved by using a modified PI controller with saturation and anti-windup elements, both described in the paper, and illustrate the theory with examples. Our results apply more generally to linear control systems with positive state variables, including a class of infinite-dimensional systems, and thus have broader appeal. PMID:26242360
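
    The sketch below shows a discrete-time PI controller with input saturation and a simple anti-windup rule driving a toy stage-structured population toward a target total abundance; the projection matrix, gains and bounds are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: PI set-point regulation of a declining 2-stage population,
# with saturation and anti-windup (integration frozen while saturated).
import numpy as np

A = np.array([[0.0, 0.8],      # toy stage-structured projection matrix (assumption)
              [0.4, 0.5]])
b = np.array([1.0, 0.0])       # management input acts on the first stage
c = np.array([1.0, 1.0])       # goal: regulate the total population
target, kp, ki, u_max = 100.0, 0.02, 0.01, 30.0

x = np.array([10.0, 10.0])
integral = 0.0
for t in range(200):
    error = target - c @ x
    u_unsat = kp * error + ki * integral
    u = np.clip(u_unsat, 0.0, u_max)   # saturation: non-negative, bounded input
    if u == u_unsat:                   # anti-windup: only integrate when unsaturated
        integral += error
    x = A @ x + b * u                  # population update with control input

print(c @ x)   # the total population should settle near the target
```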

  4. Application Description and Policy Model in Collaborative Environment for Sharing of Information on Epidemiological and Clinical Research Data Sets

    PubMed Central

    de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo

    2010-01-01

    Background Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets focusing on epidemiological clinical research in a collaborative environment and (2) create a policy model placing this collaborative environment into the current scientific social context. Methodology The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. Principal Findings The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. Conclusions Based on our empirical observations and resulting model, the social network environment surrounding the application can assist epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating

  5. A comparison of two finite element models of tidal hydrodynamics using a North Sea data set

    USGS Publications Warehouse

    Walters, R.A.; Werner, F.E.

    1989-01-01

    Using the region of the English Channel and the southern bight of the North Sea, we systematically compare the results of two independent finite element models of tidal hydrodynamics. The model intercomparison provides a means for increasing our understanding of the relevant physical processes in the region in question as well as a means for the evaluation of certain algorithmic procedures of the two models. © 1989.

  6. Depriming of arterial heat pipes: An investigation of CTS thermal excursions

    NASA Technical Reports Server (NTRS)

    Antoniuk, D.; Edwards, D. K.

    1980-01-01

    Four thermal excursions of the Transmitter Experiment Package (TEP) were the result of the depriming of the arteries in all three heat pipes in the Variable Conductance Heat Pipe System which cooled the TEP. The determined cause of the depriming of the heat pipes was the formation of bubbles of the nitrogen/helium control gas mixture in the arteries during the thaw portion of a freeze/thaw cycle of the inactive region of the condenser section of the heat pipe. Conditions such as suction freezeout or heat pipe turn-on, which moved these bubbles into the active region of the heat pipe, contributed to the depriming mechanism. Methods for precluding, or reducing the probability of, this type of failure mechanism in future applications of arterial heat pipes are included.

  7. Depriming of arterial heat pipes: An investigation of CTS thermal excursions

    NASA Astrophysics Data System (ADS)

    Antoniuk, D.; Edwards, D. K.

    1980-08-01

    Four thermal excursions of the Transmitter Experiment Package (TEP) were the result of the depriming of the arteries in all three heat pipes in the Variable Conductance Heat Pipe System which cooled the TEP. The determined cause of the depriming of the heat pipes was the formation of bubbles of the nitrogen/helium control gas mixture in the arteries during the thaw portion of a freeze/thaw cycle of the inactive region of the condenser section of the heat pipe. Conditions such as suction freezeout or heat pipe turn-on, which moved these bubbles into the active region of the heat pipe, contributed to the depriming mechanism. Methods for precluding, or reducing the probability of, this type of failure mechanism in future applications of arterial heat pipes are included.

  8. Five scientists on excursion — a picture of marine biology on Helgoland before 1892

    NASA Astrophysics Data System (ADS)

    Zissler, D.

    1995-03-01

    Five scientists on excursion — a picture of marine biology on Helgoland before 1892. The picture, of which several variant poses with minor differences exist, is a photograph taken on Helgoland in September, 1865. The original is to be found in the collections of the Ernst-Haeckel-Haus in Jena. The photograph shows only a few objects and fewer persons, but they are arranged like a bouquet: in front, collecting vessels; behind, grouped around a table, five scientists, Dohrn, Greeff, Haeckel, Salverda, Marchi. They hold up their catching nets like insignia, identifying their basic activity. This photograph is a unique document for the marine biological research on Helgoland before 1892. Furthermore, it illustrates a time and place for the birth of the idea of establishing the world's most famous marine biological station, the Stazione Zoologica di Napoli.

  9. Possible Recording of the Hilina Pali Excursion in Cored Tyrrhenian Sea Sediment

    NASA Astrophysics Data System (ADS)

    Iorio, Marina; Liddicoat, Joseph; Sagnotti, Leonardo; Incoronato, Alberto; de Anteriis, Giovanni; Insinga, Donatella; Angelino, Antimo

    2013-04-01

    First encountered in marine sediment cored from the Gulf of Mexico (19.5˚ N, 267.0˚ E)(Clark and Kennett, 1973), the Hilina Pali Excursion (HPE) is named for a locality in Hawaii (19.5˚ N, 205.0˚ E) where inclination of about negative 40˚ is documented in cored basalt (Teanby et al., 2002). Prior to naming the excursion, Coe et al. (1978) also found shallow inclination in basalt from Kilauea Volcano (19.2˚ N, 204.7˚ E) that is dated at about 18,000 yrs B.P. (uncorrected Carbon-14, Rubin and Berthold, 1961) - the age now assigned to the HPE - and was erupted when the field intensity was reduced to nearly half the present intensity. More recently, the HPE was located at Changbaishan Volcano in northeastern China (40.2˚ N, 128.0˚ E) where the age is established by Ar40/Ar39 dates (Singer et al., 2011). In exposed lake sediments in the Mono Basin, CA (38.0˚ N, 240.8˚ E), shallow positive inclination at about 18,000 yrs B.P. might also be the HPE. In the Mono Basin, normalized (NRM/ARM) intensity is reduced at that time (Zimmerman et al., 2006), and the Virtual Geomagnetic Poles (VGPs) during the reduced intensity form a clockwise trending loop when followed from old to young that descends to 53.8˚ N, 22.7˚ E (n = 6, Alpha-95 = 2.3˚) and is centered at about 50˚ N, 30˚ E (Coe and Liddicoat, 2012). There is a possible excursion of the palaeomagnetic field recorded in marine sediment at a locality in the Tyrrhenian Sea about 25 km south of Ischia (40.5˚ N, 13.7˚ E). The excursion is in sediment from two core segments that span about 22,000-18,000 yrs B.P. (de Alteriis et al., 2010) and occurs as reduced positive inclination (about 50˚) at about 20,000 yrs B.P. that increases to about 80˚ at about 18,000 yrs B.P. when declination changes from west to east. This pattern of field behaviour is similar to the behaviour of the possible HPE in the Mono Basin (Coe and Liddicoat, 2012) and in sediment cored from Lac du Bouchet, FR (44.9˚ N, 3.8˚ E) that is

  10. Search for the standard model Higgs Boson produced in association with top quarks using the full CDF data set.

    PubMed

    Aaltonen, T; Álvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Bae, T; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Bromberg, C; Brucken, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calamba, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chung, W H; Chung, Y S; Ciocci, M A; Clark, A; Clarke, C; Compostella, G; Connors, J; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; Dell'Orso, M; Demortier, L; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, M; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, S; Ershaidat, N; Eusebi, R; Farrington, S; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Funakoshi, Y; Furic, I; Gallinaro, M; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da Costa, J; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kasmi, A; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kim, Y J; Kimura, N; Kirby, M; Klimenko, S; Knoepfel, K; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Limosani, A; Lin, C-J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, H; Liu, Q; Liu, T; Lockwitz, S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Maestro, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Mastrandrea, P; Matera, K; Mattson, M E; Mazzacane, A; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, 
J; Movilla Fernandez, P; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Noh, S Y; Norniella, O; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Poprocki, S; Potamianos, K; Prokoshin, F; Pranko, A; Ptohos, F; Punzi, G; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Riddick, T; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Sakurai, Y; Santi, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shochet, M; Shreyber-Tecker, I; Simonenko, A; Sinervo, P; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Sorin, V; Song, H; Squillacioti, P; Stancari, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Ukegawa, F; Uozumi, S; Varganov, A; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Wester, W C; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamato, D; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhou, C; Zucchelli, S

    2012-11-01

    A search is presented for the standard model Higgs boson produced in association with top quarks using the full Run II proton-antiproton collision data set, corresponding to 9.45 fb⁻¹, collected by the Collider Detector at Fermilab. No significant excess over the expected background is observed, and 95% credibility-level upper bounds are placed on the cross section σ(ttH → lepton + missing transverse energy + jets). For a Higgs boson mass of 125 GeV/c², we expect to set a limit of 12.6 and observe a limit of 20.5 times the standard model rate. This represents the most sensitive search for a standard model Higgs boson in this channel to date. PMID:23215271

  11. Search for the standard model Higgs boson produced in association with top quarks using the full CDF data set

    SciTech Connect

    Aaltonen, T.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.A.; Arisawa, T.; Artikov, A.; /Dubna, JINR /Texas A-M

    2012-08-01

    A search is presented for the standard model Higgs boson produced in association with top quarks using the full Run II proton-antiproton collision data set, corresponding to 9.45 fb⁻¹, collected by the Collider Detector at Fermilab. No significant excess over the expected background is observed, and 95% credibility-level upper bounds are placed on the cross section σ(tt̄H → lepton + missing transverse energy + jets). For a Higgs boson mass of 125 GeV/c², we expect to set a limit of 12.6, and observe a limit of 20.5 times the standard model rate. This represents the most sensitive search for a standard model Higgs boson in this channel to date.

  12. Speleothems Recording Geomagnetic Excursions: a Case Study from Cobre Cave in Northern Spain

    NASA Astrophysics Data System (ADS)

    Pavon-Carrasco, F.; Osete, M. L.; Martin-chivelet, J.; Egli, R.; Rossi, C.; Muñoz-García, B.; Heller, F.

    2013-05-01

    Calcite speleothems, such as stalagmites and flowstones, have an enormous potential in palaeomagnetism, since they may grow continuously through thousands of years, the lock-in of remanent magnetisation is nearly instantaneous and ages of speleothems can be determined using high precision U-series radiometric dating techniques. However, the typically very low concentration of ferromagnetic minerals resulting in very weak natural remanent magnetisation (NRM) has limited their usage. In addition, secondary processes that could affect magnetization are poorly understood. Here we show results from a stalagmite from northern Spain (Cobre Cave) that recorded the Blake geomagnetic excursion. Two types of samples exhibiting different magnetic properties are observed. Isothermal remanent magnetisation (IRM) experiments indicate major contributions from low coercivity minerals in all samples. In white samples only ferrimagnetic minerals are detected whereas in light-brown samples variable amounts of high coercivity minerals can also be observed. The low coercivity IRM is thermally demagnetized at 550°C indicating the presence of magnetite. Maximum unblocking temperatures over 550°C of the high coercivity component suggest the additional presence of haematite in light-brown samples. Upon demagnetisation, all samples exhibited a directionally stable low-coercivity/low-unblocking temperature component that is considered as the characteristic remanent magnetisation (ChRM) carried by fine magnetite. The ChRM exhibited normal and reversed directions recording the Blake Geomagnetic Excursion which could be radiometrically dated between 116.5 ± 0.7 kyr BP and 112.0 ± 1.9 kyr BP. The second component carried by haematite has directions being always close to the present day field direction and is considered as a secondary component. Reliability of relative paleo-intensity (RPI) determinations is discussed.

  13. Two Models for Implementing Senior Mentor Programs in Academic Medical Settings

    ERIC Educational Resources Information Center

    Corwin, Sara J.; Bates, Tovah; Cohan, Mary; Bragg, Dawn S.; Roberts, Ellen

    2007-01-01

    This paper compares two models of undergraduate geriatric medical education utilizing senior mentoring programs. Descriptive, comparative multiple-case study was employed analyzing program documents, archival records, and focus group data. Themes were compared for similarities and differences between the two program models. Findings indicate that…

  14. TPDK, a New Definition of the TPACK Model for a University Setting

    ERIC Educational Resources Information Center

    Bachy, Sylviane

    2014-01-01

    In this paper we propose a new Technopedagogical Disciplinary Knowledge model. This model integrates four separate dimensions, which we use to measure a teacher's effectiveness. These are the individual teacher's discipline (D), personal epistemology (E), pedagogical knowledge (P), and knowledge of technology (T). We also acknowledge the…

  15. An Evidence-Based Practice Model across the Academic and Clinical Settings

    ERIC Educational Resources Information Center

    Wolter, Julie A.; Corbin-Lewis, Kim; Self, Trisha; Elsweiler, Anne

    2011-01-01

    This tutorial is designed to provide academic communication sciences and disorders (CSD) programs, at both the undergraduate and graduate levels, with a comprehensive instructional model on evidence-based practice (EBP). The model was designed to help students view EBP as an ongoing process needed in all clinical decision making. The three facets…

  16. A Digital Tool Set for Systematic Model Design in Process-Engineering Education

    ERIC Educational Resources Information Center

    van der Schaaf, Hylke; Tramper, Johannes; Hartog, Rob J.M.; Vermue, Marian

    2006-01-01

    One of the objectives of the process technology curriculum at Wageningen University is that students learn how to design mathematical models in the context of process engineering, using a systematic problem analysis approach. Students find it difficult to learn to design a model and little material exists to meet this learning objective. For these…

  17. A Model for Describing, Analysing and Investigating Cultural Understanding in EFL Reading Settings

    ERIC Educational Resources Information Center

    Porto, Melina

    2013-01-01

    This article describes a model used to explore cultural understanding in English as a foreign language reading in a developing country, namely Argentina. The model is designed to investigate, analyse and describe EFL readers' processes of cultural understanding in a specific context. Cultural understanding in reading is typically investigated…

  18. A Model for Gathering Stakeholder Input for Setting Research Priorities at the Land-Grant University.

    ERIC Educational Resources Information Center

    Kelsey, Kathleen Dodge; Pense, Seburn L.

    2001-01-01

    A model for collecting and using stakeholder input on research priorities is a modification of Guba and Lincoln's model, involving preevaluation preparation, stakeholder identification, information gathering and analysis, interpretive filtering, and negotiation and consensus. A case study at Oklahoma State University illustrates its applicability…

  19. Models for Building Knowledge in a Technology-Rich Setting: Teacher Education

    ERIC Educational Resources Information Center

    MacKinnon, Gregory R.; Aylward, M. Lynn

    2009-01-01

    Technology offers promising opportunities for creating new types of classroom learning environments. This paper describes three technology models used by teacher education interns: electronic portfolios, negotiative concept mapping, cognote-supported electronic discussions. As implemented in the current study, these models invoke graduated…

  20. Questioning the Effectiveness of Behavior Modeling Training in an Industrial Setting.

    ERIC Educational Resources Information Center

    Russell, James S.; And Others

    1984-01-01

    Investigated the impact of behavior modeling training in an industrial plant on male supervisors (N=44) using Kirkpatrick's (1976) four levels of evaluation. Results indicated no behavior or performance change with behavior modeling and re-emphasized the need to use Kirkpatrick's evaluation method to measure training program effectiveness. (LLL)

  1. Does Rational Selection of Training and Test Sets Improve the Outcome of QSAR Modeling?

    EPA Science Inventory

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external dataset, the best way to validate the predictive ability of a model is to perform its s...

  2. A Practical Skills Model for Effectively Engaging Clients in Multicultural Settings

    ERIC Educational Resources Information Center

    Alberta, Anthony J.; Wood, Anita H.

    2009-01-01

    The Practical Skills Model of Multicultural Engagement represents an attempt to create a means for moving beyond the development of knowledge and awareness into the development of skills that will assist practitioners to practice in a culturally competent manner. The model builds on basic counseling skills, combining them with specific approaches…

  3. The Semi-opened Infrastructure Model (SopIM): A Frame to Set Up an Organizational Learning Process

    NASA Astrophysics Data System (ADS)

    Grundstein, Michel

    In this paper, we introduce the "Semi-opened Infrastructure Model (SopIM)" implemented to deploy Artificial Intelligence and Knowledge-based Systems within a large industrial company. This model illustrates what could be two of the operating elements of the Model for General Knowledge Management within the Enterprise (MGKME) that are essential to set up the organizational learning process that leads people to appropriate and use concepts, methods and tools of an innovative technology: the "Ad hoc Infrastructures" element, and the "Organizational Learning Processes" element.

  4. PECHCV, PECHFV, PEFHCV and PEFHFV: A set of atmospheric, primitive equation forecast models for the Northern Hemisphere, volume 3

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.; Pearce, M. L.

    1976-01-01

    As part of the SEASAT program of NASA, a set of four hemispheric, atmospheric prediction models were developed. The models, which use a polar stereographic grid in the horizontal and a sigma coordinate in the vertical, are: (1) PECHCV - five sigma layers and a 63 x 63 horizontal grid, (2) PECHFV - ten sigma layers and a 63 x 63 horizontal grid, (3) PEFHCV - five sigma layers and a 187 x 187 horizontal grid, and (4) PEFHFV - ten sigma layers and a 187 x 187 horizontal grid. The models and associated computer programs are described.

  5. Global data set of biogenic VOC emissions calculated by the MEGAN model over the last 30 years

    NASA Astrophysics Data System (ADS)

    Sindelarova, K.; Granier, C.; Bouarar, I.; Guenther, A.; Tilmes, S.; Stavrakou, T.; Müller, J.-F.; Kuhn, U.; Stefani, P.; Knorr, W.

    2014-09-01

    The Model of Emissions of Gases and Aerosols from Nature (MEGANv2.1) together with the Modern-Era Retrospective Analysis for Research and Applications (MERRA) meteorological fields were used to create a global emission data set of biogenic volatile organic compounds (BVOC) available on a monthly basis for the time period of 1980-2010. This data set, developed under the Monitoring Atmospheric Composition and Climate project (MACC), is called MEGAN-MACC. The model estimated mean annual total BVOC emission of 760 Tg (C) yr-1 consisting of isoprene (70%), monoterpenes (11%), methanol (6%), acetone (3%), sesquiterpenes (2.5%) and other BVOC species each contributing less than 2%. Several sensitivity model runs were performed to study the impact of different model input and model settings on isoprene estimates and resulted in differences of up to ±17% of the reference isoprene total. A greater impact was observed for a sensitivity run applying parameterization of soil moisture deficit that led to a 50% reduction of isoprene emissions on a global scale, most significantly in specific regions of Africa, South America and Australia. MEGAN-MACC estimates are comparable to results of previous studies. More detailed comparison with other isoprene inventories indicated significant spatial and temporal differences between the data sets especially for Australia, Southeast Asia and South America. MEGAN-MACC estimates of isoprene, α-pinene and group of monoterpenes showed a reasonable agreement with surface flux measurements at sites located in tropical forests in the Amazon and Malaysia. The model was able to capture the seasonal variation of isoprene emissions in the Amazon forest.

  6. Development of a Model of Care for Rehabilitation of People Living With HIV in a Semirural Setting in South Africa

    PubMed Central

    Hanass-Hancock, Jill

    2014-01-01

    Background Human immunodeficiency virus continues to challenge health care professionals even after the rollout of antiretroviral therapy. South Africa, among the countries worst affected by the pandemic, has seen the effect of people living longer but facing the disabling effects of both the virus and the associated impairments of antiretroviral therapy. Rehabilitation within the evolving context of the disease has changed its focus from the impairment of the individual to the participation restriction within a person's daily life. Offering a continuum of coordinated, multilevel, multidisciplinary, evidence-based rehabilitation within health care will promote its prominence in health care structures. Objective This study aims to develop a model of care within a health care structure, using a semi-rural African setting as an example. Methods The study will employ mixed methods, using a Learning in Action approach to the rehabilitation of people living with HIV (PLHIV) at the study setting. The Delphi technique, a multistage consensus method, will be used to obtain feedback from a number of local experts relevant to the field of rehabilitation of people living with HIV. The study will also involve various stakeholders such as the multidisciplinary health care team (doctors, physiotherapists, occupational therapists, dieticians, speech and language therapists, social workers, midlevel workers, community health care workers); department of health representative(s); site-affiliated nongovernmental organization representative(s); and service users at the study setting. Results Once a proposed model of care is derived, the model will be assessed for rigour and piloted at the study setting. Conclusions The development of a model of care in rehabilitation for PLHIV in a health care setting is aimed to provide an example of a continuum of coordinated service throughout the disease trajectory. The assumption is that the burden on the health care system will be

  7. Resolving the age of Wilson Creek Formation tephras and the Mono Lake excursion using high-resolution SIMS dating of allanite and zircon rims

    NASA Astrophysics Data System (ADS)

    Vazquez, J. A.; Lidzbarski, M. I.

    2012-12-01

    Sediments of the Wilson Creek Formation surrounding Mono Lake preserve a high-resolution archive of glacial and pluvial responses along the eastern Sierra Nevada due to late Pleistocene climate change. An absolute chronology for the Wilson Creek stratigraphy is critical for correlating the paleoclimate record to other archives in the western U.S. and the North Atlantic region. However, multiple attempts to date the Wilson Creek stratigraphy using carbonates and interbedded rhyolitic tephras yield discordant 14C and 40Ar/39Ar results due to open-system effects, carbon reservoir uncertainties, as well as abundant xenocrysts entrained during eruption. Ion microprobe (SIMS) 238U-230Th dating of the final increments of crystallization recorded by allanite and zircon autocrysts from juvenile pyroclasts yields ages that effectively date eruption of key tephra beds and resolve age uncertainties about the Wilson Creek stratigraphy. To date the final several micrometers of crystal growth, individual allanite and zircon crystals were embedded in soft indium to allow sampling of unpolished rims. Isochron ages derived from rims on coexisting allanite and zircon (± glass) from hand-selected pumiceous pyroclasts delimit the timing of Wilson Creek sedimentation between Ashes 7 and 19 (numbering of Lajoie, 1968) to the interval between ca. 27 to ca. 62 ka. The interiors of individual allanite and zircon crystals sectioned in standard SIMS mounts yield model 238U-230Th ages that are mostly <10 k.y. older than their corresponding rim age, suggesting a relatively brief interval of allanite + zircon crystallization before eruption. A minority of allanite and zircon crystals yield rim and interior model ages of ca. 90-100 ka, and are likely to be antecrysts recycled from relatively early Mono Craters volcanism and/or intrusions. Tephra (Ash 15) erupted during the geomagnetic excursion originally designated the Mono Lake excursion yields a rim isochron age of ca. 41 ka indicating that

  8. Using fuzzy sets to model the uncertainty in the fault location process of distribution networks

    SciTech Connect

    Jaerventausta, P.; Verho, P.; Partanen, J. )

    1994-04-01

    In the computerized fault diagnosis of distribution networks the heuristic knowledge of the control center operators can be combined with the information obtained from the network data base and SCADA system. However, the nature of the heuristic knowledge is inexact and uncertain. Also the information obtained from the remote control system contains uncertainty and may be incorrect, conflicting or inadequate. This paper proposes a method based on fuzzy set theory to deal with the uncertainty involved in the process of locating faults in distribution networks. The method is implemented in a prototype version of the distribution network operation support system.
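
    A minimal sketch of the kind of fuzzy combination of uncertain evidence described above is given below; the section names, membership values and min/max operators are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: ranking candidate faulted sections by combining uncertain
# SCADA indications and heuristic operator knowledge with fuzzy operators.

def fuzzy_and(*memberships):
    """Fuzzy conjunction (minimum operator)."""
    return min(memberships)

def fuzzy_or(*memberships):
    """Fuzzy disjunction (maximum operator)."""
    return max(memberships)

# Degree of belief that each section is faulted, from two sources
# (section names and values are purely illustrative).
relay_evidence    = {"S1": 0.8, "S2": 0.6, "S3": 0.2}   # remote control / SCADA
operator_evidence = {"S1": 0.5, "S2": 0.9, "S3": 0.4}   # heuristic operator rules

# A section is suspect if both sources point to it; rank the candidates.
combined = {s: fuzzy_and(relay_evidence[s], operator_evidence[s])
            for s in relay_evidence}
ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)   # e.g. [('S2', 0.6), ('S1', 0.5), ('S3', 0.2)]
```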

  9. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C., Jr.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
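
    As a small illustration of the GLM side of this framework, the sketch below fits a logistic GLM with a quadratic term for a synthetic presence/absence response using statsmodels; the variables and data are invented and are not from the workshop papers.

```python
# Hedged sketch: logistic GLM for a species-distribution-type response.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
elevation = rng.uniform(200, 2000, 500)
rainfall = rng.uniform(300, 1500, 500)

# Synthetic presence/absence with a unimodal elevation response
logit = -4 + 0.006 * elevation - 2e-6 * elevation**2 + 0.002 * rainfall
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GLM: binomial family with a quadratic elevation term (a simple parametric
# alternative to a GAM smooth)
X = sm.add_constant(np.column_stack([elevation, elevation**2, rainfall]))
glm = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(glm.summary())
```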

  10. A Multilevel Regression Model for Geographical Studies in Sets of Non-Adjacent Cities

    PubMed Central

    Marí-Dell’Olmo, Marc; Martínez-Beneito, Miguel Ángel

    2015-01-01

    In recent years, small-area-based ecological regression analyses have been published that study the association between a health outcome and a covariate in several cities. These analyses have usually been performed independently for each city and have therefore yielded unrelated estimates for the cities considered, even though the same process has been studied in all of them. In this study, we propose a joint ecological regression model for multiple cities that accounts for spatial structure both within and between cities and explore the advantages of this model. The proposed model merges both disease mapping and geostatistical ideas. Our proposal is compared with two alternatives, one that models the association for each city as fixed effects and another that treats them as independent and identically distributed random effects. The proposed model allows us to estimate the association (and assess its significance) at locations with no available data. Our proposal is illustrated by an example of the association between unemployment (as a deprivation surrogate) and lung cancer mortality among men in 31 Spanish cities. In this example, the associations found were far more accurate for the proposed model than those from the fixed effects model. Our main conclusion is that ecological regression analyses can be markedly improved by performing joint analyses at several locations that share information among them. This finding should be taken into consideration in the design of future epidemiological studies. PMID:26308613
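
    The sketch below fits a simplified multilevel analogue of the joint model described above, with a random intercept and a random slope for the deprivation surrogate across cities; it ignores the within-city spatial (disease-mapping) structure of the proposed model, and the data and city labels are synthetic.

```python
# Hedged sketch: random-intercept, random-slope model across cities.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
cities = np.repeat(np.arange(10), 40)            # 10 hypothetical cities, 40 areas each
unemployment = rng.normal(10, 3, size=cities.size)
city_slope = rng.normal(0.05, 0.02, size=10)     # city-specific associations
mortality = 2 + city_slope[cities] * unemployment + rng.normal(0, 0.3, cities.size)
data = pd.DataFrame({"mortality": mortality, "unemployment": unemployment,
                     "city": cities})

# Partial pooling of the unemployment-mortality association across cities
model = smf.mixedlm("mortality ~ unemployment", data,
                    groups=data["city"], re_formula="~unemployment")
print(model.fit().summary())
```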

  11. TIME SERIES MODELS OF THREE SETS OF RXTE OBSERVATIONS OF 4U 1543-47

    SciTech Connect

    Koen, C.

    2013-03-01

    The X-ray nova 4U 1543-47 was in a different physical state (low/hard, high/soft, and very high) during the acquisition of each of the three time series analyzed in this paper. Standard time series models of the autoregressive moving average (ARMA) family are fitted to these series. The low/hard data can be adequately modeled by a simple low-order model with fixed coefficients, once the slowly varying mean count rate has been accounted for. The high/soft series requires a higher order model, or an ARMA model with variable coefficients. The very high state is characterized by a succession of 'dips', with roughly equal depths. These seem to appear independently of one another. The underlying stochastic series can again be modeled by an ARMA form, or roughly as the sum of an ARMA series and white noise. The structuring of each model in terms of short-lived aperiodic and 'quasi-periodic' components is discussed.
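
    A minimal sketch of fitting a low-order ARMA model to a detrended count-rate series with statsmodels is shown below; the simulated series and the chosen orders are illustrative and do not reproduce the RXTE analysis.

```python
# Hedged sketch: ARMA fit to a simulated, detrended count-rate series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

# Simulate an ARMA(2,1) series standing in for detrended count rates
ar, ma = np.array([1, -0.6, 0.2]), np.array([1, 0.4])
counts = 100 + ArmaProcess(ar, ma).generate_sample(2000)

detrended = counts - counts.mean()               # or subtract a slowly varying mean
fit = ARIMA(detrended, order=(2, 0, 1)).fit()    # ARMA(p, q) = ARIMA(p, 0, q)
print(fit.aic, fit.params)
```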

  12. Spatial distribution of ultrafine particles in urban settings: A land use regression model

    NASA Astrophysics Data System (ADS)

    Rivera, Marcela; Basagaña, Xavier; Aguilera, Inmaculada; Agis, David; Bouso, Laura; Foraster, Maria; Medina-Ramón, Mercedes; Pey, Jorge; Künzli, Nino; Hoek, Gerard

    2012-07-01

    Background The toxic effects of ultrafine particles (UFP) are a public health concern. However, epidemiological studies on the long term effects of UFP are limited due to lacking exposure models. Given the high spatial variation of UFP, the assignment of exposure levels in epidemiological studies requires a fine spatial scale. The aim of this study was to assess the performance of a short-term measurement protocol used at a large number of locations to derive a land use regression (LUR) model of the spatial variation of UFP in Girona, Spain. Methods We measured UFP for 15 min on the sidewalk of 644 participants' homes in 12 towns of Girona province (Spain). The measurements were done during non-rush traffic hours 9:15-12:45 and 15:15-16:45 during 32 days between June 15 and July 31, 2009. In parallel, we counted the number of vehicles driving in both directions. Measurements were repeated on a different day for a subset of 25 sites in Girona city. Potential predictor variables such as building density, distance to bus lines and land cover were derived using geographic information systems. We adjusted for temporal variation using daily mean NOx concentrations at a central monitor. Land use regression models for the entire area (Core model) and for individual towns were derived using a supervised forward selection algorithm. Results The best predictors of UFP were traffic intensity, distance to nearest major crossroad, area of high density residential land and household density. The LUR Core model explained 36% of UFP total variation. Adding sampling date and hour of the day to the Core model increased the R2 to 51% without changing the regression slopes. Local models included predictor variables similar to those in the Core model, but performed better with an R2 of 50% in Girona city. Independent LUR models for the first and second measurements at the subset of sites with repetitions had R2's of about 47%. When the mean of the two measurements was used R2 improved to
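
    The sketch below shows a land-use-regression-style fit with supervised forward selection of predictors, in the spirit of the Core model described above; the predictor names, data and number of selected variables are placeholders, not the Girona variables or results.

```python
# Hedged sketch: forward-selected linear land use regression for UFP.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
data = pd.DataFrame({
    "traffic_intensity": rng.gamma(2, 500, n),
    "dist_major_crossroad": rng.uniform(5, 500, n),
    "residential_density": rng.uniform(0, 1, n),
    "household_density": rng.gamma(2, 100, n),
})
ufp = (20000 + 8 * data["traffic_intensity"]
       - 15 * data["dist_major_crossroad"]
       + 4000 * data["residential_density"]
       + rng.normal(0, 4000, n))

selector = SequentialFeatureSelector(LinearRegression(),
                                     n_features_to_select=3, direction="forward")
selector.fit(data, ufp)
chosen = data.columns[selector.get_support()]
r2 = cross_val_score(LinearRegression(), data[chosen], ufp, cv=5, scoring="r2").mean()
print(list(chosen), round(r2, 2))
```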

  13. REGRESSION APPROXIMATIONS FOR TRANSPORT MODEL CONSTRAINT SETS IN COMBINED AQUIFER SIMULATION-OPTIMIZATION STUDIES.

    USGS Publications Warehouse

    Alley, William M.

    1986-01-01

    Problems involving the combined use of contaminant transport models and nonlinear optimization schemes can be very expensive to solve. This paper explores the use of transport models with ordinary regression and regression on ranks to develop approximate response functions of concentrations at critical locations as a function of pumping and recharge at decision wells. These response functions combined with other constraints can often be solved very easily and may suggest reasonable starting points for combined simulation-management modeling or even relatively efficient operating schemes in themselves.
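
    The sketch below illustrates the general idea of replacing a transport model with a regression response function and embedding it as a constraint in an optimization problem; the stand-in simulator, the concentration limit and the two-well setup are hypothetical.

```python
# Hedged sketch: regression response function of concentration vs. pumping,
# used as a linear constraint when maximizing total pumping.
import numpy as np
from scipy.optimize import linprog

def transport_model(pumping):
    """Placeholder for an expensive transport simulator (illustrative only)."""
    return 5.0 + 0.8 * pumping[0] + 0.3 * pumping[1]   # concentration at a critical location

# Build the regression approximation from a handful of simulator runs
rng = np.random.default_rng(4)
samples = rng.uniform(0, 10, size=(20, 2))
conc = np.array([transport_model(p) for p in samples])
X = np.column_stack([np.ones(len(samples)), samples])
beta = np.linalg.lstsq(X, conc, rcond=None)[0]          # [intercept, b1, b2]

# Maximize total pumping subject to predicted concentration <= 12 (illustrative)
res = linprog(c=[-1, -1],                               # maximize q1 + q2
              A_ub=[beta[1:]], b_ub=[12.0 - beta[0]],
              bounds=[(0, 10), (0, 10)])
print(res.x, beta[0] + beta[1:] @ res.x)
```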

  14. Litho-kinematic facies model for large landslide deposits in arid settings

    SciTech Connect

    Yarnold, J.C.; Lombard, J.P.

    1989-04-01

    Reconnaissance field studies of six large landslide deposits in the S. Basin and Range suggest that a set of characteristic features is common to the deposits of large landslides in an arid setting. These include a coarse boulder cap, an upper massive zone, a lower disrupted zone, and a mixed zone overlying disturbed substrate. The upper massive zone is dominated by crackle breccia. This grades downward into a lower disrupted zone composed of a more matrix-rich breccia that is internally sheared, intruded by clastic dikes, and often contains a cataclasite layer at its base. An underlying discontinuous mixed zone is composed of material from the overlying breccia mixed with material entrained from the underlying substrate. Bedding in the substrate sometimes displays folding and contortion that die out downward. The authors' work suggests a spatial zonation of these characteristic features within many landslide deposits. In general, clastic dikes, the basal cataclasite, and folding in the substrate are observed mainly in distal parts of landslides. In most cases, total thickness, thickness of the basal disturbed and mixed zones, and the degree of internal shearing increase distally, whereas maximum clast size commonly decreases distally. Zonation of these features is interpreted to result from kinematics of emplacement that cause generally increased deformation in the distal regions of the landslide.

  15. Deconstructing myths, building alliances: a networking model to enhance tobacco control in hospital mental health settings.

    PubMed

    Ballbè, Montse; Gual, Antoni; Nieva, Gemma; Saltó, Esteve; Fernández, Esteve

    2016-01-01

    Life expectancy for people with severe mental disorders is up to 25 years less in comparison to the general population, mainly due to diseases caused or worsened by smoking. However, smoking is usually a neglected issue in mental healthcare settings. The aim of this article is to describe a strategy to improve tobacco control in the hospital mental healthcare services of Catalonia (Spain). To bridge this gap, the Catalan Network of Smoke-free Hospitals launched a nationwide bottom-up strategy in Catalonia in 2007. The strategy relied on the creation of a working group of key professionals from various hospitals -the early adopters- based on Rogers' theory of the Diffusion of Innovations. In 2016, the working group is composed of professionals from 17 hospitals (70.8% of all hospitals in the region with mental health inpatient units). Since 2007, tobacco control has improved in different areas such as increasing mental health professionals' awareness of smoking, training professionals on smoking cessation interventions and achieving good compliance with the national smoking ban. The working group has produced and disseminated various materials, including clinical practice and best practice guidelines, implemented smoking cessation programmes and organised seminars and training sessions on smoking cessation measures in patients with mental illnesses. The next challenge is to ensure effective follow-up for smoking cessation after discharge. While some areas of tobacco control within these services still require significant improvement, the aforementioned initiative promotes successful tobacco control in these settings. PMID:27325123

  16. Brief Strategic Family Therapy: Implementing evidence-based models in community settings

    PubMed Central

    Szapocznik, José; Muir, Joan A.; Duff, Johnathan H.; Schwartz, Seth J.; Brown, C. Hendricks

    2014-01-01

    Reflecting a nearly 40-year collaborative partnership between clinical researchers and clinicians, the present article reviews the authors' experience in developing, investigating, and implementing the Brief Strategic Family Therapy (BSFT) model. The first section of the article focuses on the theory, practice, and studies related to this evidence-based family therapy intervention targeting adolescent drug abuse and delinquency. The second section focuses on the implementation model created for the BSFT intervention, a model that parallels many of the recommendations advanced in the implementation science literature. Specific challenges encountered during the BSFT implementation process are reviewed, along with ways of conceptualizing and addressing these challenges from a systemic perspective. The implementation approach that we employ uses the same systemic principles and intervention techniques as those that underlie the BSFT model itself. Recommendations for advancing the field of implementation science, based on our on-the-ground experiences, are proposed. PMID:24274187

  17. Climatically Diverse Data Set for Flat-Plate PV Module Model Validations (Presentation)

    SciTech Connect

    Marion, B.

    2013-05-01

    Photovoltaic (PV) module I-V curves were measured at Florida, Colorado, and Oregon locations to provide data for the validation and development of models used for predicting the performance of PV modules.

  18. Using a set of GM(1,1) models to predict values of diagnostic symptoms

    NASA Astrophysics Data System (ADS)

    Tabaszewski, Maciej; Cempel, Czeslaw

    2015-02-01

    The main purpose of this study is to develop a methodology for predicting values of vibration symptoms of fan mills in a combined heat and power (CHP) plant. The study was based on grey system theory and GM(1,1) prognostic models with different window sizes for estimating model parameters. Such models have a number of features that are desirable given the characteristics of the data collected by the diagnostic system. When a moving window is used, GM(1,1) models become adaptive; however, selecting an inappropriate window size can result in excessive forecast errors. The present study proposes three possible methods that can be used in automated diagnostic systems to counteract the excessive increase in the forecast error. A comparative analysis of their performance was conducted using data from fan mills in order to select the method which minimises the forecast error.
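    The sketch below shows one way a GM(1,1) forecast with a moving estimation window can be implemented, in the spirit of the approach described above; the symptom series and window size are illustrative assumptions, not the CHP plant data.

```python
# Minimal sketch of a GM(1,1) forecast with a moving estimation window.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) on the window x0 and return `steps`-ahead forecasts."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters
    k = np.arange(n, n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                              # de-accumulated forecast

# Hypothetical vibration symptom series; forecast the next value using only
# the last `window` observations (the moving-window variant).
symptom = [2.1, 2.3, 2.2, 2.6, 2.8, 3.1, 3.0, 3.4]
window = 5
print(gm11_forecast(symptom[-window:], steps=1))
```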

  19. Analysis of energy balance models using the ERBE data set. Final Report

    SciTech Connect

    Graves, C.E.; North, G.R.

    1991-04-01

    A review of Energy Balance Models is presented. Results from the Outgoing Longwave Radiation parameterization are discussed. The albedo parameterizations and the consequences of the new parameterizations are examined.

  20. Interpretation of Lidar and Satellite Data Sets Using a Global Photochemical Model

    NASA Technical Reports Server (NTRS)

    Zenker, Thomas; Chyba, Thomas

    1999-01-01

    A primary goal of the NASA Tropospheric Chemistry Program (TCP) is to "contribute substantially to scientific understanding of human impacts on the global troposphere". In order to analyze global or regional trends and factors of tropospheric chemistry, for example its oxidation capacity or composition, continuous global/regional data coverage as well as model simulations are needed. The Global Tropospheric Experiment (GTE), a major component of the TCP, provides data vital to these questions via aircraft measurements of key trace chemical species in various remote regions of the world. Another component of NASA's effort is a set of satellite projects for exploration of tropospheric chemistry and dynamics; a unique data product is the Tropospheric Ozone Residual (TOR), which provides global tropospheric ozone data. A further key research tool is simulation studies of atmospheric chemistry and dynamics, used for the theoretical understanding of the atmosphere, the extrapolation of observed trends, and sensitivity studies assessing a changing anthropogenic impact on air chemistry and climate. In the context of model simulations, field data derived from satellites or (airborne) field missions are needed for two purposes: (1) to initialize and validate model simulations, and (2) to interpret field data by comparison with model simulation results, in order to analyze global or regional trends and deviations from standard tropospheric chemistry and transport conditions as defined by the simulations. Currently, neither sufficient global data coverage nor well-established global circulation models are available. The NASA LARC CTM is not yet able to perform an adequate tropospheric chemistry simulation, so the current research under this cooperative agreement focuses on utilizing field data products for direct interpretation. These data will also be available for model testing and later interpretation once a suitable model is in place.

  1. Comparing Regional Climate Model output to observational data sets for extreme rainfall

    NASA Astrophysics Data System (ADS)

    Sunyer, M. A.; Sørup, H. J. D.; Madsen, H.; Rosbjerg, D.; Arnbjerg-Nielsen, K.

    2012-04-01

    Climate model projections of changes in extreme rainfall are highly uncertain. In general, the analysis of model performance is the first step in studies that attempt to deal with this uncertainty. Model performance is often measured by comparing statistical properties of climate model output with observational data. However, in the assessment of model performance regarding extreme rainfall, the use of different observational datasets might lead to different conclusions. Rainfall data are often available either as point measurements or as interpolated gridded data. Point measurements result in a spatially uneven dataset, while gridded data obtained from the interpolation of point measurements provide values on an evenly distributed grid. Measurements of extreme rainfall events may be highly uncertain and underestimation is generally expected; furthermore, in gridded data extreme rainfall events tend to be smoothed due to the interpolation process. In addition, small variations in space and time of observed and modelled extremes may have a large impact on the assessment. The present study assesses the effect of the choice and interpretation of observation datasets on the conclusions drawn regarding the ability of Regional Climate Models (RCMs) to reproduce extreme events. Daily extreme rainfall over Denmark from an ensemble of RCMs is compared to three different observational datasets. The observational data considered are a point measurement dataset (ECA&D), a gridded dataset (E-Obs) and a re-analysis dataset (ERA-Interim). The results are compared with other recent studies considering climate model rainfall extremes. The study shows that climate change studies dealing with extreme rainfall must account for the effects and uncertainties arising from the use of different sources of observations in order to avoid overconfident and misleading conclusions.
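    As a toy version of the comparison described above, the sketch below derives annual daily-rainfall maxima and empirical return levels from one modelled and two observational-style series; all series are synthetic stand-ins, whereas the actual study uses ECA&D, E-Obs and ERA-Interim data over Denmark.

```python
# Minimal sketch: compare annual daily-rainfall maxima between an RCM series
# and two observational-style series. All series are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
years = 30
rcm = rng.gamma(shape=2.0, scale=12.0, size=(years, 365))        # modelled daily rainfall (mm)
obs_point = rng.gamma(shape=2.0, scale=14.0, size=(years, 365))  # point-measurement style
obs_grid = rng.gamma(shape=2.0, scale=11.0, size=(years, 365))   # smoother gridded style

def annual_maxima(daily):
    return daily.max(axis=1)

for name, data in [("RCM", rcm), ("point obs", obs_point), ("gridded obs", obs_grid)]:
    am = annual_maxima(data)
    # Empirical 2-, 10- and 20-year return levels from the annual maxima
    levels = np.quantile(am, [0.5, 0.9, 0.95])
    print(f"{name:12s} mean AM = {am.mean():6.1f} mm, return levels = {np.round(levels, 1)}")
```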

  2. Comparing complementary NWP model performance for hydrologic forecasting for the river Rhine in an operational setting

    NASA Astrophysics Data System (ADS)

    Davids, Femke; den Toom, Matthijs

    2016-04-01

    This paper investigates the performance of complementary NWP models for hydrologic forecasting for the river Rhine, a large river catchment in Central Europe. An operational forecasting system, RWsOS-Rivieren, produces daily forecasts of discharges and water levels at the Water Management Centre Netherlands. A combination of HBV (rainfall-runoff) and SOBEK (hydrodynamic routing) models is used to produce simulations and forecasts for the catchment. Data assimilation is applied both to the model state of SOBEK and to model outputs. The primary function of the operational forecasting system is to provide reliable and accurate forecasts during periods of high water. The secondary function is to produce daily predictions for water management and water transport in The Netherlands. In addition, predicting water levels during drought periods is becoming increasingly important. At this moment, several complementary deterministic and ensemble NWP models, such as ICON, ICON-EU Nest, ECMWF-DET, ECMWF-EPS, HiRLAM, COSMO-LEPS and GLAMEPS, are used to provide the forecasters with predictions based on varied initial conditions. ICON and ICON-EU have recently replaced DWD-GME and DWD COSMO-EU. These models provide weather forecasts with different lead times and have been in operational use for different periods, so a direct, quantitative comparison is challenging. Nevertheless, it is important to investigate the suitability of the different NWP models for certain lead times and certain weather situations, to help the hydrological forecasters make informed forecasts during an operational crisis. A hindcast study will investigate the performance of these models in the operational system for different lead times, focusing on periods of both high and low water at Lobith, where the river Rhine enters The Netherlands.
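    A hindcast evaluation of this kind often reduces to computing a skill score per lead time and per NWP forcing; the sketch below does this with RMSE on synthetic water-level series, and the model names are reused purely as labels for hypothetical data.

```python
# Minimal sketch: RMSE of forecast water levels per lead time, per NWP forcing.
# Observations and forecasts are synthetic placeholders, not Lobith hindcasts.
import numpy as np

rng = np.random.default_rng(2)
n_events, max_lead = 40, 10                       # 40 hindcast events, lead times 1-10 days
obs = rng.normal(12.0, 1.5, size=(n_events, max_lead))  # "observed" water levels (m)
forecasts = {
    "ICON-EU": obs + rng.normal(0, 0.2 * np.arange(1, max_lead + 1), size=(n_events, max_lead)),
    "ECMWF-EPS mean": obs + rng.normal(0, 0.15 * np.arange(1, max_lead + 1), size=(n_events, max_lead)),
}

for model, fc in forecasts.items():
    rmse = np.sqrt(np.mean((fc - obs) ** 2, axis=0))  # one RMSE value per lead time
    print(model, np.round(rmse, 2))
```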

  3. Interpretation of lidar and satellite data sets using a global photochemical model

    NASA Technical Reports Server (NTRS)

    Chyba, Thomas; Zenker, Thomas (Principal Investigator)

    1996-01-01

    At the beginning of the report period, the existing General Circulation Model (GCM) was running with a chemistry module configured for stratospheric simulation studies. The chemistry simulation did not perform adequately in the troposphere, and tropospheric trace gas sources and dry deposition sinks were not yet incorporated. The chemistry module has since been modified to also simulate tropospheric chemistry, with resulting mixing ratios close to those of other model simulations, as described in Olson et al. (1996). The mechanism to incorporate trace gas sources and dry deposition sinks, covering H2O2, CH3OOH, O3, HCHO, HNO3, and NO2, has been implemented and is currently being tested. Existing model and development versions include: the full GCM, currently still running with the original stratospheric chemistry module; an off-line version of the GCM, in which winds and photolysis rates are pre-calculated and prescribed in read-in arrays; a box-model version of the modified chemistry module for development and first tests of new modifications; and a box model with the same chemistry but flexible partitioning and integration methods, for testing those. The grantee's work focused mainly on three areas: trace gas sources and dry deposition; a method to introduce a NO source instead of a NO(x) source; and integration methods.

  4. Depositional sequence stratigraphy and architecture of the cretaceous ferron sandstone: Implications for coal and coalbed methane resources - A field excursion

    USGS Publications Warehouse

    Garrison, J.R., Jr.; Van Den, Bergh, T. C. V.; Barker, C.E.; Tabet, D.E.

    1997-01-01

    This Field Excursion will visit outcrops of the fluvial-deltaic Upper Cretaceous (Turonian) Ferron Sandstone Member of the Mancos Shale, known as the Last Chance delta or Upper Ferron Sandstone. This field guide and the field stops will outline the architecture and depositional sequence stratigraphy of the Upper Ferron Sandstone clastic wedge and explore the stratigraphic positions and compositions of major coal zones. The implications of the architecture and stratigraphy of the Ferron fluvial-deltaic complex for coal and coalbed methane resources will be discussed. Early works suggested that the southwesterly derived deltaic deposits of the upper Ferron Sandstone clastic wedge were a Type-2 third-order depositional sequence, informally called the Ferron Sequence. These works suggested that the Ferron Sequence is separated by a type-2 sequence boundary from the underlying 3rd-order Hyatti Sequence, which has its sediment source from the northwest. Within the 3rd-order depositional sequence, the deltaic events of the Ferron clastic wedge, recognized as parasequence sets, appear to be stacked into progradational, aggradational, and retrogradational patterns reflecting a generally decreasing sediment supply during an overall slow sea-level rise. The architecture of both near-marine facies and non-marine fluvial facies exhibit well defined trends in response to this decrease in available sediment. Recent studies have concluded that, unless coincident with a depositional sequence boundary, regionally extensive coal zones occur at the tops of the parasequence sets within the Ferron clastic wedge. These coal zones consist of coal seams and their laterally equivalent fissile carbonaceous shales, mudstones, and siltstones, paleosols, and flood plain mudstones. Although the compositions of coal zones vary along depositional dip, the presence of these laterally extensive stratigraphic horizons, above parasequence sets, provides a means of correlating and defining the tops

  5. Micro-electro-mechanical systems/near-infrared validation of different sampling modes and sample sets coupled with multiple models.

    PubMed

    Wu, Zhisheng; Shi, Xinyuan; Wan, Guang; Xu, Manfei; Zhan, Xueyan; Qiao, Yanjiang

    2015-01-01

    The aim of the present study was to demonstrate the reliability of micro-electro-mechanical systems/near-infrared technology by investigating analytical models built with two sampling modes (integrating sphere and fiber optic probe) and different sample sets. Baicalin in Yinhuang tablets was used as an example, and the experimental procedure included the optimization of spectral pretreatments, selection of wavelength regions using interval partial least squares and moving window partial least squares, and validation of the method using an accuracy profile. The results demonstrated that models using the integrating sphere mode are better than those using the fiber optic probe mode. Spectra acquired with the fiber optic probe tend to be more susceptible to interfering information because the intensity of the incident light is significantly weaker for the fiber optic probe than for the integrating sphere. According to the test-set validation of method parameters such as accuracy, precision, risk, and linearity, variable selection made no significant difference to performance compared with the full-spectrum model. The performance of models whose sample sets ranged widely in concentration (i.e., 1-4%) was better than that of models whose samples had relatively narrow ranges (i.e., 1-2%). The establishment and validation of this method help clarify the analytical guidelines in Chinese herbal medicine for the two sampling modes and different sample sets in the micro-electro-mechanical systems/near-infrared technique. PMID:25626144
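    A moving-window PLS wavelength selection like the one referenced above can be sketched as follows; the spectra, reference values, window width and component count are arbitrary placeholders rather than the Yinhuang tablet data.

```python
# Minimal sketch of moving-window PLS wavelength-region selection.
# Spectra and reference values are random placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 200))                             # 60 NIR spectra, 200 wavelength channels
y = X[:, 80:100].mean(axis=1) + rng.normal(0, 0.05, 60)    # surrogate for baicalin content

window, best = 20, None
for start in range(0, X.shape[1] - window + 1, 5):
    Xw = X[:, start:start + window]                        # one candidate wavelength window
    score = cross_val_score(PLSRegression(n_components=3), Xw, y,
                            cv=5, scoring="neg_root_mean_squared_error").mean()
    if best is None or score > best[0]:
        best = (score, start)

print(f"best window starts at channel {best[1]}, CV RMSE = {-best[0]:.3f}")
```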

  6. Tuning the Field Trip: Audio-Guided Tours as a Replacement for 1-Day Excursions in Human Geography

    ERIC Educational Resources Information Center

    Wissmann, Torsten

    2013-01-01

    Educators are experiencing difficulties with 1-day field trips in human geography. Instead of teaching students how to apply theory in the field and learn to "sense" geography in everyday life, many excursions have degraded into tourist-like events where lecturers try to motivate rather passive students against a noisy urban backdrop.…

  7. Characteristics of the Nordic Seas overflows in a set of Norwegian Earth System Model experiments

    NASA Astrophysics Data System (ADS)

    Guo, Chuncheng; Ilicak, Mehmet; Bentsen, Mats; Fer, Ilker

    2016-08-01

    Global ocean models with an isopycnic vertical coordinate are advantageous in representing overflows, as they do not suffer from topography-induced spurious numerical mixing commonly seen in geopotential coordinate models. In this paper, we present a quantitative diagnosis of the Nordic Seas overflows in four configurations of the Norwegian Earth System Model (NorESM) family that features an isopycnic ocean model. For intercomparison, two coupled ocean-sea ice and two fully coupled (atmosphere-land-ocean-sea ice) experiments are considered. Each pair consists of a (non-eddying) 1° and a (eddy-permitting) 1/4° horizontal resolution ocean model. In all experiments, overflow waters remain dense and descend to the deep basins, entraining ambient water en route. Results from the 1/4° pair show similar behavior in the overflows, whereas the 1° pair shows distinct differences, including temperature/salinity properties, volume transport (Q), and large scale features such as the strength of the Atlantic Meridional Overturning Circulation (AMOC). The volume transport of the overflows and degree of entrainment are underestimated in the 1° experiments, whereas in the 1/4° experiments, there is a two-fold downstream increase in Q, which matches observations well. In contrast to the 1/4° experiments, the coarse 1° experiments do not capture the inclined isopycnals of the overflows or the western boundary current off the Flemish Cap. In all experiments, the pathway of the Iceland-Scotland Overflow Water is misrepresented: a major fraction of the overflow proceeds southward into the West European Basin, instead of turning westward into the Irminger Sea. This discrepancy is attributed to excessive production of Labrador Sea Water in the model. The mean state and variability of the Nordic Seas overflows have significant consequences on the response of the AMOC, hence their correct representation is of vital importance in global ocean and climate modelling.

  8. Groundwater flow pattern and related environmental phenomena in complex geologic setting based on integrated model construction

    NASA Astrophysics Data System (ADS)

    Tóth, Ádám; Havril, Tímea; Simon, Szilvia; Galsa, Attila; Monteiro Santos, Fernando A.; Müller, Imre; Mádl-Szőnyi, Judit

    2016-08-01

    Groundwater flow, driven and controlled by topography, geology and climate, is responsible for several natural surface manifestations and is affected by anthropogenic processes. Flowing groundwater can therefore be regarded as an environmental agent. Numerical simulation of groundwater flow can reveal the flow pattern and explain the observed features. In a complex geologic framework where geologic-hydrogeologic knowledge is limited, a groundwater flow model cannot be constructed from borehole data alone, but geophysical information can aid model building. The integrated model construction was presented via the case study of the Tihany Peninsula, Hungary, with the aims of understanding the background and occurrence of groundwater-related environmental phenomena, such as wetlands, surface water-groundwater interaction and slope instability, and revealing the potential effects of anthropogenic activity and climate change. The hydrogeologic model was prepared on the basis of a compiled archive geophysical database and the results of recently performed geophysical measurements, complemented with geologic-hydrogeologic data. Derivation of electrostratigraphic units, identification of fracturing and detection of tectonic elements were achieved by systematically combined electromagnetic geophysical methods. The deduced information can be used as model input for groundwater flow simulation concerning hydrostratigraphy, geometry and boundary conditions. The results of numerical modelling were interpreted on the basis of the gravity-driven regional groundwater flow concept and validated by field mapping of groundwater-related phenomena. The 3D model clarified the hydraulic behaviour of the formations, revealed the subsurface hydraulic connection between groundwater and wetlands and displayed the groundwater discharge pattern as well. The position of wetlands, their vegetation type, discharge features and induced landslides were explained as

  9. Simulation modeling based method for choosing an effective set of fault tolerance mechanisms for real-time avionics systems

    NASA Astrophysics Data System (ADS)

    Bakhmurov, A. G.; Balashov, V. V.; Glonina, A. B.; Pashkov, V. N.; Smeliansky, R. L.; Volkanov, D. Yu.

    2013-12-01

    In this paper, the reliability allocation problem (RAP) for real-time avionics systems (RTAS) is considered. The proposed method for solving this problem consists of two steps: (i) creation of an RTAS simulation model at the necessary level of abstraction and (ii) application of a metaheuristic algorithm to find an optimal solution (i.e., to choose an optimal set of fault tolerance techniques). When, during algorithm execution, it is necessary to measure the execution time of some software components, simulation modeling is applied. The simulation modeling procedure itself consists of the following steps: automatic construction of a simulation model of the RTAS configuration and running of this model in a simulation environment to measure the required time. This method was implemented as an experimental software tool. The tool works in cooperation with the DYANA simulation environment. The results of experiments with the implemented method are presented. Finally, future plans for development of the presented method and tool are briefly described.
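    Step (ii) of such a method can be illustrated generically with a simulated-annealing search over which fault-tolerance mechanism to assign to each component; the mechanisms, reliability gains and timing budget below are hypothetical, and the call to the DYANA-based simulation model is replaced by a stub.

```python
# Generic sketch of a metaheuristic (simulated annealing) choosing one
# fault-tolerance mechanism per component. All numbers are hypothetical;
# the real method evaluates timing via a DYANA-based simulation model,
# which is replaced here by the evaluate() stub.
import math, random

random.seed(0)
N_COMPONENTS = 8
MECHANISMS = [  # (name, reliability gain, time overhead)
    ("none", 0.00, 0.0),
    ("recovery block", 0.03, 1.5),
    ("N-version", 0.05, 3.0),
]
TIME_BUDGET = 12.0

def evaluate(assignment):
    """Stub for the simulation model: total reliability gain, penalised if the
    timing budget is exceeded."""
    gain = sum(MECHANISMS[m][1] for m in assignment)
    overhead = sum(MECHANISMS[m][2] for m in assignment)
    return gain if overhead <= TIME_BUDGET else gain - (overhead - TIME_BUDGET)

state = [0] * N_COMPONENTS
best, best_val, temp = state[:], evaluate(state), 1.0
for step in range(2000):
    cand = state[:]
    cand[random.randrange(N_COMPONENTS)] = random.randrange(len(MECHANISMS))
    delta = evaluate(cand) - evaluate(state)
    if delta > 0 or random.random() < math.exp(delta / temp):  # Metropolis-style acceptance
        state = cand
        if evaluate(state) > best_val:
            best, best_val = state[:], evaluate(state)
    temp *= 0.999                                               # cooling schedule

print("chosen mechanisms:", [MECHANISMS[m][0] for m in best], "objective:", round(best_val, 2))
```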

  10. Multivariate Risk Adjustment of Primary Care Patient Panels in a Public Health Setting: A Comparison of Statistical Models.

    PubMed

    Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani

    2016-01-01

    We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss to follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R = 0.20). This model, designed specifically for safety net patients, may prove useful for panel adjustment in other public health settings. PMID:27576054
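    A two-part model of the kind compared above can be sketched as a logistic model for having any visit combined with a count model for positive visit frequencies; the predictors and outcomes below are simulated, not San Francisco Department of Public Health data, and the weighting step is only indicative of how predicted frequencies might feed a panel adjustment.

```python
# Minimal sketch of a two-part model for visit frequency and derived panel weights.
# All data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression, PoissonRegressor

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([
    rng.integers(0, 2, n),          # age/gender category indicator (illustrative)
    rng.integers(0, 2, n),          # chronic diagnosis indicator
    rng.integers(0, 2, n),          # homelessness indicator
])
visits = rng.poisson(lam=np.exp(0.2 + 0.5 * X[:, 1] + 0.3 * X[:, 2]))  # simulated outcome

any_visit = (visits > 0).astype(int)
part1 = LogisticRegression().fit(X, any_visit)        # part 1: Pr(any visit)
pos = visits > 0
part2 = PoissonRegressor().fit(X[pos], visits[pos])   # part 2: count model on visitors

predicted = part1.predict_proba(X)[:, 1] * part2.predict(X)  # expected visit frequency
weights = predicted / predicted.mean()                       # patient weights for panel adjustment
print("adjusted size of a 1500-patient panel:", round(weights[:1500].sum()))
```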

  11. A new data set of soil mineralogy for dust-cycle modeling

    NASA Astrophysics Data System (ADS)

    Journet, E.; Balkanski, Y.; Harrison, S. P.

    2013-09-01

    The mineralogy of airborne dust affects the impact of dust particles on direct and indirect radiative forcing, on atmospheric chemistry and on biogeochemical cycling. It is determined partly by the mineralogy of the dust-source regions and partly by size-dependent fractionation during erosion and transport. Here we present a data set that characterizes the clay and silt sized fractions of global soil units in terms of the abundance of 12 minerals that are important for dust-climate interactions: quartz, feldspars, illite, smectite, kaolinite, chlorite, vermiculite, mica, calcite, gypsum, hematite and goethite. The basic mineralogical information is derived from the literature, and is then expanded following explicit rules, in order to characterize as many soil units as possible. We present three alternative realisations of the mineralogical maps that account for the uncertainties in the mineralogical data. We examine the implications of the new database for calculations of the single scattering albedo of airborne dust and thus for dust radiative forcing.

  12. External Validation of an Instructional Design Model for High Fidelity Simulation: Model Application in a Hospital Setting

    ERIC Educational Resources Information Center

    Wilson, Rebecca D.

    2011-01-01

    The purpose of this study was to investigate the use of the design characteristics component of the Jeffries/National League for Nursing Framework for Designing, Implementing, and Evaluating Simulations when developing a simulation-based approach to teaching structured communication to new graduate nurses. The setting for the study was a medium…

  13. Limit sets for natural extensions of Schelling’s segregation model

    NASA Astrophysics Data System (ADS)

    Singh, Abhinav; Vainchtein, Dmitri; Weiss, Howard

    2011-07-01

    Thomas Schelling developed an influential demographic model that illustrated how, even with relatively mild assumptions on each individual's nearest neighbor preferences, an integrated city would likely unravel to a segregated city, even if all individuals prefer integration. Individuals in Schelling's model cities are divided into two groups of equal number, and each individual is "happy" or "unhappy" when the number of similar neighbors crosses a simple threshold. In this manuscript we consider natural extensions of Schelling's original model that allow the two groups to have different sizes and allow different notions of happiness of an individual. We observe that differences in aggregation patterns of majority and minority groups are highly sensitive to the happiness threshold; for low thresholds, the differences are small, and when the threshold is raised, striking new patterns emerge. We also observe that when individuals strongly prefer to live in integrated neighborhoods, the final states exhibit a new tessellated-like structure.
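    The kind of extension discussed above can be prototyped with a few lines of agent-based simulation; the grid size, vacancy rate, minority fraction and happiness threshold below are arbitrary choices, not the parameter values studied in the paper.

```python
# Minimal sketch of a Schelling-type model with unequal group sizes and a
# tunable happiness threshold. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
SIZE, EMPTY, MINORITY_FRACTION, THRESHOLD = 50, 0.1, 0.3, 0.5

# 0 = empty cell, 1 = majority agent, 2 = minority agent
cells = rng.choice([0, 1, 2], size=(SIZE, SIZE),
                   p=[EMPTY, (1 - EMPTY) * (1 - MINORITY_FRACTION), (1 - EMPTY) * MINORITY_FRACTION])

def unhappy_sites(grid):
    """Return positions of agents whose fraction of similar neighbors is below THRESHOLD."""
    sites = []
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i, j] == 0:
                continue
            nb = grid[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            same = np.count_nonzero(nb == grid[i, j]) - 1       # exclude the agent itself
            occupied = np.count_nonzero(nb) - 1
            if occupied > 0 and same / occupied < THRESHOLD:
                sites.append((i, j))
    return sites

for sweep in range(50):                       # relocate unhappy agents to random empty cells
    movers = unhappy_sites(cells)
    if not movers:
        break
    empties = list(zip(*np.where(cells == 0)))
    rng.shuffle(empties)
    for (i, j), (ei, ej) in zip(movers, empties):
        cells[ei, ej], cells[i, j] = cells[i, j], 0

print("sweeps run:", sweep + 1, "remaining unhappy agents:", len(unhappy_sites(cells)))
```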

  14. Observational constraints to Ricci dark energy model by using: SN, BAO, OHD, fgas data sets

    SciTech Connect

    Xu, Lixin; Wang, Yuting E-mail: wangyuting0719@163.com

    2010-06-01

    We perform a global constraint on the Ricci dark energy model for both the flat case and the non-flat case, using the Markov Chain Monte Carlo (MCMC) method and the combined observational data from the cluster X-ray gas mass fraction, type Ia supernovae (397), baryon acoustic oscillations, the current Cosmic Microwave Background, and the observational Hubble function. In the flat model, we obtain the best fit values of the parameters in the 1σ, 2σ regions: Ω_{m0} = 0.2927^{+0.0420+0.0542}_{−0.0323−0.0388}, α = 0.3823^{+0.0331+0.0415}_{−0.0418−0.0541}, Age/Gyr = 13.48^{+0.13+0.17}_{−0.16−0.21}, H_0 = 69.09^{+2.56+3.09}_{−2.37−3.39}. In the non-flat model, the best fit parameters are found in the 1σ, 2σ regions: Ω_{m0} = 0.3003^{+0.0367+0.0429}_{−0.0371−0.0423}, α = 0.3845^{+0.0386+0.0521}_{−0.0474−0.0523}, Ω_k = 0.0240^{+0.0109+0.0133}_{−0.0130−0.0153}, Age/Gyr = 12.54^{+0.51+0.65}_{−0.37−0.49}, H_0 = 72.89^{+3.31+3.88}_{−3.05−3.72}. Compared to the constraint results for the ΛCDM model using the same datasets, it is shown that the current combined datasets prefer the ΛCDM model to the Ricci dark energy model.
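    The MCMC machinery behind constraints like these can be sketched with a plain Metropolis sampler. Note that the expansion history below is a flat ΛCDM stand-in and the H(z) table is entirely hypothetical, so this illustrates only the fitting procedure, not the Ricci dark energy model itself.

```python
# Generic Metropolis MCMC sketch for cosmological parameter constraints.
# The H(z) data and the LCDM-like expansion history are placeholders.
import numpy as np

rng = np.random.default_rng(6)
z_obs = np.array([0.1, 0.4, 0.9, 1.3, 1.75])
H_obs = np.array([69.0, 82.0, 110.0, 135.0, 165.0])   # hypothetical H(z) in km/s/Mpc
H_err = np.array([5.0, 6.0, 8.0, 10.0, 12.0])

def chi2(theta):
    """Chi-square against the toy H(z) table; flat priors enforced by the bounds."""
    om, h0 = theta
    if not (0.0 < om < 1.0 and 40.0 < h0 < 100.0):
        return np.inf
    H_model = h0 * np.sqrt(om * (1 + z_obs) ** 3 + (1 - om))   # flat LCDM stand-in
    return np.sum(((H_model - H_obs) / H_err) ** 2)

theta, chain = np.array([0.3, 70.0]), []
c2 = chi2(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 1.0])                  # symmetric proposal
    c2_prop = chi2(prop)
    if rng.random() < np.exp(0.5 * (c2 - c2_prop)):            # L proportional to exp(-chi2/2)
        theta, c2 = prop, c2_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                                 # discard burn-in
mean, std = chain.mean(axis=0), chain.std(axis=0)
print(f"Omega_m0 = {mean[0]:.3f} +/- {std[0]:.3f}, H0 = {mean[1]:.1f} +/- {std[1]:.1f}")
```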

  15. Numerical Modeling of Coupled Groundwater and Surface Water Interactions in an Urban Setting

    SciTech Connect

    Rihani, J F; Maxwell, R M

    2007-09-26

    The Dominguez Channel Watershed (DCW), located in the southern portion of Los Angeles County (Figure A.1), drains about 345 square miles into the Los Angeles Harbor. The cities and jurisdictions in DCW are shown in Figure A.2. The largest of these include the cities of Los Angeles, Carson, and Torrance. This watershed is unique in that 93% of its land area is highly developed (i.e. urbanized). The watershed boundaries are defined by a complex network of storm drains and flood control channels, rather than being defined by natural topography. Table (1) shows a summary of different land uses in the Dominguez Channel Watershed (MEC, 2004). The Dominguez Watershed has the highest impervious area of all watersheds in the Los Angeles region. The more impervious the surface, the more runoff is generated during a storm. Storm water runoff can carry previously accumulated contaminants and transport them into receiving water systems. Point sources such as industrial wastewater and municipal sewage as well as urban runoff from commercial, residential, and industrial areas are all recognized as contributors to water quality degradation at DCW. Section 303(d) of the 1972 Federal Clean Water Act (CWA) requires states to identify and report all waters not meeting water quality standards and to develop action plans to pursue the water quality objectives. These plans specify the maximum amount of a given pollutant that the water body of concern can receive and still meet water quality standards. Such plans are called Total Maximum Daily Loads (TMDLs). TMDLs also specify allocations of pollutant loadings to point and non-point sources taking into account natural background pollutant levels. This demonstrates the importance of utilizing scientific tools, such as flow and transport models, to identify contaminant sources, understand integrated flow paths, and assess the effectiveness of water quality management strategies. Since overland flow is a very important component of the water

  16. Partial record of a Miocene geomagnetic field excursion: Paleomagnetic data from the Paiute Ridge volcanic center, southern Nevada

    SciTech Connect

    Ratcliff, C.D.; Geissman, J.W.; Perry, F.V. ); Crowe, B.M. )

    1993-04-01

    In the Paiute Ridge area, northern Halfpint Range, a complex system of late Miocene (about 8.5 Ma) intrusiv