Sample records for previous modeling efforts

  1. On-Board Sound Intensity (OBSI) study: phase 2.

    DOT National Transportation Integrated Search

    2014-05-01

    This is a continuation effort of previous research (Modeling of Quieter Pavement in Florida) and as such is a sister report to the previous final report. Both research efforts pertain to the noise created at the tire/pavement interface, which con...

  2. An Overview of the Human Systems Integration Division

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2015-01-01

    This presentation provides an overview of the Human Systems Integration Division and highlights some of the human performance modeling efforts previously undertaken with MIDAS.

  3. Grading System and Student Effort

    ERIC Educational Resources Information Center

    Paredes, Valentina

    2017-01-01

    Several papers have proposed that the grading system affects students' incentives to exert effort. In particular, the previous literature has compared student effort under relative and absolute grading systems, but the results are mixed and the implications of the models have not been empirically tested. In this paper, I build a model where…

  4. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

    It is noted that previous investigations into the applicability of the Rayleigh curve model to medium scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: uniqueness of the environment studied, the influence of holidays, varying management techniques, and differences in the data studied.
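The Rayleigh effort curve this record refers to can be sketched in a few lines. This is the generic Putnam/Rayleigh form; the total effort K and peak-staffing time used below are illustrative values, not parameters fitted in the thesis.

```python
import math

def rayleigh_effort_rate(t, total_effort, t_peak):
    """Staffing rate at time t under the Rayleigh/Putnam model:
    y(t) = 2*K*a*t*exp(-a*t**2), with a = 1/(2*t_peak**2),
    so the rate peaks exactly at t_peak."""
    a = 1.0 / (2.0 * t_peak ** 2)
    return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

def rayleigh_cumulative_effort(t, total_effort, t_peak):
    """Cumulative effort expended by time t: E(t) = K*(1 - exp(-a*t**2))."""
    a = 1.0 / (2.0 * t_peak ** 2)
    return total_effort * (1.0 - math.exp(-a * t ** 2))

# Illustrative project: 500 person-months total, staffing peaks at month 10
peak_rate = rayleigh_effort_rate(10.0, 500.0, 10.0)
spent_by_peak = rayleigh_cumulative_effort(10.0, 500.0, 10.0)
```

Because the curve has so few free parameters, local deviations in subcycle effort data (holidays, management changes) show up directly as lack of fit, which is consistent with the failure modes the abstract lists.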

  5. Estimating current modal splits.

    DOT National Transportation Integrated Search

    2005-11-01

    This project is the second part in a two-part modeling effort. In previous work*, mode choice was modeled by examining characteristics of individuals and the trips they make. A study of the choices of individuals is necessary for a fundamental un...
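Mode-choice models of the kind this record describes are commonly specified as multinomial logits. The sketch below uses hypothetical modes and utility coefficients, not the study's estimates.

```python
import math

def mode_choice_probabilities(utilities):
    """Multinomial logit: P(mode i) = exp(V_i) / sum_j exp(V_j),
    where V_i is the systematic utility of mode i."""
    vmax = max(utilities.values())  # subtract the max to stabilize the exponentials
    exp_v = {mode: math.exp(v - vmax) for mode, v in utilities.items()}
    total = sum(exp_v.values())
    return {mode: e / total for mode, e in exp_v.items()}

# Hypothetical utilities: -0.05 per minute of travel time, -0.1 per dollar of cost
V = {
    "car":  -0.05 * 20 - 0.1 * 3.00,   # 20 min, $3.00
    "bus":  -0.05 * 35 - 0.1 * 1.50,   # 35 min, $1.50
    "walk": -0.05 * 60,                # 60 min, free
}
probs = mode_choice_probabilities(V)
```

With these made-up coefficients the fastest mode gets the largest share; in practice the coefficients are estimated from observed choices of individuals, as the record describes.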

  6. A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.

    2004-01-01

    Tests are conducted on a quad-redundant fault tolerant flight control computer to establish upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and statistical model are described in this work to analyze the open loop experiment data collected in the reverberation chamber at NASA LaRC as a part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and then a systematic statistical analysis is performed on the data. These combined efforts culminate in an extrapolation of values that in turn support earlier evaluations of the data.

  7. Propulsion System Dynamic Modeling of the NASA Supersonic Concept Vehicle for AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Seidel, Jonathan

    2016-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model will also be summarized in this report.

  8. Propulsion System Dynamic Modeling for the NASA Supersonic Concept Vehicle: AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph; Seidel, Jonathan

    2014-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model, comprising the propulsion system dynamics, the structural dynamics, and aerodynamics, will also be summarized in this report.

  9. Propulsion System Dynamic Modeling of the NASA Supersonic Concept Vehicle for AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Seidel, Jonathan

    2014-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model will also be summarized in this report.

  10. Effortful versus automatic emotional processing in schizophrenia: Insights from a face-vignette task.

    PubMed

    Patrick, Regan E; Rastogi, Anuj; Christensen, Bruce K

    2015-01-01

    Adaptive emotional responding relies on dual automatic and effortful processing streams. Dual-stream models of schizophrenia (SCZ) posit a selective deficit in neural circuits that govern goal-directed, effortful processes versus reactive, automatic processes. This imbalance suggests that when patients are confronted with competing automatic and effortful emotional response cues, they will exhibit diminished effortful responding and intact, possibly elevated, automatic responding compared to controls. This prediction was evaluated using a modified version of the face-vignette task (FVT). Participants viewed emotional faces (automatic response cue) paired with vignettes (effortful response cue) that signalled a different emotion category and were instructed to discriminate the manifest emotion. Patients made less vignette and more face responses than controls. However, the relationship between group and FVT responding was moderated by IQ and reading comprehension ability. These results replicate and extend previous research and provide tentative support for abnormal conflict resolution between automatic and effortful emotional processing predicted by dual-stream models of SCZ.

  11. Adult survival, apparent lamb survival, and body condition of desert bighorn sheep in relation to habitat and precipitation on the Kofa National Wildlife Refuge, Arizona

    USGS Publications Warehouse

    Overstreet, Matthew; Caldwell, Colleen A.; Cain, James W.

    2014-01-01

    The decline of desert bighorn sheep on the Kofa National Wildlife Refuge (KNWR) beginning in 2003 stimulated efforts to determine the factors limiting survival and recruitment. We 1) determined pregnancy rates, body fat, and estimated survival rates of adults and lambs; 2) investigated the relationship between precipitation, forage conditions, previous year's reproductive success, and adult body condition; 3) assessed the relative influence of body condition of adult females, precipitation, and forage characteristics on apparent survival of lambs; and 4) determined the prevalence of disease. To assess the influence of potential limiting factors on female desert bighorn sheep on the KNWR, we modeled percent body fat of adult females as a function of previous year's reproductive effort, age class, and forage conditions (i.e., seasonal NDVI and seasonal precipitation). In addition, we assessed the relative influence of the body condition of adult females, precipitation, and forage conditions (NDVI) on the length of time a lamb was observed at heel. Adult female survival was high in both 2009 (0.90 [SE = 0.05]) and 2010 (0.96 [SE = 0.03]). Apparent lamb survival to 6 months of age was 0.23 (SE = 0.05) during the 2009-2010 and 0.21 (SE = 0.05) during the 2010-2011 lambing seasons. Mean body fat for adult females was 12.03% (SE = 0.479) in 2009-2010 and 11.11% (SE = 0.486) in 2010-2011 and was not significantly different between years. Pregnancy rate was 100% in 2009 and 97.5% in 2010. Models containing previous year's reproductive effort with spring NDVI, and previous year's reproductive effort with spring precipitation, best approximated data on percent body fat in adult females in 2009-2010. In 2010-2011, the two highest-ranking models included previous year's reproductive effort with winter NDVI, and previous year's reproductive effort with winter and spring NDVI.
None of the models assessing the influence of maternal body fat, precipitation, or forage conditions were particularly useful for predicting apparent lamb survival. The high pregnancy rates and body fat levels in excess of 11% do not indicate that this population of desert bighorn was nutritionally stressed during our study, and nutrition is thus likely not contributing to the low lamb survival estimates we observed. However, body condition data during the population decline are not available, and whether this population was nutritionally limited during the initial population decline remains unknown. The prevalence of disease in the Kofa herd may be a limiting factor; however, due to a lack of disease monitoring during the population decline, it is uncertain whether disease contributed to the decline. Further research is needed to fully understand the complex interaction of disease in this population at the individual and population levels and to determine to what extent disease predisposes individuals to predation or other causes of mortality.

  12. Cost Modeling for Space Optical Telescope Assemblies

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2011-01-01

    Parametric cost models are used to plan missions, compare concepts and justify technology investments. This paper reviews an on-going effort to develop cost models for space telescopes. This paper summarizes the methodology used to develop cost models and documents how changes to the database have changed previously published preliminary cost models. While the cost models are evolving, the previously published findings remain valid: it costs less per square meter of collecting aperture to build a large telescope than a small telescope; technology development as a function of time reduces cost; and lower areal density telescopes cost more than more massive telescopes.

  13. Behavioral modeling of human choices reveals dissociable effects of physical effort and temporal delay on reward devaluation.

    PubMed

    Klein-Flügge, Miriam C; Kennerley, Steven W; Saraiva, Ana C; Penny, Will D; Bestmann, Sven

    2015-03-01

    There has been considerable interest from the fields of biology, economics, psychology, and ecology about how decision costs decrease the value of rewarding outcomes. For example, formal descriptions of how reward value changes with increasing temporal delays allow for quantifying individual decision preferences, as in animal species populating different habitats, or normal and clinical human populations. Strikingly, it remains largely unclear how humans evaluate rewards when these are tied to energetic costs, despite the surge of interest in the neural basis of effort-guided decision-making and the prevalence of disorders showing a diminished willingness to exert effort (e.g., depression). One common assumption is that effort discounts reward in a similar way to delay. Here we challenge this assumption by formally comparing competing hypotheses about effort and delay discounting. We used a design specifically optimized to compare discounting behavior for both effort and delay over a wide range of decision costs (Experiment 1). We then additionally characterized the profile of effort discounting free of model assumptions (Experiment 2). Contrary to previous reports, in both experiments effort costs devalued reward in a manner opposite to delay, with small devaluations for lower efforts, and progressively larger devaluations for higher effort-levels (concave shape). Bayesian model comparison confirmed that delay-choices were best predicted by a hyperbolic model, with the largest reward devaluations occurring at shorter delays. In contrast, an altogether different relationship was observed for effort-choices, which were best described by a model of inverse sigmoidal shape that is initially concave. Our results provide a novel characterization of human effort discounting behavior and its first dissociation from delay discounting. 
This enables accurate modelling of cost-benefit decisions, a prerequisite for the investigation of the neural underpinnings of effort-guided choice and for understanding the deficits in clinical disorders characterized by behavioral inactivity.
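The dissociation the abstract reports can be illustrated numerically. The hyperbolic form below is the standard delay-discounting model the authors confirm (V = A / (1 + kD)); the effort function is one generic inverse-sigmoid consistent with the "initially concave" shape they describe, not necessarily their exact parameterization, and the k and turning-point values are hypothetical.

```python
import math

def hyperbolic_delay_value(amount, delay, k):
    """Hyperbolic delay discounting: V = A / (1 + k*D).
    The steepest devaluation occurs at the shortest delays."""
    return amount / (1.0 + k * delay)

def sigmoidal_effort_value(amount, effort, k, turning_point):
    """An inverse-sigmoidal effort discount (illustrative form): little
    devaluation at low effort, steep devaluation near the turning point.
    Normalized so that zero effort preserves the full amount."""
    s = lambda e: 1.0 - 1.0 / (1.0 + math.exp(-k * (e - turning_point)))
    return amount * s(effort) / s(0.0)

# Devaluation over an early vs. a late step of cost (reward = 10 units)
early_delay_drop = hyperbolic_delay_value(10, 0, 0.1) - hyperbolic_delay_value(10, 5, 0.1)
late_delay_drop = hyperbolic_delay_value(10, 20, 0.1) - hyperbolic_delay_value(10, 25, 0.1)
early_effort_drop = sigmoidal_effort_value(10, 0.0, 10, 0.5) - sigmoidal_effort_value(10, 0.2, 10, 0.5)
late_effort_drop = sigmoidal_effort_value(10, 0.4, 10, 0.5) - sigmoidal_effort_value(10, 0.6, 10, 0.5)
```

With these toy parameters, an early delay step costs more subjective value than a late one, while an early effort step costs less than a late one, which is the opposite-shaped discounting the paper reports.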

  14. Behavioral Modeling of Human Choices Reveals Dissociable Effects of Physical Effort and Temporal Delay on Reward Devaluation

    PubMed Central

    Klein-Flügge, Miriam C.; Kennerley, Steven W.; Saraiva, Ana C.; Penny, Will D.; Bestmann, Sven

    2015-01-01

    There has been considerable interest from the fields of biology, economics, psychology, and ecology about how decision costs decrease the value of rewarding outcomes. For example, formal descriptions of how reward value changes with increasing temporal delays allow for quantifying individual decision preferences, as in animal species populating different habitats, or normal and clinical human populations. Strikingly, it remains largely unclear how humans evaluate rewards when these are tied to energetic costs, despite the surge of interest in the neural basis of effort-guided decision-making and the prevalence of disorders showing a diminished willingness to exert effort (e.g., depression). One common assumption is that effort discounts reward in a similar way to delay. Here we challenge this assumption by formally comparing competing hypotheses about effort and delay discounting. We used a design specifically optimized to compare discounting behavior for both effort and delay over a wide range of decision costs (Experiment 1). We then additionally characterized the profile of effort discounting free of model assumptions (Experiment 2). Contrary to previous reports, in both experiments effort costs devalued reward in a manner opposite to delay, with small devaluations for lower efforts, and progressively larger devaluations for higher effort-levels (concave shape). Bayesian model comparison confirmed that delay-choices were best predicted by a hyperbolic model, with the largest reward devaluations occurring at shorter delays. In contrast, an altogether different relationship was observed for effort-choices, which were best described by a model of inverse sigmoidal shape that is initially concave. Our results provide a novel characterization of human effort discounting behavior and its first dissociation from delay discounting. 
This enables accurate modelling of cost-benefit decisions, a prerequisite for the investigation of the neural underpinnings of effort-guided choice and for understanding the deficits in clinical disorders characterized by behavioral inactivity. PMID:25816114

  15. [Psychometric properties of the French version of the Effort-Reward Imbalance model].

    PubMed

    Niedhammer, I; Siegrist, J; Landre, M F; Goldberg, M; Leclerc, A

    2000-10-01

    Two main models are currently used to evaluate psychosocial factors at work: the Job Strain model developed by Karasek and the Effort-Reward Imbalance model. A French version of the first model has been validated for the dimensions of psychological demands and decision latitude. As regards the second one, which evaluates three dimensions (extrinsic effort, reward, and intrinsic effort), there are several versions in different languages, but until recently there was no validated French version. The objective of this study was to explore the psychometric properties of the French version of the Effort-Reward Imbalance model in terms of internal consistency, factorial validity, and discriminant validity. The present study was based on the GAZEL cohort and included the 10,174 subjects who were working at the French national electric and gas company (EDF-GDF) and answered the questionnaire in 1998. A French version of Effort-Reward Imbalance was included in this questionnaire. This version was obtained by a standard forward/backward translation procedure. Internal consistency was satisfactory for the three scales of extrinsic effort, reward, and intrinsic effort: Cronbach's alpha coefficients higher than 0.7 were observed. A one-factor solution was retained for the factor analysis of the scale of extrinsic effort, and a three-factor solution for the factor analysis of reward. The factor analysis of intrinsic effort did not support the expected four-dimension structure. The analysis of discriminant validity displayed significant associations between measures of Effort-Reward Imbalance and the variables of sex, age, education level, and occupational grade. This study is the first one supporting satisfactory psychometric properties of the French version of the Effort-Reward Imbalance model. However, the factorial validity of intrinsic effort could be questioned.
Furthermore, as most previous studies were based on male samples working in specific occupations, the present one is also one of the first to show strong associations between measures of this model and social class variables in a population of men and women employed in various occupations.
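The internal-consistency criterion this record uses (Cronbach's alpha above 0.7) can be computed directly from item scores. The sketch below uses made-up scores for a small hypothetical scale, not the GAZEL data.

```python
def cronbach_alpha(item_columns):
    """Cronbach's alpha for k items answered by the same respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
    `item_columns` is a list of k lists, one column of scores per item."""
    k = len(item_columns)
    n = len(item_columns[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in item_columns) for i in range(n)]
    item_var_sum = sum(sample_var(col) for col in item_columns)
    return (k / (k - 1)) * (1.0 - item_var_sum / sample_var(totals))

# Hypothetical 3-item scale answered by 4 respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]])
```

Items that track each other closely inflate the variance of the totals relative to the item variances, driving alpha toward 1; the 0.7 threshold cited in the abstract is the conventional cut-off for satisfactory internal consistency.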

  16. In-Vehicle Information Systems Behavioral Model and Design Support: Final Report

    DOT National Transportation Integrated Search

    2000-02-16

    A great deal of effort went into producing both the model and the prototype software for this contract. The purpose of this final report is not to duplicate the information provided about these and other topics in previous reports. The purpose is to ...

  17. A Comprehensive Model of the Near-Earth Magnetic Field. Phase 3

    NASA Technical Reports Server (NTRS)

    Sabaka, Terence J.; Olsen, Nils; Langel, Robert A.

    2000-01-01

    The near-Earth magnetic field is due to sources in Earth's core, ionosphere, magnetosphere, lithosphere, and from coupling currents between ionosphere and magnetosphere and between hemispheres. Traditionally, the main field (low degree internal field) and magnetospheric field have been modeled simultaneously, and fields from other sources modeled separately. Such a scheme, however, can introduce spurious features. A new model, designated CMP3 (Comprehensive Model: Phase 3), has been derived from quiet-time Magsat and POGO satellite measurements and observatory hourly and annual means measurements as part of an effort to coestimate fields from all of these sources. This model represents a significant advancement in the treatment of the aforementioned field sources over previous attempts, and includes an accounting for main field influences on the magnetosphere, main field and solar activity influences on the ionosphere, seasonal influences on the coupling currents, a priori characterization of ionospheric and magnetospheric influence on Earth-induced fields, and an explicit parameterization and estimation of the lithospheric field. The result of this effort is a model whose fits to the data are generally superior to previous models and whose parameter states for the various constituent sources are very reasonable.

  18. Effort test failure: toward a predictive model.

    PubMed

    Webb, James W; Batchelor, Jennifer; Meares, Susanne; Taylor, Alan; Marsh, Nigel V

    2012-01-01

    Predictors of effort test failure were examined in an archival sample of 555 traumatically brain-injured (TBI) adults. Logistic regression models were used to examine whether compensation-seeking, injury-related, psychological, demographic, and cultural factors predicted effort test failure (ETF). ETF was significantly associated with compensation-seeking (OR = 3.51, 95% CI [1.25, 9.79]), low education (OR = 0.83 [0.74, 0.94]), self-reported mood disorder (OR = 5.53 [3.10, 9.85]), exaggerated displays of behavior (OR = 5.84 [2.15, 15.84]), psychotic illness (OR = 12.86 [3.21, 51.44]), being foreign-born (OR = 5.10 [2.35, 11.06]), having sustained a workplace accident (OR = 4.60 [2.40, 8.81]), and mild traumatic brain injury severity compared with very severe traumatic brain injury severity (OR = 0.37 [0.13, 0.995]). ETF was associated with a broader range of statistical predictors than has previously been identified, and the relative importance of psychological and behavioral predictors of ETF was evident in the logistic regression model. Variables that might potentially extend the model of ETF are identified for future research efforts.
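The odds ratios in this record come from logistic regression, where OR = exp(coefficient). The toy example below shows that relationship with a single binary predictor; the predictor label and failure counts are fabricated for illustration and are not the study's data.

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical binary predictor (say, compensation-seeking yes/no) vs. effort
# test failure: 5/50 failures without the predictor, 20/50 with it.
xs = [0] * 50 + [1] * 50
ys = [0] * 45 + [1] * 5 + [0] * 30 + [1] * 20
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)  # factor by which the odds of ETF multiply with the predictor
```

Here the group odds are 5/45 and 20/30, so the fitted odds ratio converges to (20/30) / (5/45) = 6, mirroring how each OR in the abstract summarizes one predictor's multiplicative effect on the odds of ETF.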

  19. Cost Modeling for Space Telescope

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2011-01-01

    Parametric cost models are an important tool for planning missions, comparing concepts, and justifying technology investments. This paper presents on-going efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive CERs for OTA cost versus aperture diameter and mass. The results are compared with previously published models.
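A single-variable cost estimating relationship (CER) of the kind described here is typically a power law, cost = a * D^b, fit by least squares in log-log space. The sketch below illustrates the method; the diameter/cost pairs are invented, not the historical mission data used in the paper.

```python
import math

def fit_power_law(x_values, y_values):
    """Fit y = a * x**b by ordinary least squares on (log x, log y),
    the standard form of a single-variable CER."""
    lx = [math.log(x) for x in x_values]
    ly = [math.log(y) for y in y_values]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical (aperture diameter in m, OTA cost in $M) pairs
a, b = fit_power_law([0.5, 1.0, 2.4, 3.5], [30.0, 110.0, 550.0, 1100.0])
predicted_cost = a * 2.0 ** b  # point estimate for a 2 m aperture
```

Since collecting area grows as D^2, any fitted exponent b below 2 (as in this made-up data) means cost per square meter of aperture falls with size, the kind of finding the related cost-modeling record above cites.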

  20. Transient Inverse Calibration of Site-Wide Groundwater Model to Hanford Operational Impacts from 1943 to 1996--Alternative Conceptual Model Considering Interaction with Uppermost Basalt Confined Aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermeul, Vincent R.; Cole, Charles R.; Bergeron, Marcel P.

    2001-08-29

    The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness-of-fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty. These results, however, indicate that additional improvements are required to the conceptual model framework. An investigation was initiated at the end of this basalt inverse modeling effort to determine whether facies-based zonation would improve specific yield parameter estimation results (ACM-2). A description of the justification and methodology to develop this zonation is discussed.

  1. Use of Maple Seedling Canopy Reflectance Dataset for Validation of SART/LEAFMOD Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Bond, Barbara J.; Peterson, David L.

    1999-01-01

    This project was a collaborative effort by researchers at ARC, OSU and the University of Arizona. The goal was to use a dataset obtained from a previous study to "empirically validate a new canopy radiative-transfer model (SART) which incorporates a recently-developed leaf-level model (LEAFMOD)". The document includes a short research summary.

  2. Teaching Complex Dynamic Systems to Young Students with StarLogo

    ERIC Educational Resources Information Center

    Klopfer, Eric; Yoon, Susan; Um, Tricia

    2005-01-01

    In this paper, we report on a program of study called Adventures in Modeling that challenges the traditional scientific method approach in science classrooms using StarLogo modeling software. Drawing upon previous successful efforts with older students, and the related work of other projects working with younger students, we explore: (a) What can…

  3. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  4. A Variable-Instar Climate-Driven Individual Beetle-Based Phenology Model for the Invasive Asian Longhorned Beetle (Coleoptera: Cerambycidae).

    PubMed

    Trotter, R Talbot; Keena, Melody A

    2016-12-01

    Efforts to manage and eradicate invasive species can benefit from an improved understanding of the physiology, biology, and behavior of the target species, and ongoing efforts to eradicate the Asian longhorned beetle (Anoplophora glabripennis Motschulsky) highlight the roles this information may play. Here, we present a climate-driven phenology model for A. glabripennis that provides simulated life-tables for populations of individual beetles under variable climatic conditions that takes into account the variable number of instars beetles may undergo as larvae. Phenology parameters in the model are based on a synthesis of published data and studies of A. glabripennis, and the model output was evaluated using a laboratory-reared population maintained under varying temperatures mimicking those typical of Central Park in New York City. The model was stable under variations in population size, simulation length, and the Julian dates used to initiate individual beetles within the population. Comparison of model results with previously published field-based phenology studies in native and invasive populations indicates both this new phenology model, and the previously published heating-degree-day model show good agreement in the prediction of the beginning of the flight season for adults. However, the phenology model described here avoids underpredicting the cumulative emergence of adults through the season, in addition to providing tables of life stages and estimations of voltinism for local populations. This information can play a key role in evaluating risk by predicting the potential for population growth, and may facilitate the optimization of management and eradication efforts. Published by Oxford University Press on behalf of Entomological Society of America 2016. This work is written by US Government employees and is in the public domain in the US.
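The heating-degree-day model this record compares against accumulates daily warmth above a developmental threshold until an emergence requirement is met. The sketch below shows that accumulation; the threshold and degree-day requirement are placeholders, not published A. glabripennis parameters.

```python
def degree_days(daily_mean_temps_c, base_temp_c):
    """Total degree-days accumulated over a run of daily mean temperatures
    (each day contributes max(0, T_mean - base))."""
    return sum(max(0.0, t - base_temp_c) for t in daily_mean_temps_c)

def emergence_day(daily_mean_temps_c, base_temp_c, required_dd):
    """Index of the first day on which cumulative degree-days reach the
    emergence requirement, or None if it is never reached."""
    accumulated = 0.0
    for day, t in enumerate(daily_mean_temps_c):
        accumulated += max(0.0, t - base_temp_c)
        if accumulated >= required_dd:
            return day
    return None

# Placeholder spring warm-up (daily mean temperatures, deg C) and thresholds
temps = [8.0, 12.0, 15.0, 20.0, 25.0]
first_flight = emergence_day(temps, base_temp_c=10.0, required_dd=15.0)
```

A degree-day threshold like this predicts only the start of the flight season; the individual-based phenology model in the record adds per-beetle life tables, variable instar counts, and voltinism estimates on top of such temperature forcing.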

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, Christopher J.; Freels, James D.; Hobbs, Randy W.

    There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 ( 238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 ( 237Np) dioxide (NpO 2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules in order to generate an experimental knowledge base that incorporate initial non-linear contact heat transfer and fission gas equations, (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using limited available knowledge of fabrication and irradiation characteristics, and (3) the most recent and comprehensive modeling effort of a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point. 
The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs that were necessary due to the limiting fabrication and irradiation data available. The calculated parameters in the final production target model are the most accurate and comprehensive, while still conservative. Over 210 permutations in irradiation time and position were evaluated, and are supported by the most recent inputs and highest fidelity methodology. The results of these analyses show that the models presented in this report provide a robust and reliable basis for previous, current and future experiment safety analyses. In addition, they reveal an evolving knowledge of the steady-state behavior of the NpO 2/Al pellets under irradiation for a variety of target encapsulations and potential conditions.

  6. Psychological and neural mechanisms associated with effort-related cardiovascular reactivity and cognitive control: An integrative approach.

    PubMed

    Silvestrini, Nicolas

    2017-09-01

    Numerous studies have assessed cardiovascular (CV) reactivity as a measure of effort mobilization during cognitive tasks. However, the psychological and neural processes underlying effort-related CV reactivity remain relatively unclear. Previous research reliably found that CV reactivity during cognitive tasks is mainly determined by one region of the brain, the dorsal anterior cingulate cortex (dACC), and that this region is systematically engaged during cognitively demanding tasks. The present integrative approach builds on research on cognitive control and its brain correlates showing that dACC function can be related to conflict monitoring and to the integration of information about task difficulty and success importance, two key variables in determining effort mobilization. In contrast, evidence also indicates that executive cognitive functioning is processed in more lateral regions of the prefrontal cortex. The resulting model suggests that, when automatic cognitive processes are insufficient to sustain behavior, the dACC determines the amount of required and justified effort according to task difficulty and success importance, which leads to proportional adjustments in CV reactivity and executive cognitive functioning. These propositions are discussed in relation to previous findings on effort-related CV reactivity and cognitive performance, new predictions for future studies, and relevance for other self-regulatory processes. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Publication bias and the limited strength model of self-control: has the evidence for ego depletion been overestimated?

    PubMed

    Carter, Evan C; McCullough, Michael E

    2014-01-01

    Few models of self-control have generated as much scientific interest as the limited strength model. One of the entailments of this model, the depletion effect, is the expectation that acts of self-control will be less effective when they follow prior acts of self-control. A previous meta-analysis concluded that the depletion effect is robust and medium in magnitude (d = 0.62). However, when we applied methods for estimating and correcting for small-study effects (such as publication bias) to the data from that meta-analysis, we found very strong signals of publication bias, along with an indication that the depletion effect is actually no different from zero. We conclude that until greater certainty about the size of the depletion effect can be established, circumspection about the existence of this phenomenon is warranted, and that rather than elaborating on the model, research efforts should focus on establishing whether the basic effect exists. We argue that the evidence for the depletion effect is a useful case study for illustrating the dangers of small-study effects as well as some of the possible tools for mitigating their influence in psychological science.
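One common small-study-effects diagnostic of the kind mentioned above is Egger's regression: regress the standardized effect on precision, and an intercept far from zero signals funnel-plot asymmetry such as publication bias. The effect sizes below are invented for illustration, not the meta-analytic data.

```python
# Hedged sketch of Egger's regression test for funnel-plot asymmetry.
# The study effects and standard errors are made-up illustrative numbers.
import numpy as np

def egger_intercept(d, se):
    """Egger regression intercept for effect sizes d with standard errors se."""
    z = np.asarray(d) / np.asarray(se)    # standardized effects
    precision = 1.0 / np.asarray(se)      # predictor
    # Ordinary least squares fit: z = intercept + slope * precision
    slope, intercept = np.polyfit(precision, z, 1)
    return intercept

d  = [0.9, 0.7, 0.6, 0.4, 0.3]        # hypothetical study effects
se = [0.40, 0.30, 0.25, 0.15, 0.10]   # smaller studies have larger se
print(egger_intercept(d, se))  # → ~2.0; a clearly positive intercept suggests asymmetry
```

In practice such a diagnostic is paired with bias-adjusted estimators before concluding that an effect like depletion differs from zero.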

  8. Improved Temperature Dynamic Model of Turbine Subcomponents for Facilitation of Generalized Tip Clearance Control

    NASA Technical Reports Server (NTRS)

    Kypuros, Javier A.; Colson, Rodrigo; Munoz, Afredo

    2004-01-01

    This paper describes efforts conducted to improve dynamic temperature estimations of a turbine tip clearance system to facilitate design of a generalized tip clearance controller. This work builds upon previously conducted and published research and focuses primarily on improving dynamic temperature estimations of the primary components affecting tip clearance (i.e., the rotor, blades, and casing/shroud). The temperature profiles estimated by the previous model iteration, specifically for the rotor and blades, were found to be inaccurate and, more importantly, insufficient to facilitate controller design. Some assumptions made to facilitate the previous results were not valid, and thus improvements are presented here to better match the physical reality. As will be shown, the improved temperature sub-models match a commercially validated model and are sufficiently simplified to aid in controller design.

  9. Dopamine antagonism decreases willingness to expend physical, but not cognitive, effort: a comparison of two rodent cost/benefit decision-making tasks.

    PubMed

    Hosking, Jay G; Floresco, Stan B; Winstanley, Catharine A

    2015-03-01

    Successful decision making often requires weighing a given option's costs against its associated benefits, an ability that appears perturbed in virtually every severe mental illness. Animal models of such cost/benefit decision making overwhelmingly implicate mesolimbic dopamine in our willingness to exert effort for a larger reward. Until recently, however, animal models have invariably manipulated the degree of physical effort, whereas human studies of effort have primarily relied on cognitive costs. Dopamine's relationship to cognitive effort has not been directly examined, nor has the relationship between individuals' willingness to expend mental versus physical effort. It is therefore unclear whether willingness to work hard in one domain corresponds to willingness in the other. Here we utilize a rat cognitive effort task (rCET), wherein animals can choose to allocate greater visuospatial attention for a greater reward, and a previously established physical effort-discounting task (EDT) to examine dopaminergic and noradrenergic contributions to effort. The dopamine antagonists eticlopride and SCH23390 each decreased willingness to exert physical effort on the EDT; these drugs had no effect on willingness to exert mental effort for the rCET. Preference for the high effort option correlated across the two tasks, although this effect was transient. These results suggest that dopamine is only minimally involved in cost/benefit decision making with cognitive effort costs. The constructs of mental and physical effort may therefore comprise overlapping, but distinct, circuitry, and therapeutic interventions that prove efficacious in one effort domain may not be beneficial in another.

  10. Validation of Model Forecasts of the Ambient Solar Wind

    NASA Technical Reports Server (NTRS)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.
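A minimal sketch of the kind of comparative metric such a validation program might report is a mean-square-error skill score of a candidate model against a reference forecast (e.g., an MHD model versus the WSA baseline). The solar wind speed series below are invented for illustration; the actual CCMC metrics may differ.

```python
# Hedged sketch of a skill score: skill = 1 - MSE(model) / MSE(reference).
# Positive skill means the candidate model beats the baseline.
# All speed series are invented placeholder data, in km/s.

def skill_score(observed, model, reference):
    mse = lambda pred: sum((p - o) ** 2 for p, o in zip(pred, observed)) / len(observed)
    return 1.0 - mse(model) / mse(reference)

observed = [420.0, 450.0, 500.0, 480.0]   # hypothetical in situ speeds
mhd      = [430.0, 445.0, 490.0, 470.0]   # hypothetical MHD forecast
wsa      = [400.0, 470.0, 520.0, 450.0]   # hypothetical WSA baseline forecast
print(round(skill_score(observed, mhd, wsa), 3))  # → 0.845
```

Applying one such consistent metric across all models is what allows the relative strengths and weaknesses described above to be exposed.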

  11. Ultraviolet Communication for Medical Applications

    DTIC Science & Technology

    2015-06-01

    In the previous Phase I effort, Directed Energy Inc.'s (DEI) parent company Imaging Systems Technology (IST) demonstrated feasibility of several key … accurately model high path loss. Custom photon-scatter code was rewritten for parallel execution on a graphics processing unit (GPU). The NVidia CUDA …

  12. Brigade Combat Team the World’s Police: Understanding the United States Army Brigade Combat Team’s role in Developing Foreign Police

    DTIC Science & Technology

    2014-06-13

    In September 2013, fifteen gunmen associated with the terrorist group Al-Shabaab conducted an attack in the Westgate Shopping Mall in Nairobi, Kenya. … theory, current U.S. Army doctrine, and the lessons learned from the police development efforts in Iraq, a foreign police development model is … civilian police enablers and an understanding of police theory and the lessons learned from previous police reform efforts.

  13. A model of strategic marketing alliances for hospices: vertical, internal, osmotic alliances and the complete model.

    PubMed

    Starnes, B J; Self, D R

    1999-01-01

    This article builds on two previous research efforts. William J. Winston (1994, 1995) proposed a set of strategies by which health care organizations can benefit from forging strategic alliances. Raadt and Self (1997) proposed a classification model of alliances including horizontal, vertical, internal, and osmotic. In the second of two articles, this paper presents a model of vertical, internal, and osmotic alliances. Advantages and disadvantages of each are discussed. Finally, the complete alliance system model is presented.

  14. Sensitivity to cognitive effort mediates psychostimulant effects on a novel rodent cost/benefit decision-making task.

    PubMed

    Cocker, Paul J; Hosking, Jay G; Benoit, James; Winstanley, Catharine A

    2012-07-01

    Amotivational states and insufficient recruitment of mental effort have been observed in a variety of clinical populations, including depression, traumatic brain injury, post-traumatic stress disorder, and attention deficit hyperactivity disorder. Previous rodent models of effort-based decision making have utilized physical costs whereas human studies of effort are primarily cognitive in nature, and it is unclear whether the two types of effortful decision making are underpinned by the same neurobiological processes. We therefore designed a novel rat cognitive effort task (rCET) based on the 5-choice serial reaction time task, a well-validated measure of attention and impulsivity. Within each trial of the rCET, rats are given the choice between an easy or hard visuospatial discrimination, and successful hard trials are rewarded with double the number of sugar pellets. Similar to previous human studies, stable individual variation in choice behavior was observed, with 'workers' choosing hard trials significantly more than their 'slacker' counterparts. Whereas workers 'slacked off' in response to administration of amphetamine and caffeine, slackers 'worked harder' under amphetamine, but not caffeine. Conversely, these stimulants increased motor impulsivity in all animals. Ethanol did not affect animals' choice but invigorated behavior. In sum, we have shown for the first time that rats are differentially sensitive to cognitive effort when making decisions, independent of other processes such as impulsivity, and these baseline differences can influence the cognitive response to psychostimulants. Such findings could inform our understanding of impairments in effort-based decision making and contribute to treatment development.

  15. Sensitivity to Cognitive Effort Mediates Psychostimulant Effects on a Novel Rodent Cost/Benefit Decision-Making Task

    PubMed Central

    Cocker, Paul J; Hosking, Jay G; Benoit, James; Winstanley, Catharine A

    2012-01-01

    Amotivational states and insufficient recruitment of mental effort have been observed in a variety of clinical populations, including depression, traumatic brain injury, post-traumatic stress disorder, and attention deficit hyperactivity disorder. Previous rodent models of effort-based decision making have utilized physical costs whereas human studies of effort are primarily cognitive in nature, and it is unclear whether the two types of effortful decision making are underpinned by the same neurobiological processes. We therefore designed a novel rat cognitive effort task (rCET) based on the 5-choice serial reaction time task, a well-validated measure of attention and impulsivity. Within each trial of the rCET, rats are given the choice between an easy or hard visuospatial discrimination, and successful hard trials are rewarded with double the number of sugar pellets. Similar to previous human studies, stable individual variation in choice behavior was observed, with ‘workers' choosing hard trials significantly more than their ‘slacker' counterparts. Whereas workers ‘slacked off' in response to administration of amphetamine and caffeine, slackers ‘worked harder' under amphetamine, but not caffeine. Conversely, these stimulants increased motor impulsivity in all animals. Ethanol did not affect animals' choice but invigorated behavior. In sum, we have shown for the first time that rats are differentially sensitive to cognitive effort when making decisions, independent of other processes such as impulsivity, and these baseline differences can influence the cognitive response to psychostimulants. Such findings could inform our understanding of impairments in effort-based decision making and contribute to treatment development. PMID:22453140

  16. Aqueous chloride stress corrosion cracking of titanium - A comparison with environmental hydrogen embrittlement

    NASA Technical Reports Server (NTRS)

    Nelson, H. G.

    1974-01-01

    The physical characteristics of stress corrosion cracking of titanium in an aqueous chloride environment are compared with those of embrittlement of titanium by a gaseous hydrogen environment in an effort to help contribute to the understanding of the possible role of hydrogen in the complex stress corrosion cracking process. Based on previous studies, the two forms of embrittlement are shown to be similar at low hydrogen pressures (100 N/sq m) but dissimilar at higher hydrogen pressures. In an effort to quantify this comparison, tests were conducted in an aqueous chloride solution using the same material and test techniques as had previously been employed in a gaseous hydrogen environment. The results of these tests strongly support models based on hydrogen as the embrittling species in an aqueous chloride environment.

  17. An iterative and targeted sampling design informed by habitat suitability models for detecting focal plant species over extensive areas.

    PubMed

    Wang, Ophelia; Zachmann, Luke J; Sesnie, Steven E; Olsson, Aaryn D; Dickson, Brett G

    2014-01-01

    Prioritizing areas for management of non-native invasive plants is critical, as invasive plants can negatively impact plant community structure. Extensive and multi-jurisdictional inventories are essential to prioritize actions aimed at mitigating the impact of invasions and changes in disturbance regimes. However, previous work devoted little effort to devising sampling methods sufficient to assess the scope of multi-jurisdictional invasion over extensive areas. Here we describe a large-scale sampling design that used species occurrence data, habitat suitability models, and iterative and targeted sampling efforts to sample five species and satisfy two key management objectives: 1) detecting non-native invasive plants across previously unsampled gradients, and 2) characterizing the distribution of non-native invasive plants at landscape to regional scales. Habitat suitability models of five species were based on occurrence records and predictor variables derived from topography, precipitation, and remotely sensed data. We stratified and established field sampling locations according to predicted habitat suitability and phenological, substrate, and logistical constraints. Across previously unvisited areas, we detected at least one of our focal species on 77% of plots. In turn, we used detections from 2011 to improve habitat suitability models and sampling efforts in 2012, as well as additional spatial constraints to increase detections. These modifications resulted in a 96% detection rate at plots. The range of habitat suitability values that identified highly and less suitable habitats and their environmental conditions corresponded to field detections with mixed levels of agreement. Our study demonstrated that an iterative and targeted sampling framework can address sampling bias, reduce time costs, and increase detections. Other studies can extend the sampling framework to develop methods in other ecosystems to provide detection data. 
The sampling methods implemented here provide a meaningful tool when understanding the potential distribution and habitat of species over multi-jurisdictional and extensive areas is needed for achieving management objectives.
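The iterative, targeted allocation described above can be sketched roughly as stratified sampling weighted toward cells of high predicted habitat suitability. The strata thresholds and weights below are illustrative assumptions, not the study's actual design.

```python
# Hedged sketch of suitability-stratified plot allocation: oversample the
# high-suitability stratum. Thresholds, weights, and the fake suitability
# surface are illustrative assumptions only.
import random

def allocate_plots(suitability_by_cell, n_plots, weights):
    """Pick survey cells, weighting effort by habitat-suitability stratum."""
    strata = {"high": [], "med": [], "low": []}
    for cell, s in suitability_by_cell.items():
        stratum = "high" if s >= 0.7 else "med" if s >= 0.4 else "low"
        strata[stratum].append(cell)
    chosen = []
    for name, cells in strata.items():
        k = min(len(cells), round(n_plots * weights[name]))
        chosen += random.sample(cells, k)
    return chosen

cells = {f"cell{i}": i / 20 for i in range(20)}  # fake suitability surface, 0.0-0.95
plots = allocate_plots(cells, 10, {"high": 0.6, "med": 0.3, "low": 0.1})
print(len(plots), "plots selected")
```

In an iterative design, detections from one field season update the suitability model, and the allocation is re-run for the next season, which is how the study raised its detection rate from 77% to 96%.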

  18. An Iterative and Targeted Sampling Design Informed by Habitat Suitability Models for Detecting Focal Plant Species over Extensive Areas

    PubMed Central

    Wang, Ophelia; Zachmann, Luke J.; Sesnie, Steven E.; Olsson, Aaryn D.; Dickson, Brett G.

    2014-01-01

    Prioritizing areas for management of non-native invasive plants is critical, as invasive plants can negatively impact plant community structure. Extensive and multi-jurisdictional inventories are essential to prioritize actions aimed at mitigating the impact of invasions and changes in disturbance regimes. However, previous work devoted little effort to devising sampling methods sufficient to assess the scope of multi-jurisdictional invasion over extensive areas. Here we describe a large-scale sampling design that used species occurrence data, habitat suitability models, and iterative and targeted sampling efforts to sample five species and satisfy two key management objectives: 1) detecting non-native invasive plants across previously unsampled gradients, and 2) characterizing the distribution of non-native invasive plants at landscape to regional scales. Habitat suitability models of five species were based on occurrence records and predictor variables derived from topography, precipitation, and remotely sensed data. We stratified and established field sampling locations according to predicted habitat suitability and phenological, substrate, and logistical constraints. Across previously unvisited areas, we detected at least one of our focal species on 77% of plots. In turn, we used detections from 2011 to improve habitat suitability models and sampling efforts in 2012, as well as additional spatial constraints to increase detections. These modifications resulted in a 96% detection rate at plots. The range of habitat suitability values that identified highly and less suitable habitats and their environmental conditions corresponded to field detections with mixed levels of agreement. Our study demonstrated that an iterative and targeted sampling framework can address sampling bias, reduce time costs, and increase detections. Other studies can extend the sampling framework to develop methods in other ecosystems to provide detection data. 
The sampling methods implemented here provide a meaningful tool when understanding the potential distribution and habitat of species over multi-jurisdictional and extensive areas is needed for achieving management objectives. PMID:25019621

  19. Understanding the Association Between Negative Symptoms and Performance on Effort-Based Decision-Making Tasks: The Importance of Defeatist Performance Beliefs.

    PubMed

    Reddy, L Felice; Horan, William P; Barch, Deanna M; Buchanan, Robert W; Gold, James M; Marder, Stephen R; Wynn, Jonathan K; Young, Jared; Green, Michael F

    2017-11-13

    Effort-based decision-making paradigms are increasingly utilized to gain insight into the nature of motivation deficits. Research has shown associations between effort-based decision making and experiential negative symptoms; however, the associations are not consistent. The current study had two primary goals. First, we aimed to replicate previous findings of a deficit in effort-based decision making among individuals with schizophrenia on a test of cognitive effort. Second, in a large sample combined from the current and a previous study, we sought to examine the association between negative symptoms and effort by including the related construct of defeatist beliefs. The results replicated previous findings of impaired cognitive effort-based decision making in schizophrenia. Defeatist beliefs significantly moderated the association between negative symptoms and effort-based decision making such that there was a strong association between high negative symptoms and deficits in effort-based decision making, but only among participants with high levels of defeatist beliefs. Thus, our findings suggest that the relationship between negative symptoms and effort performance may be understood by taking into account the role of defeatist beliefs, a finding that might explain discrepancies in previous studies. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center 2017.

  20. Design, fabrication and test of a trace contaminant control system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A trace contaminant control system was designed, fabricated, and evaluated to determine the suitability of the system concept for future manned spacecraft. Two different models were considered. The load model initially required by the contract was based on the Space Station Prototype (SSP) general specification SVSK HS4655, reflecting a change from the 9-man crew of the model developed in previous phases of this effort to a 6-man crew. Trade studies and a system preliminary design were accomplished based on this contaminant load, including computer analyses to define the optimum system configuration in terms of component arrangements, flow rates, and component sizing. At the completion of the preliminary design effort, a revised contaminant load model was developed for the SSP. Additional analyses were then conducted to define the impact of this new contaminant load model on the system configuration. A full-scale foam-core mock-up with the appropriate SSP system interfaces was also fabricated.

  1. Establishing conservation baselines with dynamic distribution models for bat populations facing imminent decline

    USGS Publications Warehouse

    Rodhouse, Thomas J.; Ormsbee, Patricia C.; Irvine, Kathryn M.; Vierling, Lee A.; Szewczak, Joseph M.; Vierling, Kerri T.

    2015-01-01

    Landscape keystone structures associated with roosting habitat emerged as regionally important predictors of bat distributions. The challenges of bat monitoring have constrained previous species distribution modelling efforts to temporally static presence-only approaches. Our approach extends to broader spatial and temporal scales than has been possible in the past for bats, making a substantial increase in capacity for bat conservation.

  2. Near-shore and off-shore habitat use by endangered juvenile Lost River and Shortnose Suckers in Upper Klamath Lake, Oregon: 2006 data summary

    USGS Publications Warehouse

    Burdick, Summer M.; Wilkens, Alexander X.; VanderKooi, Scott P.

    2008-01-01

    We continued sampling juvenile suckers in 2006 as part of an effort to develop bioenergetics models for juvenile Lost River and shortnose suckers. This study required us to collect fish to determine growth rates and energy content of juvenile suckers. We followed the sampling protocols and methods described by Hendrixson et al. (2007b) to maintain continuity and facilitate comparisons with data collected in recent years, but sampled at a reduced level of effort compared to previous years (approximately one-third) due to limited funding. Here we present a summary of catch data collected in 2006. Bioenergetics models will be reported separately.

  3. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  4. Multi-component testing using HZ-PAN and AgZ-PAN Sorbents for OSPREY Model validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garn, Troy G.; Greenhalgh, Mitchell; Lyon, Kevin L.

    2015-04-01

    In efforts to further develop the capability of the Off-gas SeParation and RecoverY (OSPREY) model, multi-component tests were completed using both HZ-PAN and AgZ-PAN sorbents. The primary purpose of this effort was to obtain multi-component xenon and krypton capacities for comparison to future OSPREY predicted multi-component capacities using previously acquired Langmuir equilibrium parameters determined from single component isotherms. Experimental capacities were determined for each sorbent using two feed gas compositions of 1000 ppmv xenon and 150 ppmv krypton in either a helium or air balance. Test temperatures were consistently held at 220 K and the gas flowrate was 50 sccm. Capacities were calculated from breakthrough curves using TableCurve® 2D software by Jandel Scientific. The HZ-PAN sorbent was tested in the custom designed cryostat while the AgZ-PAN was tested in a newly installed cooling apparatus. Previous modeling validation efforts indicated the OSPREY model can be used to effectively predict single component xenon and krypton capacities for both engineered form sorbents. Results indicated good agreement with the experimental and predicted capacity values for both krypton and xenon on the sorbents. Overall, the model predicted slightly elevated capacities for both gases which can be partially attributed to the estimation of the parameters and the uncertainty associated with the experimental measurements. Currently, OSPREY is configured such that one species adsorbs and one does not (i.e. krypton in helium). Modification of OSPREY code is currently being performed to incorporate multiple adsorbing species and non-ideal interactions of gas phase species with the sorbent and adsorbed phases. Once these modifications are complete, the sorbent capacities determined in the present work will be used to validate OSPREY multicomponent adsorption predictions.
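Multi-component capacity predictions from single-component Langmuir parameters, as referenced above, are often made with the competitive (extended) Langmuir form. A hedged sketch follows; the q_max and b parameters are placeholders, not the report's fitted values.

```python
# Hedged sketch of the competitive (extended) Langmuir isotherm for a binary
# mixture such as Xe/Kr:
#   q_i = q_max_i * b_i * p_i / (1 + b_1*p_1 + b_2*p_2)
# Parameter values below are illustrative placeholders only.

def extended_langmuir(q_max, b, p):
    """Loadings (mmol/g) for each species given partial pressures p (kPa)."""
    denom = 1.0 + sum(bi * pi for bi, pi in zip(b, p))
    return [qm * bi * pi / denom for qm, bi, pi in zip(q_max, b, p)]

# Placeholder parameters for (xenon, krypton) at a fixed temperature:
q_max = [1.5, 0.8]      # mmol/g saturation capacities
b     = [0.20, 0.02]    # kPa^-1 affinity constants
p     = [0.10, 0.015]   # kPa partial pressures (~1000 and ~150 ppmv near 1 atm)
q_xe, q_kr = extended_langmuir(q_max, b, p)
print(q_xe, q_kr)
```

The shared denominator is what makes the species compete: raising the xenon partial pressure suppresses the predicted krypton loading, the kind of interaction the single-component model cannot capture.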

  5. Response to conflict among wilderness visitors

    Treesearch

    Ingrid Schneider

    2000-01-01

    Previous conceptual efforts suggest that response to recreational conflict should be framed within an adapted stress-coping response model. An important element in understanding response to conflict is the context of the experience. A basic underlying component of the wilderness experience is privacy, which indicates wilderness visitors are interested in releasing…

  6. Three Reflections on Assessing Safety Training Needs: A Case Study

    ERIC Educational Resources Information Center

    Sleezer, Catherine M.; Kelsey, Kathleen D.; Wood, Thomas E.

    2008-01-01

    Needs assessment plays an important role in training and human performance improvement efforts, but the literature contains little research on this topic. This study extended previous research on the Performance Analysis for Training (PAT) model of needs assessment by examining its implementation to determine environmental and occupational health…

  7. Engine Structural Analysis Software

    NASA Technical Reports Server (NTRS)

    McKnight, R. L.; Maffeo, R. J.; Schrantz, S.; Hartle, M. S.; Bechtel, G. S.; Lewis, K.; Ridgway, M.; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The report describes the technical effort to develop: (1) geometry recipes for nozzles, inlets, disks, frames, shafts, and ducts in finite element form, (2) component design tools for nozzles, inlets, disks, frames, shafts, and ducts which utilize the recipes and (3) an integrated design tool which combines the simulations of the nozzles, inlets, disks, frames, shafts, and ducts with the previously developed combustor, turbine blade, and turbine vane models for a total engine representation. These developments will be accomplished in cooperation and in conjunction with comparable efforts of NASA Glenn Research Center.

  8. A Comprehensive Theory of Integration.

    PubMed

    Singer, Sara J; Kerrissey, Michaela; Friedberg, Mark; Phillips, Russell

    2018-03-01

    Efforts to transform health care delivery to improve care have increasingly focused on care integration. However, variation in how integration is defined has complicated efforts to design, synthesize, and compare studies of integration in health care. Evaluations of integration initiatives would be enhanced by describing them according to clear definitions of integration and specifying which empirical relationships they seek to test: whether among types of integration, or between integration and outcomes of care. Drawing on previous work, we present a comprehensive theoretical model of relationships between types of integration and propose how to measure them.

  9. Modeling of Texture Evolution During Hot Forging of Alpha/Beta Titanium Alloys (Preprint)

    DTIC Science & Technology

    2007-06-01

    … treatment. The approach was validated via an industrial-scale trial comprising hot pancake forging of Ti-6Al-4V. Keywords: Titanium, Texture, Modeling, Strain Partitioning, Variant Selection … A brief review of pertinent previous efforts in the area of texture modeling is presented below.

  10. Detailed Validation of the Bidirectional Effect in Various Case 1 and Case 2 Waters

    DTIC Science & Technology

    2012-03-26

    … of the viewing direction, i.e., they assumed a completely diffuse BRDF. Previous efforts to model/understand the actual BRDF [4-10] have produced … places. Second, the MAG2002 BRDF tables were developed from a radiative transfer (RT) model that used scattering particle phase functions that … in situ measurements from just 3 locations to validate their model; here we used a much larger data set across a wide variety of inherent optical …

  11. Modeling Regional Seismic Waves from Underground Nuclear Explosion

    DTIC Science & Technology

    1989-05-15

    … consider primarily the long-period tangential motions in this pilot study because less computational effort is involved compared to modeling the P-SV system … error testing can be a time-consuming endeavor, but the basic approach has proven effective in previous studies (Vidale et al., 1985; Helmberger and Vidale … at various depths in a variety of basin models were generated to test the above hypothesis. When the source is situated in the sediments and when the

  12. “All Models Are Wrong, but Some Are Useful”

    USGS Publications Warehouse

    Field, Edward H.

    2015-01-01

    Building a new model, especially one used for policy purposes, takes considerable time, effort, and resources. In justifying such expenditures, one inevitably spends a lot of time denigrating previous models. For example, in pitching the third Uniform California Earthquake Rupture Forecast (UCERF3) (http://www.WGCEP.org/UCERF3), criticisms of the previous model included fault‐segmentation assumptions and the lack of multifault ruptures. In the context of including spatiotemporal clustering for operational earthquake forecasting (e.g., Jordan et al., 2011), another criticism has been that previous candidate models not only ignore elastic rebound but also produce results that are antithetical to that theory. For instance, the short‐term earthquake probabilities model (Gerstenberger et al., 2005), which provided California aftershock hazard maps at the U.S. Geological Survey web site between 2005 and 2010, implies that the time of highest likelihood for any rupture will be the moment after it occurs, even for a big one on the San Andreas fault. Furthermore, Monte Carlo simulations imply that excluding elastic rebound in such models also produces unrealistic triggering statistics (Field, 2012).

  13. Explosive particle soil surface dispersion model for detonated military munitions.

    PubMed

    Hathaway, John E; Rishel, Jeremy P; Walsh, Marianne E; Walsh, Michael R; Taylor, Susan

    2015-07-01

    The accumulation of high explosive mass residue from the detonation of military munitions on training ranges is of environmental concern because of its potential to contaminate the soil, surface water, and groundwater. The US Department of Defense wants to quantify, understand, and remediate high explosive mass residue loadings that might be observed on active firing ranges. Previously, efforts using various sampling methods and techniques have resulted in limited success, due in part to the complicated dispersion pattern of the explosive particle residues upon detonation. In our efforts to simulate particle dispersal for high- and low-order explosions on hypothetical firing ranges, we use experimental particle data from detonations of munitions from a 155-mm howitzer, which are common military munitions. The mass loadings resulting from these simulations provide a previously unattained level of detail to quantify the explosive residue source-term for use in soil and water transport models. In addition, the resulting particle placements can be used to test, validate, and optimize particle sampling methods and statistical models as applied to firing ranges. Although the presented results are for a hypothetical 155-mm howitzer firing range, the method can be used for other munition types once the explosive particle characteristics are known.

  14. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    PubMed Central

    Vesperini, Fabio; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No previous study has compared these efforts to automatically recognize novel events from audio signals or provided a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% average F-measure over the three databases. PMID:28182121
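
    The reconstruction-error principle described above can be sketched in a few lines. Here a linear (PCA-based) autoencoder stands in for the paper's LSTM networks, and all data are synthetic; only the detection logic (threshold the reconstruction error learned from normal data) reflects the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" frames: correlated 2-D features (a stand-in for spectral features).
normal = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.5, 0.3]])

# Fit a one-component linear autoencoder (PCA): encode, then reconstruct.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]                          # principal direction = "code" space

def reconstruction_error(frames):
    c = frames - mean
    recon = (c @ basis.T) @ basis       # project into the code space and back
    return np.linalg.norm(c - recon, axis=1)

# Threshold taken from the training data itself (99th percentile of error).
threshold = np.percentile(reconstruction_error(normal), 99)

# Novel events break the learned structure and therefore reconstruct poorly.
novel = rng.normal(size=(50, 2)) * 5.0
flags = reconstruction_error(novel) > threshold
print(f"{flags.mean():.0%} of novel frames flagged")
```

    Frames consistent with the training distribution fall below the threshold; frames that break the learned correlation structure exceed it and are flagged as novel.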

  15. A swallowtail catastrophe model for the emergence of leadership in coordination-intensive groups.

    PubMed

    Guastello, Stephen J; Bond, Robert W

    2007-04-01

    This research extended the previous studies concerning the swallowtail catastrophe model for leadership emergence to coordination-intensive groups. Thirteen 4-person groups composed of undergraduates played an Intersection coordination (card game) task and were allowed to talk while performing it; 13 other groups worked nonverbally. A questionnaire measured leadership emergence at the end of the game along with other social contributions to the groups' efforts. The swallowtail catastrophe model that was evident in previous leadership emergence phenomena in creative problem solving and production groups was found here also. All three control parameters were identified: a general participation variable that was akin to K in the rugged landscape model of self-organization, task control, and whether the groups worked verbally or nonverbally. Several new avenues for future research were delineated.

  16. All-in-one model for designing optimal water distribution pipe networks

    NASA Astrophysics Data System (ADS)

    Aklog, Dagnachew; Hosoi, Yoshihiko

    2017-05-01

    This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose what optimizer to use and compare the results of different optimizers to gain confidence in the performances of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with a reasonable computation effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.
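
    As an illustration of one optimizer family mentioned above (the genetic/evolutionary approach), the sketch below sizes a toy five-pipe series network by penalized evolutionary search. The network layout, diameters, costs, and head-loss limit are invented for the example and are not the paper's benchmark problems.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 5 pipes in series; candidate diameters (m) and unit costs ($/m).
diams = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
costs = np.array([30.0, 50.0, 80.0, 120.0, 170.0])
length, flow, h_max = 100.0, 0.02, 30.0   # m per pipe, m^3/s, allowable loss (m)

def head_loss(d):
    # Hazen-Williams head loss per pipe, roughness C = 130 (assumed).
    return 10.67 * length * (flow / 130.0) ** 1.852 / d ** 4.87

def fitness(genome):
    d = diams[genome]
    cost = (costs[genome] * length).sum()
    excess = max(0.0, head_loss(d).sum() - h_max)
    return cost + 1e6 * excess            # heavy penalty for infeasible designs

# Evolutionary search: truncation selection plus random-reset mutation.
pop = rng.integers(0, len(diams), size=(60, 5))
for _ in range(150):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[scores.argsort()[:30]]  # survivors carry over unchanged
    children = parents[rng.integers(0, 30, size=30)].copy()
    mutate = rng.random(children.shape) < 0.1
    children[mutate] = rng.integers(0, len(diams), size=mutate.sum())
    pop = np.vstack([parents, children])

best = pop[np.array([fitness(g) for g in pop]).argmin()]
print("diameters (m):", diams[best], "cost: $", (costs[best] * length).sum())
```

    The penalty term steers the search toward designs that satisfy the head-loss constraint while selection drives the cost down; an LP or heuristic optimizer plugged into the same fitness interface would make the results directly comparable, which is the kind of side-by-side comparison the model supports.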

  17. Using a cloud to replenish parched groundwater modeling efforts.

    PubMed

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  18. Using a cloud to replenish parched groundwater modeling efforts

    USGS Publications Warehouse

    Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  19. A model to estimate cost-savings in diabetic foot ulcer prevention efforts.

    PubMed

    Barshes, Neal R; Saedi, Samira; Wrobel, James; Kougias, Panos; Kundakcioglu, O Erhun; Armstrong, David G

    2017-04-01

    Sustained efforts at preventing diabetic foot ulcers (DFUs) and subsequent leg amputations are sporadic in most health care systems despite the high costs associated with such complications. We sought to estimate effectiveness targets at which cost-savings (i.e., improved health outcomes at decreased total costs) might occur. A Markov model with probabilistic sensitivity analyses was used to simulate the five-year survival, incidence of foot complications, and total health care costs in a hypothetical population of 100,000 people with diabetes. Clinical event and cost estimates were obtained from previously published trials and studies. A population without previous DFU but with 17% neuropathy and 11% peripheral artery disease (PAD) prevalence was assumed. Primary prevention (PP) was defined as reducing initial DFU incidence. PP was more than 90% likely to provide cost-savings when annual prevention costs are less than $50/person and/or annual DFU incidence is reduced by at least 25%. Efforts directed at patients with diabetes who were at moderate or high risk for DFUs were very likely to provide cost-savings if DFU incidence was decreased by at least 10% and/or the cost was less than $150 per person per year. Low-cost DFU primary prevention efforts producing even small decreases in DFU incidence may provide the best opportunity for cost-savings, especially if focused on patients with neuropathy and/or PAD. Mobile phone-based reminders, self-identification of risk factors (e.g., the Ipswich touch test), and written brochures may be among such low-cost interventions that should be investigated for cost-savings potential. Published by Elsevier Inc.
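
    A stripped-down version of such a Markov cohort model shows how cost-savings can arise. All transition probabilities and costs below are placeholder assumptions for the sketch, not the study's calibrated inputs; only the structure (annual transitions, prevention cost applied to intact feet, treatment cost applied to active ulcers) follows the abstract.

```python
import numpy as np

def five_year_cost(p_dfu, annual_prevention_cost=0.0, cohort=100_000):
    # States: foot intact, active DFU, dead; annual transition matrix (assumed).
    P = np.array([
        [0.98 - p_dfu, p_dfu, 0.02],   # intact -> intact / DFU / dead
        [0.30, 0.60, 0.10],            # DFU   -> healed / persists / dead
        [0.00, 0.00, 1.00],            # dead is absorbing
    ])
    state, total = np.array([float(cohort), 0.0, 0.0]), 0.0
    for _ in range(5):
        total += state[0] * annual_prevention_cost   # prevention for intact feet
        state = state @ P
        total += state[1] * 15_000.0                 # assumed annual DFU cost
    return total

baseline = five_year_cost(p_dfu=0.04)
# Prevention scenario: 25% lower DFU incidence at $50 per person per year.
prevention = five_year_cost(p_dfu=0.03, annual_prevention_cost=50.0)
print(f"five-year saving per person: ${(baseline - prevention) / 100_000:,.2f}")
```

    With these illustrative numbers the avoided ulcer-treatment costs exceed the prevention spending, mirroring the paper's finding that cheap interventions with modest incidence reductions can be cost-saving.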

  20. X-ray microscope for solidification studies

    NASA Technical Reports Server (NTRS)

    Kaukler, William

    1995-01-01

    This report covers the second 6 month period for the year March 1, 1994 to February 28, 1995. The material outlined in this semi-annual report continues from the previous semi-annual report. The Fein Focus Inc. x-ray source was delivered in September and coincides with the beginning of the second 6 month effort. As a result, and as outlined in the statement of work, this period was dedicated to the evaluation, testing and calibration of the x-ray source. In addition, in this period the modeling effort was continued and extended by the Tiger series of Monte-Carlo simulation programs for photon and electron interactions with materials obtained from the Oak Ridge RISC Library. Some further calculations were also made with the absorption model.

  1. X-ray microscope for solidification studies

    NASA Astrophysics Data System (ADS)

    Kaukler, William

    1995-02-01

    This report covers the second 6 month period for the year March 1, 1994 to February 28, 1995. The material outlined in this semi-annual report continues from the previous semi-annual report. The Fein Focus Inc. x-ray source was delivered in September and coincides with the beginning of the second 6 month effort. As a result, and as outlined in the statement of work, this period was dedicated to the evaluation, testing and calibration of the x-ray source. In addition, in this period the modeling effort was continued and extended by the Tiger series of Monte-Carlo simulation programs for photon and electron interactions with materials obtained from the Oak Ridge RISC Library. Some further calculations were also made with the absorption model.

  2. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
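
    The weighted-calibration idea behind the GFLOW/UCODE analysis can be illustrated on a much simpler problem. The sketch below calibrates a single transmissivity from noisy head observations using the Thiem solution, with observation weights and a linearized confidence interval in the spirit of UCODE's output; all numbers are invented and this is not the GFLOW or UCODE software.

```python
import numpy as np

# Thiem steady-state drawdown: s(r) = Q / (2*pi*T) * ln(R / r), so the model is
# linear in beta = 1/T and weighted least squares yields beta directly.
Q, R, T_true = 0.05, 1000.0, 0.01            # m^3/s, m, m^2/s (all invented)
r_obs = np.array([10.0, 50.0, 100.0, 300.0, 600.0])

rng = np.random.default_rng(2)
s_obs = Q / (2 * np.pi * T_true) * np.log(R / r_obs) + rng.normal(0, 0.02, 5)

g = Q / (2 * np.pi) * np.log(R / r_obs)      # sensitivity of s to beta
w = np.full(5, 1 / 0.02**2)                  # weight = 1 / sigma^2 per target
beta = (w * g * s_obs).sum() / (w * g * g).sum()
T_est = 1 / beta

# Linearized standard error -> 95% interval on beta (UCODE reports analogous
# intervals for calibrated parameters and predictions).
se = np.sqrt(1 / (w * g * g).sum())
print(f"T_est = {T_est:.4f} m^2/s, 95% CI on 1/T: "
      f"[{beta - 1.96 * se:.1f}, {beta + 1.96 * se:.1f}]")
```

    The parsimony argument in the abstract is visible even here: with one well-constrained parameter, the confidence interval is tight and interpretable, whereas adding parameters the data cannot resolve would widen the intervals without improving the fit.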

  3. Improving a regional model using reduced complexity and parameter estimation.

    PubMed

    Kelson, Victor A; Hunt, Randall J; Haitjema, Henk M

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.

  4. Relevance of the Implementation of Teeth in Three-Dimensional Vocal Tract Models

    ERIC Educational Resources Information Center

    Traser, Louisa; Birkholz, Peter; Flügge, Tabea Viktoria; Kamberger, Robert; Burdumy, Michael; Richter, Bernhard; Korvink, Jan Gerrit; Echternach, Matthias

    2017-01-01

    Purpose: Recently, efforts have been made to investigate the vocal tract using magnetic resonance imaging (MRI). Due to technical limitations, teeth were omitted in many previous studies on vocal tract acoustics. However, the knowledge of how teeth influence vocal tract acoustics might be important in order to estimate the necessity of…

  5. The Challenge of Collaboration: Organizational Structure and Professional Identity

    ERIC Educational Resources Information Center

    Koester, Jolene; Hellenbrand, Harry; Piper, Terry D.

    2008-01-01

    In 2003, the California State University, Northridge (CSUN) undertook the challenge of becoming a learning-centered institution. In a 2005 article, the authors discussed how the learning-centered model at CSUN has renewed its previous, scattered efforts at student retention. In this article, the authors describe the transformation that occurred at…

  6. "Set Up to Fail": Institutional Racism and the Sabotage of School Improvement

    ERIC Educational Resources Information Center

    Taylor, Dianne L.; Clark, Menthia P.

    2009-01-01

    Data from two previous studies are reanalyzed using the lens of institutional racism to examine district decisions that undermined, or sabotaged, improvement efforts at schools attended by students of color. Opportunities to rectify the sabotage were available but not pursued. A model portrays the interaction between decision-maker intent,…

  7. Automated support for experience-based software management

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.

    1992-01-01

    To effectively manage a software development project, the software manager must have access to key information concerning a project's status. This information includes not only data relating to the project of interest, but also the experience of past development efforts within the environment. This paper describes the concepts and functionality of a software management tool designed to provide this information. This tool, called the Software Management Environment (SME), enables the software manager to compare an ongoing development effort with previous efforts and with models of the 'typical' project within the environment, to predict future project status, to analyze a project's strengths and weaknesses, and to assess the project's quality. In order to provide these functions, the tool utilizes a vast corporate memory that includes a data base of software metrics, a set of models and relationships that describe the software development environment, and a set of rules that capture other knowledge and experience of software managers within the environment. Integrating these major concepts into one software management tool, the SME is a model of the type of management tool needed for all software development organizations.

  8. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This attempt differs from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g., soil porosity); this work instead deals with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (i.e., those in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. © 2009 Elsevier B.V. All rights reserved.
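
    A minimal sketch of the mean-variance idea: treat the proxy simulator's output as correct only up to an additive residual, propagate that residual by Monte Carlo, and choose the cheapest remediation rate that meets the limit at a stated confidence level. The exponential proxy, residual magnitude, and limit below are all assumptions for illustration, not the SOMUM formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed proxy simulator: concentration after pumping at rate q (L/s) is
# c(q) = c0 * exp(-k * q); the true system differs by an additive residual.
c0, k, c_limit = 80.0, 0.12, 10.0            # mg/L, per (L/s), regulatory limit

def mean_variance(q, n=10_000, resid_sd=4.0):
    # Modeling uncertainty enters as a residual on the proxy's output.
    c = c0 * np.exp(-k * q) + rng.normal(0.0, resid_sd, n)
    return c.mean(), c.std()

# Smallest pumping rate whose 95%-confidence concentration meets the limit.
for q in np.arange(5.0, 60.0, 1.0):
    m, s = mean_variance(q)
    if m + 1.645 * s <= c_limit:
        break
print(f"design rate q = {q:.0f} L/s (mean {m:.1f}, sd {s:.1f} mg/L)")
```

    A deterministic design based on the proxy alone would stop pumping as soon as the mean met the limit; accounting for the residual variance pushes the design to a higher rate, which is the protective effect the abstract attributes to mean-variance analysis.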

  9. Aerothermal Ground Testing of Flexible Thermal Protection Systems for Hypersonic Inflatable Aerodynamic Decelerators

    NASA Technical Reports Server (NTRS)

    Bruce, Walter E., III; Mesick, Nathaniel J.; Ferlemann, Paul G.; Siemers, Paul M., III; DelCorso, Joseph A.; Hughes, Stephen J.; Tobin, Steven A.; Kardell, Matthew P.

    2012-01-01

    Flexible thermal protection system (FTPS) development involves the ground testing and analysis necessary to characterize performance of the FTPS candidates prior to flight testing. This paper provides an overview of the analysis and ground testing efforts performed over the last year at the NASA Langley Research Center and in the Boeing Large-Core Arc Tunnel (LCAT). In the LCAT test series, material layups were subjected to aerothermal loads commensurate with peak re-entry conditions enveloping a range of HIAD mission trajectories. The FTPS layups were tested over a heat flux range from 20 to 50 W/cm2 with associated surface pressures of 3 to 8 kPa. To support the testing effort, a significant redesign of the existing shear (wedge) model holder from previous testing efforts was undertaken to develop a new test technique for supporting and evaluating the FTPS in the high-temperature arc jet flow. Since the FTPS test samples typically experience a geometry change during testing, computational fluid dynamic (CFD) models of the arc jet flow field and test model were developed to support the testing effort. The CFD results were used to help determine the test conditions experienced by the test samples as the surface geometry changes. This paper includes an overview of the Boeing LCAT facility, the general approach for testing FTPS, CFD analysis methodology and results, model holder design and test methodology, and selected thermal results of several FTPS layups.

  10. Dredging Equipment Modifications for Detection and Removal of Ordnance

    DTIC Science & Technology

    2006-12-01

    and numerically modeled to describe an underwater munitions detonation within an enclosed hydraulic circuit similar to that found in a dredge... by a numerical modeling effort describing the potential blast effects that can be associated with munitions passing into and through a modern... screen was subsequently removed and bars were welded on the cutterhead (as previously described in Umm Qsar) to construct a “screen” with 7-cm (2.75

  11. Battery Calendar Life Estimator Manual Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jon P. Christophersen; Ira Bloom; Ed Thomas

    2012-10-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  12. Battery Life Estimator Manual Linear Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jon P. Christophersen; Ira Bloom; Ed Thomas

    2009-08-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
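
    The kind of statistical life estimate the manual standardizes can be sketched with a simple empirical fade model. The square-root-of-time form and all data below are illustrative assumptions, not the USABC procedure or its models.

```python
import numpy as np

# Synthetic calendar-fade data and a common empirical form: fade = a * sqrt(t).
t_weeks = np.array([4.0, 8.0, 16.0, 32.0, 48.0, 64.0])
fade_pct = np.array([1.9, 2.9, 4.1, 5.8, 7.0, 8.2])   # capacity fade (%)

# Least-squares fit of the single coefficient a (model is linear in a).
x = np.sqrt(t_weeks)
a = (x * fade_pct).sum() / (x * x).sum()

# Calendar life = time at which fade reaches the 20% end-of-life threshold.
eol_weeks = (20.0 / a) ** 2
print(f"a = {a:.3f} %/sqrt(week); "
      f"estimated calendar life = {eol_weeks / 52:.1f} years")
```

    Extrapolating a fitted degradation curve to an end-of-life threshold is the core of any calendar-life estimate; the manual's contribution is standardizing the test matrix, the candidate model forms, and the statistical treatment so that estimates from different developers are comparable.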

  13. UpTempO Buoys for Understanding and Prediction

    DTIC Science & Technology

    2015-09-30

    to better understand the evolution of heat content in the upper Arctic Ocean within the Seasonal Ice Zone (SIZ), both seasonally during summer... warming and fall cooling, and interannually as sea ice retreats and the warming season lengthens. The effort is a contribution to the multi-investigator... along 140W on SIZRS flights. These were: • One 2013 model held from the previous field season • One 2014 model with spherical hull • Two 2014

  14. Groundwater Pathway Model for the Los Alamos National Laboratory Technical Area 54, Area G, Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauffer, Philip H.; Chu, Shaoping; Miller, Terry A.

    This report consists of four major sections, including this introductory section. Section 2 provides an overview of previous investigations related to the development of the current site-scale model. The methods and data used to develop the 3-D groundwater model and the techniques used to distill that model into a form suitable for use in the GoldSim models are discussed in Section 3. Section 4 presents the results of the model development effort and discusses some of the uncertainties involved. Eight attachments that provide details about the components and data used in this groundwater pathway model are also included with this report. The groundwater modeling effort reported here is a revision of the work that was conducted in 2005 (Stauffer et al., 2005a) in support of the 2008 Area G performance assessment and composite analysis (LANL, 2008). The revision effort was undertaken primarily to incorporate new geologic information that has been collected since 2003 at, and in the vicinity of, Area G. The new data were used to create a more accurate geologic framework model (GFM) that forms the basis of the numerical modeling of the site’s long-term performance. The groundwater modeling uses mean hydrologic properties of the geologic strata underlying Area G; this revision includes an evaluation of the impacts that natural variability in these properties may have on the model projections.

  15. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  16. Enjoying mathematics or feeling competent in mathematics? Reciprocal effects on mathematics achievement and perceived math effort expenditure.

    PubMed

    Pinxten, Maarten; Marsh, Herbert W; De Fraine, Bieke; Van Den Noortgate, Wim; Van Damme, Jan

    2014-03-01

    The multidimensionality of the academic self-concept in terms of domain specificity has been well established in previous studies, whereas its multidimensionality in terms of motivational functions (the so-called affect-competence separation) needs further examination. This study aims at exploring differential effects of enjoyment and competence beliefs on two external validity criteria in the field of mathematics. Data analysed in this study were part of a large-scale longitudinal research project. Following a five-wave design, math enjoyment, math competence beliefs, math achievement, and perceived math effort expenditure measures were repeatedly collected from a cohort of 4,724 pupils in Grades 3-7. Confirmatory factor analysis (CFA) was used to test the internal factor structure of the math self-concept. Additionally, a series of nested models was tested using structural equation modelling to examine longitudinal reciprocal interrelations between math competence beliefs and math enjoyment on the one hand and math achievement and perceived math effort expenditure on the other. Our results showed that CFA models with separate factors for math enjoyment and math competence beliefs fit the data substantially better than models without it. Furthermore, differential relationships between both constructs and the two educational outcomes were observed. Math competence beliefs had positive effects on math achievement and negative effects on perceived math effort expenditure. Math enjoyment had (mild) positive effects on subsequent perceived effort expenditure and math competence beliefs. This study provides further support for the affect-competence separation. Theoretical issues regarding adequate conceptualization and practical consequences for practitioners are discussed. © 2013 The British Psychological Society.

  17. Reliable Communication Models in Interdependent Critical Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun

    Modern critical infrastructure networks are becoming increasingly interdependent, where failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, the communication network, which plays an important role in today's infrastructures to detect and handle failures, has attracted little attention in interdependency studies, and no previous models have captured enough practical features of critical infrastructure networks. In this paper, we study the interdependencies between the communication network and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.

  18. The Site-Scale Saturated Zone Flow Model for Yucca Mountain

    NASA Astrophysics Data System (ADS)

    Al-Aziz, E.; James, S. C.; Arnold, B. W.; Zyvoloski, G. A.

    2006-12-01

    This presentation provides a reinterpreted conceptual model of the Yucca Mountain site-scale flow system, developed subject to all quality assurance procedures. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain, which is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. This effort started from the ground up with a revised and updated hydrogeologic framework model, which incorporates the latest lithology data, and increased grid resolution that better resolves the hydrogeologic framework throughout the model domain. In addition, faults are much better represented using 250 × 250 m spacing (compared to the previous model's 500 × 500 m spacing). Data collected since the previous model calibration effort have been included; they comprise all Nye County water-level data through Phase IV of the Early Warning Drilling Program. Target boundary fluxes are derived from the newest (2004) Death Valley Regional Flow System model from the U.S. Geological Survey. A consistent weighting scheme assigns importance to each measured water-level datum and to each boundary flux extracted from the regional model. The numerical model is calibrated by matching these weighted water-level measurements and boundary fluxes using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (the hydrologic simulation code FEHM v2.24 and the parameter estimation software PEST v5.5) and the model setup facilitate efficient calibration of multiple conceptual models. Analyses evaluate the impact of these updates and additional data on the modeled potentiometric surface and the flowpaths emanating from below the repository. 
After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the proposed repository and compare them to those from the previous model calibration. Specific discharge at a point 5 km from the repository is also examined and found to be within acceptable uncertainty. The results show that the updated model yields a calibration with smaller residuals than the previous model revision while ensuring that flowpaths follow measured gradients and paths derived from hydrochemical analyses. This work was supported by the Yucca Mountain Site Characterization Office as part of the Civilian Radioactive Waste Management Program, which is managed by the U.S. Department of Energy, Yucca Mountain Site Characterization Project. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
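The calibration described above minimizes an objective of the weighted-sum-of-squared-residuals form over head and flux observations; a generic sketch of that objective (the observations, simulated values, and weights below are illustrative, not project data):

```python
def weighted_sse(observed, simulated, weights):
    """Weighted sum of squared residuals, the objective a PEST-style
    parameter-estimation run minimizes (illustrative values only)."""
    return sum(w * (o - s) ** 2
               for o, s, w in zip(observed, simulated, weights))

# Hypothetical water-level targets (m) and simulated equivalents;
# the third observation is down-weighted.
heads_obs = [730.2, 728.9, 731.5]
heads_sim = [730.0, 729.4, 731.1]
weights = [1.0, 1.0, 0.5]
phi = weighted_sse(heads_obs, heads_sim, weights)
```

The weighting scheme mentioned in the abstract corresponds to choosing the `weights` vector so that each datum's influence on the objective reflects its reliability and importance.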

  19. Predictive Cache Modeling and Analysis

    DTIC Science & Technology

    2011-11-01

    metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed... Cache Miss Minimization Technology. To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended to our... it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing

  20. Modifying Taper-Derived Merchantable Height Estimates to Account for Tree Characteristics

    Treesearch

    James A. Westfall

    2006-01-01

    The U.S. Department of Agriculture Forest Service Northeastern Forest Inventory and Analysis program (NE-FIA) is developing regionwide tree-taper equations. Unlike most previous work on modeling tree form, this effort necessarily includes a wide array of tree species. For some species, branching patterns can produce undesirable tree form that reduces the merchantable...

  1. Predicting Student Success: A Naïve Bayesian Application to Community College Data

    ERIC Educational Resources Information Center

    Ornelas, Fermin; Ordonez, Carlos

    2017-01-01

    This research focuses on developing and implementing a continuous Naïve Bayesian classifier for GEAR courses at Rio Salado Community College. Previous implementation efforts using a discrete version did not predict as well (70%) and had deployment issues. The present model achieves higher prediction accuracy (over 90%) for both at-risk and successful…

  2. A Cross-Cultural Three-Step Process Model for Assessing Motivational Interviewing Treatment Fidelity in Thailand

    ERIC Educational Resources Information Center

    Koken, Juline A.; Naar-King, Sylvie; Umasa, Sanya; Parsons, Jeffrey T.; Saengcharnchai, Pichai; Phanuphak, Praphan; Rongkavilit, Chokechai

    2012-01-01

    The provision of culturally relevant yet evidence-based interventions has become crucial to global HIV prevention and treatment efforts. In Thailand, where treatment for HIV has become widely available, medication adherence and risk behaviors remain an issue for Thai youth living with HIV. Previous research on motivational interviewing (MI) has…

  3. USING MM5V3 WITH ETA ANALYSES FOR AIR-QUALITY MODELING AT THE EPA

    EPA Science Inventory

    Efforts have been underway since MM5v3 was released in July 1999 to set up air-quality simulations using Eta analyses as background fields. Our previous simulations used a one-way quadruple-nested set of domains with horizontal grid spacing of 108, 36, 12 and 4 km. With Eta a...

  4. Simulation of Wake Vortex Radiometric Detection via Jet Exhaust Proxy

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.

    2015-01-01

    This paper describes an analysis of the potential of an airborne hyperspectral imaging IR instrument to infer wake vortices via turbine jet exhaust as a proxy. The goal was to determine the requirements for an imaging spectrometer or radiometer to effectively detect the exhaust plume, and by inference, the location of the wake vortices. The effort examines the gas spectroscopy of the various major constituents of turbine jet exhaust and their contributions to the modeled detectable radiance. Initially, a theoretical analysis of wake vortex proxy detection by thermal radiation was realized in a series of simulations. The first stage used the SLAB plume model to simulate turbine jet exhaust plume characteristics, including exhaust gas transport dynamics and concentrations. The second stage used these plume characteristics as input to the Line By Line Radiative Transfer Model (LBLRTM) to simulate responses from an imaging IR hyperspectral spectrometer or radiometer. These numerical simulations generated thermal imagery that was compared with previously reported wake vortex temperature data. This research is a continuation of an effort to specify the requirements for an imaging IR spectrometer or radiometer to make wake vortex measurements. Results of the two-stage simulation will be reported, including instrument specifications for wake vortex thermal detection. These results will be compared with previously reported results for IR imaging spectrometer performance.

  5. Quantitative non-destructive testing

    NASA Technical Reports Server (NTRS)

    Welch, C. S.

    1985-01-01

    The work undertaken during this period included two primary efforts. The first is a continuation of theoretical development from the previous year of models and data analyses for NDE using the Optical Thermal Infra-Red Measurement System (OPTITHIRMS) system, which involves heat injection with a laser and observation of the resulting thermal pattern with an infrared imaging system. The second is an investigation into the use of the thermoelastic effect as an effective tool for NDE. As in the past, the effort is aimed towards NDE techniques applicable to composite materials in structural applications. The theoretical development described produced several models of temperature patterns over several geometries and material types. Agreement between model data and temperature observations was obtained. A model study with one of these models investigated some fundamental difficulties with the proposed method (the primitive equation method) for obtaining diffusivity values in plates of thickness and supplied guidelines for avoiding these difficulties. A wide range of computing speeds was found among the various models, with a one-dimensional model based on Laplace's integral solution being both very fast and very accurate.

  6. Designing Fault-Injection Experiments for the Reliability of Embedded Systems

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2012-01-01

    This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of those efforts, but they have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field data, arguments-from-design, and fault injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault injection with field data and arguments-from-design.
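The flavor of such a reduction, that a failure-probability bound needs only mean recovery times, can be illustrated with a back-of-envelope coincident-fault calculation. This is a hedged illustration of the general idea, not the paper's theorem or its parameter values:

```python
# Illustrative bound: with Poisson fault arrivals at rate lam (per hour)
# and mean recovery time mean_rec (hours), the chance that a second
# fault lands inside one recovery window is at most lam * mean_rec
# (Markov's inequality on the exponential interarrival time); a union
# bound over the expected number of faults handled gives a mission bound.
def coincident_fault_bound(lam, mean_rec, mission_hours):
    expected_faults = lam * mission_hours
    per_fault = lam * mean_rec          # P(second arrival within E[T]) <= lam * E[T]
    return expected_faults * per_fault

# Hypothetical numbers: 1e-4 faults/hour, 0.01-hour mean recovery, 10-hour mission.
bound = coincident_fault_bound(lam=1e-4, mean_rec=0.01, mission_hours=10)
```

The point mirrors the abstract: only the mean of the recovery-time distribution enters the bound, so a fault-injection campaign need only estimate that mean, not the full distribution.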

  7. Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Shapiro, Gerald

    1998-01-01

    This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.

  8. Microgravity Investigation of Crew Reactions in 0-G (MICRO-G)

    NASA Technical Reports Server (NTRS)

    Newman, Dava; Coleman, Charles; Metaxas, Dimitri

    2004-01-01

    There is a need for a human factors, technology-based bioastronautics research effort to develop an integrated system that reduces risk and provides scientific knowledge of astronaut-induced loads and motions during long-duration missions on the International Space Station (ISS), which will lead to appropriate countermeasures. The primary objectives of the Microgravity Investigation of Crew Reactions in 0-G (MICRO-G) research effort are to quantify astronaut adaptation and movement as well as to model motor strategies for differing gravity environments. The overall goal of this research program is to improve astronaut performance and efficiency through the use of rigorous quantitative dynamic analysis, simulation, and experimentation. The MICRO-G research effort provides a modular, kinetic and kinematic capability for the ISS. The collection and evaluation of kinematics (whole-body motion) and dynamics (reacting forces and torques) of astronauts within the ISS will allow for quantification of human motion and performance in weightlessness, gathering fundamental human factors information for design, scientific investigation in the field of dynamics and motor control, technological assessment of microgravity disturbances, and the design of miniaturized, real-time space systems. The proposed research effort builds on a strong foundation of successful microgravity experiments, namely, the EDLS (Enhanced Dynamic Load Sensors) flown aboard the Russian Mir space station (1996-1998) and the DLS (Dynamic Load Sensors) flown on Space Shuttle Mission STS-62. In addition, previously funded NASA ground-based research into sensor technology development and into algorithms that produce three-dimensional (3-D) kinematics from video images has come to fruition, and these efforts culminate in the proposed collaborative MICRO-G flight experiment. 
The required technology and hardware capitalize on previous sensor design, fabrication, and testing and can be flight qualified for a fraction of the cost of an initial spaceflight experiment. Four dynamic load sensors/restraints are envisioned for measurement of astronaut forces and torques. Two standard ISS video cameras record typical astronaut operations and prescribed IVA motions for 3-D kinematics. Forces and kinematics are combined for dynamic analysis of astronaut motion, exploiting the results of the detailed dynamic modeling effort for the quantitative verification of astronaut IVA performance, induced loads, and adaptive control strategies for crewmember whole-body motion in microgravity. This comprehensive effort provides an enhanced human factors approach based on physics-based modeling to identify adaptive performance during long-duration spaceflight, which is critically important for astronaut training as well as for providing a spaceflight database to drive countermeasure design.

  9. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The NIK toolkit described in this paper is the result of an ongoing effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of NIK in selecting optimal computational grain size and studying various scalability metrics.
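Analytic execution-time expressions of the kind such tools emit often combine per-phase computation cost with a latency-plus-bandwidth (Hockney-style) communication term. A generic sketch with illustrative numbers, not the toolkit's actual model format:

```python
# Simple analytic model of one program phase: computation time plus
# communication time modeled as per-message latency + volume / bandwidth.
# All parameter values below are illustrative.
def phase_time(flops, flop_rate, msgs, latency, bytes_sent, bandwidth):
    compute = flops / flop_rate
    communicate = msgs * latency + bytes_sent / bandwidth
    return compute + communicate

# 1e9 flops at 1e8 flop/s, 100 messages at 50 us latency,
# 1e7 bytes at 100 MB/s.
t = phase_time(1e9, 1e8, 100, 50e-6, 1e7, 100e6)
```

Summing such expressions over phases, with message counts and volumes detected automatically from communication patterns, yields the scalability predictions the abstract describes.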

  10. Engineered materials characterization report, volume 3 - corrosion data and modeling update for viability assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCright, R D

    1998-06-30

    This Engineered Materials Characterization Report (EMCR), Volume 3, discusses in considerable detail the work of the past 18 months on testing the candidate materials proposed for the waste-package (WP) container and on modeling the performance of those materials in the Yucca Mountain (YM) repository setting. This report was prepared as an update of information and serves as one of the supporting documents to the Viability Assessment (VA) of the Yucca Mountain Project. Previous versions of the EMCR have provided a history and background of container-materials selection and evaluation (Volume 1), a compilation of physical and mechanical properties for the WP design effort (Volume 2), and corrosion-test data and performance-modeling activities (Volume 3). Because the information in Volumes 1 and 2 is still largely current, those volumes are not being revised. As new information becomes available in the testing and modeling efforts, Volume 3 is periodically updated to include that information.

  11. Software Engineering Laboratory (SEL) cleanroom process model

    NASA Technical Reports Server (NTRS)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where IC's are assembled in dust free 'clean rooms' to prevent the destructive effects of dust. When applying the clean room methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  12. A digital waveguide-based approach for Clavinet modeling and synthesis

    NASA Astrophysics Data System (ADS)

    Gabrielli, Leonardo; Välimäki, Vesa; Penttinen, Henri; Squartini, Stefano; Bilbao, Stefan

    2013-12-01

    The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and especially memory requirements (compared, e.g., to sample playback systems). Pickups and amplifier models have been implemented which enhance the natural character of the sound with respect to previous work. A model has been implemented on a real-time software platform, Pure Data, capable of a 10-voice polyphony with low latency on an embedded device. Finally, subjective listening tests conducted using the current model are compared to previous tests showing slightly improved results.
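For context, the simplest member of the digital-waveguide family this work builds on is the Karplus-Strong plucked-string algorithm. The sketch below is a generic textbook version, not the authors' Clavinet model; the loop gain and filter choices are illustrative:

```python
import random

# Karplus-Strong plucked string: a delay line excited with a noise
# burst, recirculated through a loss factor and an averaging lowpass.
def pluck(frequency, sample_rate=44100, seconds=0.5, seed=0):
    rng = random.Random(seed)
    n = int(sample_rate / frequency)               # delay-line length sets pitch
    line = [rng.uniform(-1, 1) for _ in range(n)]  # noise-burst excitation
    out = []
    for _ in range(int(sample_rate * seconds)):
        sample = line.pop(0)
        out.append(sample)
        # loss (0.996) and two-point average: the damping/dispersion
        # element of the waveguide loop
        line.append(0.996 * 0.5 * (sample + line[0]))
    return out

tone = pluck(220.0)  # roughly A3; decays as energy leaves the loop
```

The paper's contribution sits on top of this kind of string loop: a physically inspired excitation signal plus pickup and amplifier models replacing the raw noise burst and direct output.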

  13. An Analysis of the Impact of Job Search Behaviors on Air Force Company Grade Officer Turnover

    DTIC Science & Technology

    2012-03-01

    pilot tested on Air Force CGOs. Participants were given the definition of passive job search and active job search used in this research effort, and... identifying these different groups and testing the modified model separately within each could yield more accuracy in predicting turnover. This research... the model the same way. Use of the pseudo R², the reported statistics, and the table design were done in the same manner as previous research

  14. Hypothesis Testing of Edge Organizations: Empirically Calibrating an Organizational Model for Experimentation

    DTIC Science & Technology

    2007-06-01

    ...section discusses previous efforts to model and compare knowledge flows between Edge and Hierarchy organizations. Background: Individual... (and Ebbinghaus, 1913), e.g., R(t) = a·t^(-b), where t is time and a and b are scalars (see Figure 2).

  15. Objectives and models of the planetary quarantine program

    NASA Technical Reports Server (NTRS)

    Werber, M.

    1975-01-01

    The objectives of the planetary quarantine program are presented and the history of early contamination prevention efforts is outlined. Contamination models which were previously established are given and include: determination of parameters; symbol nomenclature; and calculations of contamination and hazard probabilities. Planetary quarantine is discussed as an issue of national and international concern. Information on international treaty and meetings on spacecraft sterilization, quarantine standards, and policies is provided. The specific contamination probabilities of the U.S.S.R. Venus 3 flyby are included.

  16. Solar Storm GIC Forecasting: Solar Shield Extension Development of the End-User Forecasting System Requirements

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Mahmood, S.; Ngwira, C.; Balch, C.; Lordan, R.; Fugate, D.; Jacobs, W.; Honkonen, I.

    2015-01-01

    A NASA Goddard Space Flight Center Heliophysics Science Division-led team that includes NOAA Space Weather Prediction Center, the Catholic University of America, Electric Power Research Institute (EPRI), and Electric Research and Management, Inc., recently partnered with the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) to better understand the impact of Geomagnetically Induced Currents (GIC) on the electric power industry. This effort builds on a previous NASA-sponsored Applied Sciences Program for predicting GIC, known as Solar Shield. The focus of the new DHS S&T funded effort is to revise and extend the existing Solar Shield system to enhance its forecasting capability and provide tailored, timely, actionable information for electric utility decision makers. To enhance the forecasting capabilities of the new Solar Shield, a key undertaking is to extend the prediction system coverage across the Contiguous United States (CONUS), as the previous version was only applicable to high latitudes. The team also leverages the latest enhancements in space weather modeling capacity residing at the Community Coordinated Modeling Center to increase the Technological Readiness Level, or Applications Readiness Level, of the system (http://www.nasa.gov/sites/default/files/files/ExpandedARLDefinitions4813.pdf).

  17. Programming biological models in Python using PySB.

    PubMed

    Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K

    2013-01-01

    Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis.
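The macro idea, high-level biochemical actions expanding into low-level rules, can be sketched without PySB itself. The functions below mimic the spirit of PySB's binding and catalysis macros, but they are a plain-Python illustration, not PySB's actual API; the rule encoding and species names are hypothetical:

```python
# Sketch of the "macro" concept: an action-oriented call expands into
# a list of (rule_string, rate) pairs that a simulator could consume.
def bind(a, b, kf, kr):
    """Expand a reversible binding action into forward/reverse rules."""
    complex_name = f"{a}:{b}"
    return [
        (f"{a} + {b} -> {complex_name}", kf),
        (f"{complex_name} -> {a} + {b}", kr),
    ]

def catalyze(enzyme, substrate, product, kf, kr, kcat):
    """Binding followed by conversion, composed from the bind macro."""
    rules = bind(enzyme, substrate, kf, kr)
    rules.append((f"{enzyme}:{substrate} -> {enzyme} + {product}", kcat))
    return rules

# Apoptosis-flavored example: caspase-8 cleaving Bid to tBid
# (rate constants are placeholders).
model = catalyze("C8", "Bid", "tBid", 1e-7, 1e-3, 1.0)
```

Because the macros are ordinary functions, models built this way can be composed, parameterized, and version-controlled like any other program, which is the abstract's central point.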

  18. Programming biological models in Python using PySB

    PubMed Central

    Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K

    2013-01-01

    Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis. PMID:23423320

  19. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
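The classic two-subclass CIR estimator that this work generalizes can be written down directly. The numbers below are a worked toy example, not data from the paper:

```python
# Two-subclass change-in-ratio estimator: p1 and p2 are the observed
# proportions of subclass x before and after a known removal of
# removed_x animals of subclass x out of removed_total animals.
def cir_abundance(p1, p2, removed_x, removed_total):
    """Pre-removal population size: N1 = (Rx - R * p2) / (p1 - p2)."""
    return (removed_x - removed_total * p2) / (p1 - p2)

# Worked check: 1000 animals, 400 of subclass x (p1 = 0.4); removing
# 200 of x leaves 200 of 800, so p2 = 0.25, and the estimator
# recovers the true pre-removal size of 1000.
n_hat = cir_abundance(0.4, 0.25, 200, 200)
```

The paper's generalization adds sampling-effort information so that assumptions can be placed on how encounter probabilities vary over time, not just among subclasses, with this estimator as a special case.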

  20. Agency attributions of mental effort during self-regulated learning.

    PubMed

    Koriat, Asher

    2018-04-01

    Previous results suggest that the monitoring of one's own performance during self-regulated learning is mediated by self-agency attributions and that these attributions can be influenced by poststudy effort-framing instructions. These results pose a challenge to the study of issues of self-agency in metacognition when the objects of self-regulation are mental operations rather than motor actions that have observable outcomes. When participants studied items in Experiment 1 under time pressure, they invested greater study effort in the easier items in the list. However, the effects of effort framing were the same as when learners typically invest more study effort in the more difficult items: Judgments of learning (JOLs) decreased with effort when instructions biased the attribution of effort to nonagentic sources but increased when they biased attribution to agentic sources. However, the effects of effort framing were constrained by parameters of the study task: Interitem differences in difficulty constrained the attribution of effort to agentic regulation (Experiment 2) whereas interitem differences in the incentive for recall constrained the attribution of effort to nonagentic sources (Experiment 3). The results suggest that the regulation and attribution of effort during self-regulated learning occur within a module that is dissociated from the learner's superordinate agenda but is sensitive to parameters of the task. A model specifies the stage at which effort framing affects the effort-JOL relationship by biasing the attribution of effort to agentic or nonagentic sources. The potentialities that exist in metacognition for the investigation of issues of self-agency are discussed.

  1. Groundwater Pathway Model for the Los Alamos National Laboratory Technical Area 21, Material Disposal Area T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauffer, Philip H.; Levitt, Daniel G.; Miller, Terry Ann

    2017-02-09

    This report consists of four major sections, including this introductory section. Section 2 provides an overview of previous investigations related to the development of the current site-scale model. The methods and data used to develop the 3-D groundwater model and the techniques used to distill that model into a form suitable for use in the GoldSim models are discussed in Section 3. Section 4 presents the results of the model development effort and discusses some of the uncertainties involved. Three attachments that provide details about the components and data used in this groundwater pathway model are also included with this report.

  2. Models and theories of prescribing decisions: A review and suggested a new model.

    PubMed

    Murshid, Mohsen Ali; Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory approach rather than a theoretical one. Therefore, this review is an attempt to suggest a conceptual model that explains the theoretical linkages existing between marketing efforts, the patient, the pharmacist, and the physician's decision to prescribe drugs. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, the report identifies and uses several valuable perspectives such as the 'persuasion theory - elaboration likelihood model', the 'stimuli-response marketing model', the 'agency theory', the 'theory of planned behaviour', and 'social power theory' in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research.

  3. Are transnational tobacco companies' market access strategies linked to economic development models? A case study of South Korea.

    PubMed

    Lee, Sungkyu; Holden, Chris; Lee, Kelley

    2013-01-01

    Transnational tobacco companies (TTCs) have used varied strategies to access previously closed markets. Using TTCs' efforts to enter the South Korean market from the late 1980s as a case study, this article asks whether there are common patterns in these strategies that relate to the broader economic development models adopted by targeted countries. An analytical review of the existing literature on TTCs' efforts to access emerging markets was conducted to develop hypotheses relating TTCs' strategies to countries' economic development models. A case study of Korea was then undertaken based on analysis of internal tobacco industry documents. Findings were consistent with the hypothesis that TTCs' strategies in Korea were linked to Korea's export-oriented economic development model and its hostile attitude towards foreign investment. A fuller understanding of TTCs' strategies for expansion globally can be derived by locating them within the economic development models of specific countries or regions. Of foremost importance is the need for governments to carefully balance economic and public health policies when considering liberalisation.

  4. Are transnational tobacco companies’ market access strategies linked to economic development models? A case study of South Korea

    PubMed Central

    Lee, Sungkyu; Holden, Chris; Lee, Kelley

    2013-01-01

    Transnational tobacco companies (TTCs) have used varied strategies to access previously closed markets. Using TTCs’ efforts to enter the South Korean market from the late 1980s as a case study, this article asks whether there are common patterns in these strategies that relate to the broader economic development models adopted by targeted countries. An analytical review of the existing literature on TTCs’ efforts to access emerging markets was conducted to develop hypotheses relating TTCs’ strategies to countries’ economic development models. A case study of Korea was then undertaken based on analysis of internal tobacco industry documents. Findings were consistent with the hypothesis that TTCs’ strategies in Korea were linked to Korea’s export-oriented economic development model and its hostile attitude toward foreign investment. A fuller understanding of TTCs’ strategies for expansion globally can be derived by locating them within the economic development models of specific countries or regions. Of foremost importance is the need for governments to carefully balance economic and public health policies when considering liberalisation. PMID:23327486

  5. Sperm competition games when males invest in paternal care.

    PubMed

    Requena, Gustavo S; Alonzo, Suzanne H

    2017-08-16

    Sperm competition games investigate how males partition limited resources between pre- and post-copulatory competition. Although extensive research has explored how various aspects of mating systems affect this allocation, male allocation between mating, fertilization and parental effort has not previously been considered. Yet, paternal care can be energetically expensive and males are generally predicted to adjust their parental effort in response to expected paternity. Here, we incorporate parental effort into sperm competition games, particularly exploring how the relationship between paternal care and offspring survival affects sperm competition and the relationship between paternity and paternal care. Our results support existing expectations that (i) fertilization effort should increase with female promiscuity and (ii) paternal care should increase with expected paternity. However, our analyses also reveal that the cost of male care can drive the strength of these patterns. When paternal behaviour is energetically costly, increased allocation to parental effort constrains allocation to fertilization effort. As paternal care becomes less costly, the association between paternity and paternal care weakens and may even be absent. By explicitly considering variation in sperm competition and the cost of male care, our model provides an integrative framework for predicting the interaction between paternal care and patterns of paternity. © 2017 The Author(s).

  6. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that significantly reduce the computational complexity. Experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous decomposition approach.
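    The abstract describes a constraint programming model without reproducing it; as background, the round-robin structure being scheduled can be illustrated with the classic circle method. This is a minimal plain-Python sketch of round-robin construction only, not the paper's CP formulation; the function name and team labels are hypothetical.

    ```python
    def round_robin(teams):
        """Generate a single round-robin schedule using the circle method.

        Returns a list of rounds; each round is a list of (a, b) pairings.
        """
        teams = list(teams)
        if len(teams) % 2:
            teams.append(None)  # placeholder giving one team a bye per round
        n = len(teams)
        rounds = []
        for _ in range(n - 1):
            pairs = [(teams[i], teams[n - 1 - i])
                     for i in range(n // 2)
                     if teams[i] is not None and teams[n - 1 - i] is not None]
            rounds.append(pairs)
            # fix the first team, rotate the rest one position
            teams = [teams[0], teams[-1]] + teams[1:-1]
        return rounds

    schedule = round_robin(["A", "B", "C", "D"])
    # 4 teams -> 3 rounds of 2 matches; every pair meets exactly once
    ```

    For an even number of teams n, the method fixes one team and rotates the rest, yielding n - 1 rounds in which every pair meets exactly once; the integrated CP model adds the divisional and symmetry-breaking constraints on top of this structure.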

  7. The Mediating Role of Self-Exertion on the Effects of Effort on Learning Virtues and Emotional Distress in Academic Failure in a Confucian Context

    PubMed Central

    Fwu, Bih-Jen; Chen, Shun-Wen; Wei, Chih-Fen; Wang, Hsiou-Huai

    2017-01-01

    Previous studies have found that in East Asian Confucian societies, hardworking students are often trapped in a dilemma of enjoying a positive moral image while suffering from emotional distress due to academic failure. This study intends to further explore whether the cultural-specific belief in self-exertion acts as a psychological mechanism to lessen these students’ negative emotions. A group of 288 college students in Taiwan were administered a questionnaire to record their responses to past academic failures. The results from structural equation modeling showed that self-exertion functioned as a mediator between the effects of effort on learning virtues and emotional distress. Self-exertion to fulfill one’s duty to oneself positively mediated the effect of effort on learning virtues, whereas self-exertion to fulfill one’s duty to one’s parents negatively mediated the effect of effort on emotional distress. Theoretical and cultural implications are further discussed. PMID:28119648
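    As background to the mediation result reported above, the product-of-coefficients logic behind such structural equation models can be sketched on synthetic data. Everything below is hypothetical and illustrative: the variable names, effect sizes, and noise levels are invented to mimic a positive effort, self-exertion, learning-virtues path, and a simple OLS-based estimate stands in for the study's full SEM.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 288  # same cohort size as the study, but the data below are simulated
    effort = rng.normal(size=n)
    # hypothetical mediator: self-exertion partly driven by effort (invented effect sizes)
    self_exertion = 0.5 * effort + rng.normal(scale=0.8, size=n)
    # hypothetical outcome: learning virtues driven by both direct and indirect paths
    virtues = 0.3 * effort + 0.4 * self_exertion + rng.normal(scale=0.7, size=n)

    def ols(X, y):
        """Least-squares coefficients of y on X (X includes an intercept column)."""
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # path a: effort -> mediator
    a = ols(np.column_stack([np.ones(n), effort]), self_exertion)[1]
    # path b: mediator -> outcome, controlling for effort
    b = ols(np.column_stack([np.ones(n), effort, self_exertion]), virtues)[2]
    indirect = a * b  # product-of-coefficients estimate of the mediated effect
    ```

    A formal analysis would add standard errors (e.g., bootstrapped) for the indirect effect; this sketch shows only the point estimate.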

  8. Changing the Hidden Curriculum of Campus Rape Prevention and Education: Women's Self-Defense as a Key Protective Factor for a Public Health Model of Prevention.

    PubMed

    McCaughey, Martha; Cermele, Jill

    2015-10-16

    Recent activist, policy, and government efforts to engage in campus rape prevention education (RPE), culminating in the 2014 White House Task Force recommendations to combat campus sexual assault, prompt a need to examine the concept of "prevention" in the context of sexual assault on U.S. college campuses and their surrounding community service agencies. This article reviews previous research on effective resistance to sexual assault, showing that self-defense is a well-established protective factor in a public health model of sexual assault prevention. The article goes on to show, through an examination of campus rape prevention efforts framed as "primary prevention," that self-defense is routinely excluded. This creates a hidden curriculum that preserves a gender status quo even while it strives for change. The article concludes with recommendations for how administrators, educators, facilitators, funding agencies, and others can incorporate self-defense into campus RPE for a more effective, data-driven set of sexual assault prevention efforts. © The Author(s) 2015.

  9. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  10. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  11. Predictors of physicians' attitudes toward sharing information with patients and addressing psychosocial needs: a cross-sectional study in Greece.

    PubMed

    Tsimtsiou, Zoi; Benos, Alexios; Garyfallos, Alexandros A; Hatzichristou, Dimitrios

    2012-01-01

    Sharing information with patients and addressing their psychosocial needs are recognized as fundamental practices of patient-centered physicians. Our study explored predictors of physicians' patient-centered attitudes and yielded a better understanding of the relative influences of job satisfaction, employment status, specialty, previous communication skills training, and sociodemographic factors. Physicians who participated in 13 identical workshops offered throughout Greece were invited to complete a battery of anonymous questionnaires (demographics, job satisfaction scale, Patient-Practitioner Orientation Scale-Sharing subscale, and Physician Belief Scale). Prediction models were used to identify predictors of patient-centered attitudes. In total, 400 fully completed questionnaires were returned (response rate 79.8%). Job satisfaction, previous training in communication skills, younger age and lower socioeconomic status were predictors of positive attitudes toward sharing information with patients. Job satisfaction, previous training in communication skills, and stronger religious beliefs were predictors of higher psychosocial orientation. Job satisfaction and training in communication skills should be ensured in the effort to develop and maintain patient-centered attitudes in physicians. Religious beliefs, age, and socioeconomic status should be taken into consideration in the effort to help physicians become aware of their biases.

  12. Summary Analysis: Hanford Site Composite Analysis Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, W. E.; Lehman, L. L.

    2017-06-05

    The Hanford Site’s currently maintained Composite Analysis, originally completed in 1998, requires an update. A previous update effort was undertaken by the U.S. Department of Energy (DOE) in 2001-2005 but was ended before completion to allow the Tank Closure & Waste Management Environmental Impact Statement (TC&WM EIS) (DOE/EIS-0391) to be prepared without the potential for conflicting sitewide models. The EIS was issued in 2012, and the deferral ended with guidance in the memorandum “Modeling to Support Regulatory Decision Making at Hanford” (Williams, 2012), issued to ensure that subsequent modeling is consistent with the EIS.

  13. Statistical and Conceptual Model Testing Geomorphic Principles through Quantification in the Middle Rio Grande River, NM.

    NASA Astrophysics Data System (ADS)

    Posner, A. J.

    2017-12-01

    The Middle Rio Grande River (MRG) traverses New Mexico from Cochiti to Elephant Butte reservoirs. Since the 1100s, cultivation and habitation of this alluvial river valley have required various river training works, and the mid-20th century saw a concerted effort to tame the river through channelization, jetty jacks, and dam construction. A challenge for river managers is to better understand the interactions among river training works, dam construction, and the geomorphic adjustments of a desert river driven by spring snowmelt and summer thunderstorms that carry water and large sediment inputs from upstream and ephemeral tributaries. Because of its importance to the region, a vast wealth of data exists on conditions along the MRG. The investigation presented herein builds upon previous efforts by combining hydraulic model results, digitized planforms, and stream gage records in various statistical and conceptual models in order to test our understanding of this complex system. Spatially continuous variables were clipped by a set of river cross section data collected at decadal intervals since the early 1960s, creating a spatially homogeneous database on which statistical tests were implemented. Conceptual models relate forcing variables to response variables in order to estimate river planform changes. The developed database represents a unique opportunity to quantify and test geomorphic conceptual models under the unique characteristics of the MRG. The results of this investigation provide a spatially distributed characterization of planform variable changes, permitting managers to predict planform at a much higher resolution than previously available, and a better understanding of the relationship between flow regime and planform changes such as changes to longitudinal slope, sinuosity, and width. Lastly, data analysis and model interpretation led to the development of a new conceptual model of the impact of ephemeral tributaries in alluvial rivers.

  14. A Study of the Interaction of Millimeter Wave Fields with Biological Systems.

    DTIC Science & Technology

    1984-07-01

    structurally complex proteins. The third issue is the relevance of the parameters used in previous modeling efforts. The strength of the exciton-phonon ... modes of proteins in the millimeter and submillimeter regions of the electromagnetic spectrum. Specifically, four separate groups of frequencies ... Rhodopseudomonas sphaeroides (4). In industrial or military environments a significant number of personnel are exposed to electromagnetic fields

  15. Dropping vs. Restarting: A Dynamic Analysis of Two Newspaper Subscribing Behaviors.

    ERIC Educational Resources Information Center

    Zhu, Jian-Hua

    In an effort to help describe and explain why people do not read and subscribe to newspapers, a study built on previous research by adding two new contributions: (1) reliance on a four-wave panel data-set rather than on a one-shot survey; and (2) use of a dynamic modeling procedure rather than cross-sectional analysis. The problem with previous…

  16. A Reply to the Comment on "Assessing Discrepancies Between Previous Plate Kinematic Models of Mesozoic Iberia and Their Constraints" by Barnett-Moore Et Al.

    NASA Astrophysics Data System (ADS)

    Barnett-Moore, N.; Font, E.; Neres, M.

    2017-12-01

    We welcome the comments of van Hinsbergen et al. (2017) on the recent efforts of Barnett-Moore et al. (2016). Specifically, van Hinsbergen et al. (2017) raise concerns about two of the major conclusions made by Barnett-Moore et al. (2016). First, Barnett-Moore et al. (2016) choose to negate the Cretaceous Iberian paleomagnetic database as a viable plate kinematic constraint on the plate motions of Mesozoic Iberia. This conclusion, criticized by van Hinsbergen et al. (2017), was based on citing the previous efforts of Neres et al. (2012, 2013), which exposed several shortcomings, elaborated on below, within this data set. Second, van Hinsbergen et al. (2017) criticize Barnett-Moore et al. (2016) for dismissing mantle tomographic interpretations in support of a preserved Cretaceous Pyrenean "subducted slab" beneath northern Africa. Below, we have addressed each of these major criticisms from van Hinsbergen et al. (2017) in a two-section layout, similar to their comment above.

  17. Aqueous chloride stress corrosion cracking of titanium: A comparison with environmental hydrogen embrittlement

    NASA Technical Reports Server (NTRS)

    Nelson, H. G.

    1973-01-01

    The physical characteristics of stress corrosion cracking of titanium in an aqueous chloride environment are compared with those of embrittlement of titanium by a gaseous hydrogen environment, in an effort to clarify the possible role of hydrogen in the complex stress corrosion cracking process. Based on previous studies, the two forms of embrittlement are shown to be similar at low hydrogen pressures (100 N/sq m) but dissimilar at higher hydrogen pressures. To quantify this comparison, tests were conducted in an aqueous chloride solution using the same material and test techniques as had previously been employed in a gaseous hydrogen environment. The results of these tests strongly support models based on hydrogen as the embrittling species in an aqueous chloride environment. Further, it is shown that if hydrogen is the causal species, the effective hydrogen fugacity at the surface of titanium exposed to an aqueous chloride environment is equivalent to a molecular hydrogen pressure of approximately 10 N/sq m.

  18. Recent Developments in Toxico-Cheminformatics: A New ...

    EPA Pesticide Factsheets

    Efforts to improve public access to chemical toxicity information resources, coupled with new high-throughput screening (HTS) data and efforts to systematize legacy toxicity studies, have the potential to significantly improve predictive capabilities in toxicology. Important recent developments include: 1) large and growing public resources that link chemical structures to biological activity and toxicity data in searchable format, and that offer more nuanced and varied representations of activity; 2) standardized relational data models that capture relevant details of chemical treatment and effects of published in vivo experiments; and 3) the generation of large amounts of new data from public efforts that are employing HTS technologies to probe a wide range of bioactivity and cellular processes across large swaths of chemical space. Most recently, EPA’s DSSTox project has published several new EPA chemical data inventories (IRIS, HPV, ToxCast) and added an on-line capability for structure (substructure or similarity)-searching through all or parts of the published DSSTox data files. These efforts are, for the first time in many cases, opening up a structure-paved two-way highway between previously inaccessible or isolated public chemical data repositories and large public resources, such as PubChem. In addition, public initiatives (such as ToxML) are developing systematized data models of toxicity study areas, and introducing standardized templates, contr

  19. User's Manual for LEWICE Version 3.2

    NASA Technical Reports Server (NTRS)

    Wright, William

    2008-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report will present a description of the code inputs and outputs from version 3.2 of this software, which is called LEWICE. This version differs from release 2.0 due to the addition of advanced thermal analysis capabilities for de-icing and anti-icing applications using electrothermal heaters or bleed air applications, the addition of automated Navier-Stokes analysis, an empirical model for supercooled large droplets (SLD) and a pneumatic boot option. An extensive effort was also undertaken to compare the results against the database of electrothermal results which have been generated in the NASA Glenn Icing Research Tunnel (IRT) as was performed for the validation effort for version 2.0. This report will primarily describe the features of the software related to the use of the program. Appendix A has been included to list some of the inner workings of the software or the physical models used. This information is also available in the form of several unpublished documents internal to NASA. This report is intended as a replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this software.

  20. Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.

    The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. Additionally, PNNL supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model, built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized, including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software.
    This document serves both as stand-alone documentation for BEAMS and as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort (Elliott et al. 2004b). The areas most changed since that publication are those discussing the calculation of lighting and HVAC interactive effects (for both lighting and envelope/whole-building projects). This report does not attempt to convey inputs to BEAMS or the methodology of their derivation.

  1. A small, single stage orifice pulse tube cryocooler demonstration

    NASA Technical Reports Server (NTRS)

    Hendricks, John B.

    1990-01-01

    This final report summarizes and presents the analytical and experimental progress of the present effort, whose principal objective was the demonstration of a 0.25 Watt, 80 Kelvin orifice pulse tube refrigerator. The experimental apparatus is described, and the design of a partially optimized pulse tube refrigerator is included. The refrigerator demonstrates an ultimate temperature of 77 K, has a projected cooling power of 0.18 Watts at 80 K, and has a measured cooling power of 1 Watt at 97 K, with an electrical efficiency of 250 Watts/Watt, much better than previous pulse tube refrigerators. Also included are a model that estimates pressure ratio and mass flow within the pulse tube refrigerator from component physical characteristics; a model of pulse tube operation, based on a generalized analysis, adequate to support local optimization of existing designs; and a model of regenerator performance based on an analogy to counterflow heat exchangers.

  2. An adaptive strategy for reducing Feral Cat predation on endangered hawaiian birds

    USGS Publications Warehouse

    Hess, S.C.; Banko, P.C.; Hansen, H.

    2009-01-01

    Despite the long history of Feral Cats Felis catus in Hawai'i, there has been little research to provide strategies to improve control programmes and reduce depredation on endangered species. Our objective was to develop a predictive model to determine how landscape features on Mauna Kea, such as habitat, elevation, and proximity to roads, may affect the number of Feral Cats captured at each trap. We used log-link generalized linear models and QAICc model ranking criteria to determine the effect of these factors. We found that the number of cats captured per trap was related to effort, habitat type, and whether traps were located on the west or north slope of Mauna Kea. We recommend an adaptive management strategy to minimize trapping interference by non-target Small Indian Mongoose Herpestes auropunctatus with toxicants, to focus trapping efforts in māmane Sophora chrysophylla habitat on the west slope of Mauna Kea, and to cluster traps near others that have previously captured multiple cats.
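    The log-link generalized linear model mentioned above is a standard Poisson-family count model. As an illustrative sketch (simulated data and hypothetical covariates, not the study's Mauna Kea dataset), its coefficients can be recovered by iteratively reweighted least squares, with log trap effort entering as an offset:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    # hypothetical covariates: habitat indicator and west-slope indicator
    habitat = rng.integers(0, 2, n)
    west = rng.integers(0, 2, n)
    effort = rng.uniform(5, 30, n)           # trap-nights per trap (exposure)
    X = np.column_stack([np.ones(n), habitat, west])
    beta_true = np.array([-2.0, 0.8, 0.5])   # invented coefficients
    y = rng.poisson(effort * np.exp(X @ beta_true))

    # IRLS for a log-link Poisson GLM with log(effort) as an offset
    offset = np.log(effort)
    beta = np.zeros(3)
    for _ in range(25):
        eta = X @ beta + offset
        mu = np.exp(eta)
        z = eta - offset + (y - mu) / mu     # working response (offset removed)
        W = mu                               # IRLS weights for Poisson/log link
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    # beta now approximates beta_true
    ```

    In practice a packaged GLM routine would be used, with a quasi-likelihood adjustment for overdispersion (which is what QAICc-based model ranking implies); the loop above only exposes the underlying mechanics.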

  3. Prediction of Liquid Slosh Damping Using a High Resolution CFD Tool

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; Purandare, Ravi; Peugeot, John; West, Jeff

    2012-01-01

    Propellant slosh is a potential source of disturbance critical to the stability of space vehicles. The slosh dynamics are typically represented by a mechanical model of a spring mass damper. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control analysis. Our previous effort has demonstrated the soundness of a CFD approach in modeling the detailed fluid dynamics of tank slosh and the excellent accuracy in extracting mechanical properties (slosh natural frequency, slosh mass, and slosh mass center coordinates). For a practical partially-filled smooth wall propellant tank with a diameter of 1 meter, the damping ratio is as low as 0.0005 (or 0.05%). To accurately predict this very low damping value is a challenge for any CFD tool, as one must resolve a thin boundary layer near the wall and must minimize numerical damping. This work extends our previous effort to extract this challenging parameter from first principles: slosh damping for smooth wall and for ring baffle. First the experimental data correlated into the industry standard for smooth wall were used as the baseline validation. It is demonstrated that with proper grid resolution, CFD can indeed accurately predict low damping values from smooth walls for different tank sizes. The damping due to ring baffles at different depths from the free surface and for different sizes of tank was then simulated, and fairly good agreement with experimental correlation was observed. The study demonstrates that CFD technology can be applied to the design of future propellant tanks with complex configurations and with smooth walls or multiple baffles, where previous experimental data is not available.

  4. Prefrontal Cortical Inactivations Decrease Willingness to Expend Cognitive Effort on a Rodent Cost/Benefit Decision-Making Task.

    PubMed

    Hosking, Jay G; Cocker, Paul J; Winstanley, Catharine A

    2016-04-01

    Personal success often necessitates expending greater effort for greater reward but, equally important, also requires judicious use of our limited cognitive resources (e.g., attention). Previous animal models have shown that the prelimbic (PL) and infralimbic (IL) regions of the prefrontal cortex (PFC) are not involved in (physical) effort-based choice, whereas human studies have demonstrated PFC contributions to (mental) effort. Here, we utilize the rat Cognitive Effort Task (rCET) to probe PFC's role in effort-based decision making. In the rCET, animals can choose either an easy trial, where the attentional demand is low but the reward (sugar) is small, or a difficult trial, on which both the attentional demand and reward are greater. Temporary inactivation of PL and IL decreased all animals' willingness to expend mental effort and increased animals' distractibility; PL inactivations more substantially affected performance (i.e., attention), whereas IL inactivations increased motor impulsivity. These data imply that the PFC contributes to attentional resources, and when these resources are diminished, animals shift their choice (via other brain regions) accordingly. Thus, one novel therapeutic approach to deficits in effort expenditure may be to focus on the resources that such decision making requires, rather than the decision-making process per se. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. The evolution of life-history variation in fishes, with particular reference to flatfishes

    NASA Astrophysics Data System (ADS)

    Roff, Derek A.

    This paper explores four aspects of the evolution of life-history variation in fish, with particular reference to the flatfishes: 1. genetic variation and evolutionary response; 2. the size and age at first reproduction; 3. adult lifespan and variation in recruitment; 4. the relationship between reproductive effort and age. Evolutionary response may be limited by previous evolutionary pathways (phylogenetic variation) or by lack of genetic variation due to selection for a single trait. Estimates of heritability suggest, as predicted, that selection is stronger on life-history traits than morphological traits; but there is still adequate genetic variation to permit fairly rapid evolutionary changes. Several approaches to the analysis of the optimal age and size at first reproduction are discussed in the light of a general life-history model based on the assumption that natural selection maximizes r or R0. It is concluded that one of the most important areas of future research is the relationship between reproduction and mortality. Murphy's hypothesis that the reproductive lifespan should increase with variation in spawning success is shown to be incorrect for fish, at least at the level of interspecific comparison. The model of Charlesworth & León predicting the sufficient condition for reproductive effort to increase with age is tested: in 28 of 31 cases the model predicts an increase of reproductive effort with age. These results suggest that, in general, reproductive effort should increase with age in fish. This prediction is confirmed in the 15 species for which adequate data exist.
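    The fitness criteria invoked above are the standard demographic definitions: the intrinsic rate of increase r is defined implicitly by the Euler-Lotka equation, and R0 is the net reproductive rate, where l(x) is survivorship to age x and m(x) is fecundity at age x:

    ```latex
    1 = \int_0^{\infty} e^{-rx}\, l(x)\, m(x)\, dx,
    \qquad
    R_0 = \int_0^{\infty} l(x)\, m(x)\, dx .
    ```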

  6. Analysis of aircraft longitudinal handling qualities

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  7. An analytical approach for predicting pilot induced oscillations

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  8. Higher Order Chemistry Models in the CFD Simulation of Laser-Ablated Carbon Plumes

    NASA Technical Reports Server (NTRS)

    Greendyke, R. B.; Creel, J. R.; Payne, B. T.; Scott, C. D.

    2005-01-01

    Production of single-walled carbon nanotubes (SWNT) has taken place for a number of years and by a variety of methods such as laser ablation, chemical vapor deposition, and arc-jet ablation. Yet, little is actually understood about the exact chemical kinetics and processes that occur in SWNT formation. In recent years, NASA Johnson Space Center has devoted considerable effort to the experimental evaluation of the laser ablation production process for SWNT originally developed at Rice University. To fully understand the nature of the laser ablation process, it is necessary to understand the development of the carbon plume dynamics within the laser ablation oven. The present work is a continuation of previous efforts to model plume dynamics using computational fluid dynamics (CFD). The ultimate goal of the work is to improve understanding of the laser ablation process, and through that improved understanding, refine the laser ablation production of SWNT.

  9. Enthalpy measurement of coal-derived liquids. Technical progress report, August-October 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidnay, A.J.; Yesavage, V.F.

    The correlational effort on the coal syncrudes and model compounds has been proceeding along two fronts. The first involves experimental work on a correlating factor for association in the liquids and the second involves an investigation of the modeling capabilities of cubic equations of state. The first area of investigation is the experimental measurement of a correlating factor for association in coal liquids. The procedure involves molecular weight measurement by freezing point depression. To facilitate these measurements, a simple Beckman freezing point depression apparatus is currently being modified to increase the accuracy, speed, and ease of measurement. The second area of effort has involved establishing a set of cubic equations of state which can adequately model the enthalpy departures of quinoline and m-cresol. To this end, a number of standard and association-specific equations of state have been tested against a data base of previously measured enthalpy departures of m-cresol and quinoline. It has been found that these equations do a quantitatively poor job on m-cresol and quinoline. These problems are probably due to the highly polar nature of m-cresol and, to a lesser extent, quinoline, and to the poor quality of critical parameters for quinoline.

  10. Developing recreational harvest regulations for an unexploited lake trout population

    USGS Publications Warehouse

    Lenker, Melissa A; Weidel, Brian C.; Jensen, Olaf P.; Solomon, Christopher T.

    2016-01-01

    Developing fishing regulations for previously unexploited populations presents numerous challenges, many of which stem from a scarcity of baseline information about abundance, population productivity, and expected angling pressure. We used simulation models to test the effect of six management strategies (catch and release; trophy, minimum, and maximum length limits; and protected and exploited slot length limits) on an unexploited population of Lake Trout Salvelinus namaycush in Follensby Pond, a 393-ha lake located in New York State’s Adirondack Park. We combined field and literature data and mark–recapture abundance estimates to parameterize an age-structured population model and used the model to assess the effects of each management strategy on abundance, catch per unit effort (CPUE), and harvest over a range of angler effort (0–2,000 angler-days/year). Lake Trout density (3.5 fish/ha for fish ≥ age 13, the estimated age at maturity) was similar to densities observed in other unexploited systems, but growth rate was relatively slow. Maximum harvest occurred at levels of effort ≤ 1,000 angler-days/year in all the scenarios considered. Regulations that permitted harvest of large postmaturation fish, such as New York’s standard Lake Trout minimum size limit or a trophy size limit, resulted in low harvest and high angler CPUE. Regulations that permitted harvest of small and sometimes immature fish, such as a protected slot or maximum size limit, allowed high harvest but resulted in low angler CPUE and produced rapid declines in harvest with increases in effort beyond the effort consistent with maximum yield. Management agencies can use these results to match regulations to management goals and to assess the risks of different management options for unexploited Lake Trout populations and other fish species with similar life history traits.
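    The age-structured simulation approach can be illustrated with a deliberately minimal projection model; the survival, recruitment, and harvest parameters below are hypothetical placeholders, not the Follensby Pond estimates:

```python
# Minimal age-structured projection with a minimum-age harvest rule
# (a sketch of the general technique, not the study's actual model).

def project(n0, survival, recruits, F, min_harvest_age, years):
    """Project abundance-at-age forward, harvesting a fraction F of fish
    at or above min_harvest_age each year.
    Returns (final abundance-at-age, cumulative harvest)."""
    n = list(n0)
    ages = len(n)
    total_harvest = 0.0
    for _ in range(years):
        harvest = [F * n[a] if a >= min_harvest_age else 0.0 for a in range(ages)]
        total_harvest += sum(harvest)
        escaped = [n[a] - harvest[a] for a in range(ages)]
        # age every cohort one year; constant recruitment enters the youngest class
        n = [recruits] + [escaped[a] * survival[a] for a in range(ages - 1)]
    return n, total_harvest
```

    Sweeping F (a proxy for angler effort) and the minimum harvest age over such a model is one simple way to compare regulations by the harvest and standing stock they produce.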

  11. CFD Analysis of Emissions for a Candidate N+3 Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud

    2015-01-01

    An effort was undertaken to analyze the performance of a model Lean-Direct Injection (LDI) combustor designed to meet emissions and performance goals for NASA's N+3 program. Computational predictions of Emissions Index (EINOx) and combustor exit temperature were obtained for operation at typical power conditions expected of a small-core, high pressure-ratio (greater than 50), high T3 inlet temperature (greater than 950K) N+3 combustor. Reacting-flow computations were performed with the National Combustion Code (NCC) for a model N+3 LDI combustor, which consisted of a nine-element LDI flame-tube derived from a previous-generation (N+2) thirteen-element LDI design. A consistent approach to mesh-optimization, spray-modeling and kinetics-modeling was used, in order to leverage the lessons learned from previous N+2 flame-tube analysis with the NCC. The NCC predictions for the current, non-optimized N+3 combustor indicated a 74% increase in NOx emissions as compared to that of the emissions-optimized, parent N+2 LDI combustor.

  12. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model.

    PubMed

    Winslow, Brent D; Nguyen, Nam; Venta, Kimberly E

    2017-01-01

    Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.
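    The prediction step can be illustrated with a simplified two-process sketch in the spirit of (but not identical to) the unified model of performance the study used; the time constants, circadian term, and acuity scaling below are all hypothetical:

```python
import math

# Simplified two-process sketch of sleep-based performance prediction:
# a homeostatic sleep-pressure state that rises during wake and decays
# during sleep, plus a sinusoidal circadian modulation. All parameters
# are hypothetical, chosen only to illustrate the structure.

def simulate_pressure(intervals, s0=0.3, tau_wake=18.2, tau_sleep=4.2, dt=0.1):
    """Homeostatic sleep pressure S in [0, 1].
    intervals is a list of (hours, awake?) tuples taken in order."""
    s = s0
    for hours, awake in intervals:
        for _ in range(int(hours / dt)):
            if awake:
                s += dt * (1.0 - s) / tau_wake   # pressure builds toward 1
            else:
                s -= dt * s / tau_sleep          # pressure decays toward 0
    return s

def predicted_acuity(s, phase_hours, circadian_amp=0.1):
    """Higher pressure lowers predicted acuity; a circadian term modulates it."""
    c = circadian_amp * math.sin(2.0 * math.pi * phase_hours / 24.0)
    return max(0.0, 1.0 - s + c)
```

    Fitting s0 and the time constants per participant, rather than for the group, is the kind of individualization the study found to outperform group models.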

  13. Enabling Software Acquisition Improvement: Government and Industry Software Development Team Acquisition Model

    DTIC Science & Technology

    2010-04-30

    ...previous and current complex SW development efforts, the program offices will have a source of objective lessons learned and metrics that can be applied...

  14. An artificial Kepler dichotomy? Implications for the coplanarity of planetary systems

    NASA Astrophysics Data System (ADS)

    Bovaird, Timothy; Lineweaver, Charles H.

    2016-10-01

    We challenge the assumptions present in previous efforts to model the ensemble of detected Kepler systems, which require a dichotomous stellar population of `fertile' and `sterile' planet-producing stars. We remove the assumption of Rayleigh-distributed mutual inclinations between planets and show that the need for two distinct stellar populations disappears when the inner parts of planetary disks are assumed to be flat, rather than flared.
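    The role of the mutual-inclination assumption can be illustrated with a toy Monte Carlo; the transit geometry, angle ranges, and scatter scale below are hypothetical simplifications, not the authors' pipeline:

```python
import math
import random

# Toy Monte Carlo contrasting Rayleigh-distributed mutual inclinations
# with a perfectly flat inner disk. A planet is taken to transit when its
# line-of-sight inclination is within a fixed half-angle (small-angle
# approximation); all angle choices here are hypothetical.

def all_transit_fraction(n_systems, planets_per_system, sigma_deg, seed=1):
    """Fraction of systems in which every planet transits. Each planet's
    offset from the system plane is a signed Rayleigh draw of scale
    sigma_deg; sigma_deg = 0 gives a perfectly flat system."""
    rng = random.Random(seed)
    limit = math.radians(0.5)        # hypothetical transit half-angle
    tilt_range = math.radians(2.0)   # hypothetical range of system-plane tilts
    sigma = math.radians(sigma_deg)
    hits = 0
    for _ in range(n_systems):
        tilt = rng.uniform(-tilt_range, tilt_range)
        ok = True
        for _ in range(planets_per_system):
            offset = 0.0
            if sigma > 0.0:
                # Rayleigh magnitude with a random projection sign
                u = 1.0 - rng.random()   # in (0, 1]
                offset = sigma * math.sqrt(-2.0 * math.log(u)) * rng.choice((-1.0, 1.0))
            if abs(tilt + offset) > limit:
                ok = False
                break
        if ok:
            hits += 1
    return hits / n_systems
```

    Flat systems transit together or not at all, whereas Rayleigh-scattered systems rarely show all planets transiting, which is the geometric effect behind the apparent excess of single-transit systems.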

  15. Steganalysis Techniques for Documents and Images

    DTIC Science & Technology

    2005-05-01

    steganography. We then illustrated the efficacy of our model using variations of LSB steganography. For binary images, we have made significant progress in...efforts have focused on two areas. The first area is LSB steganalysis for grayscale images. Here, as we had proposed (as a challenging task), we have...generalized our previous steganalysis technique of sample pair analysis to a theoretical framework for the detection of LSB steganography. The new

  16. Fungal community structure under goat willows (Salix caprea L.) growing at metal polluted site: the potential of screening in a model phytostabilisation study

    Treesearch

    Marjana Regvar; Matevz Likar; Andrej Piltaver; Nives Kugonic; Jane E. Smith

    2010-01-01

    Goat willow (Salix caprea L.) was selected in a previous vegetation screening study as a potential candidate for the later-stage phytostabilisation efforts at a heavily metal polluted site in Slovenia. The aims of this study were to identify the fungi colonising roots of S. caprea along the gradient of vegetation succession and...

  17. Broadband Scattering from Sand and Sand/Mud Sediments with Extensive Environmental Characterization

    DTIC Science & Technology

    2017-01-30

    experiment, extensive environmental characterization was also performed to support data/model comparisons for both experimental efforts. The site...mechanisms, potentially addressing questions left unresolved from the previous sediment acoustics experiments, SAX99 and SAX04. This work was also to provide...environmental characterization to support the analysis of data collected during the Target and Reverberation Experiment in 2013 (TREX13) as well as

  18. Reduced Delay of Gratification and Effortful Control among Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Faja, Susan; Dawson, Geraldine

    2015-01-01

    We explored internal control of behavior using direct observation and parent report. Previous research has found that both the delay of gratification task and parent-reported effortful control predict later social ability and more positive outcomes in typically developing children. Children with autism spectrum disorder have previously been…

  19. The Fundamental Physical Processes Producing and Controlling Stellar Coronal/Transition Region/Chromospheric Activity and Structure

    NASA Technical Reports Server (NTRS)

    Ayres, T. R.; Brown, A.

    2000-01-01

    Our LTSA (Long Term Space Astrophysics) research has utilized current NASA and ESA spacecraft, supporting ground-based IR, radio, and sub-mm telescopes, and the extensive archives of HST (Hubble Space Telescope), IUE (International Ultraviolet Explorer), ROSAT, EUVE (Extreme Ultraviolet Explorer), and other missions. Our research effort has included observational work (with a nonnegligible groundbased component), specialized processing techniques for imaging and spectral data, and semiempirical modelling, ranging from optically thin emission measure studies to simulations of optically thick resonance lines. In our previous LTSA efforts, we have had a number of major successes, including most recently: organizing and carrying out an extensive cool star UV survey in HST cycle eight; obtaining observing time with new instruments, such as Chandra and XMM (X-ray Multi-Mirror) in their first cycles; collaborating with the Chandra GTO program and participating with the Chandra Emission Line Project on multi-wavelength observations of HR 1099 and Capella. These are the main broad-brush themes of our previous investigation: a) Where do Coronae Occur in the Hertzsprung-Russell Diagram? b) Winds of Coronal and Noncoronal Stars; c) Activity, Age, Rotation Relations; d) Atmospheric Inhomogeneities; e) Heating Mechanisms, Subcoronal Flows, and Flares; f) Development of Analysis and Modelling Tools.

  20. Use of Remote Sensing and Dust Modelling to Evaluate Ecosystem Phenology and Pollen Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey C.; Sprigg, William A.; Watts, Carol; Shaw, Patrick

    2007-01-01

    The impact of pollen release and downwind concentrations can be evaluated utilizing remote sensing. Previous NASA studies of airborne dust prediction systems, such as PHAiRS (Public Health Applications in Remote Sensing), have determined that pollen forecasts and simulations are possible. By adapting the deterministic dust model used in PHAiRS (run as an in-line system with the National Weather Service operational forecast model) to simulate downwind dispersal of pollen, initializing the model with pollen source regions from MODIS, and assessing the results, a rapid prototype concept can be produced. We will present the results of our effort to develop a deterministic model for predicting and simulating pollen emission and downwind concentration, to study details of phenology and meteorology and their dependencies, and to assess the promise of a credible real-time forecast system to support public health and agricultural science and service. Previous studies have addressed PHAiRS research, the use of NASA data, the dust model, and the potential of PHAiRS to improve public health and environmental services long into the future.

  1. The Chancellor's Model School Project (CMSP)

    NASA Technical Reports Server (NTRS)

    Lopez, Gil

    1999-01-01

    What does it take to create and implement a 7th to 8th grade middle school program where the great majority of students achieve at high academic levels regardless of their previous elementary school backgrounds? This was the major question that guided the research and development of a 7-year long project effort entitled the Chancellor's Model School Project (CMSP) from September 1991 to August 1998. The CMSP effort, conducted largely in two New York City public schools, was aimed at creating and testing a prototype 7th and 8th grade model program that was organized and test-implemented in two distinct project phases: Phase I of the CMSP effort was conducted from 1991 to 1995 as a 7th to 8th grade extension of an existing K-6 elementary school, and Phase II was conducted from 1995 to 1998 as a 7th to 8th grade middle school program that became an integral part of a newly established 7-12th grade high school. In Phase I, the CMSP demonstrated that with a highly structured curriculum coupled with strong academic support and increased learning time, students participating in the CMSP were able to develop a strong foundation for rigorous high school coursework within the space of 2 years (at the 7th and 8th grades). Mathematics and Reading test score data during Phase I of the project clearly indicated that significant academic gains were obtained by almost all students -- at both the high and low ends of the spectrum -- regardless of their previous academic performance in the K-6 elementary school experience. The CMSP effort expanded in Phase II to include a fully operating 7-12 high school model. Achievement gains at the 7th and 8th grade levels in Phase II were tempered by the fact that incoming 7th grade students' academic background at the CMSP High School was significantly lower than that of students participating in Phase I.
Student performance in Phase II was also affected by the broadening of the CMSP effort from a 7-8th grade program to a fully functioning 7-12 high school, which as a consequence lessened the focus and structure available to the 7-8th grade students and teachers -- as compared to Phase I. Nevertheless, the CMSP does represent a unique curriculum model for 7th and 8th grade students in urban middle schools. Experience in both Phase I and Phase II of the project allowed the CMSP to be developed and tested along the broad range of parameters and characteristics that embody an operating public school in an urban environment.

  2. A revised model of fluid transport optimization in Physarum polycephalum.

    PubMed

    Bonifaci, Vincenzo

    2017-02-01

    Optimization of fluid transport in the slime mold Physarum polycephalum has been the subject of several modeling efforts in recent literature. Existing models assume that the tube adaptation mechanism in P. polycephalum's tubular network is controlled by the sheer amount of fluid flow through the tubes. We put forward the hypothesis that the controlling variable may instead be the flow's pressure gradient along the tube. We carry out the stability analysis of such a revised mathematical model for a parallel-edge network, proving that the revised model supports the global flow-optimizing behavior of the slime mold for a substantially wider class of response functions compared to previous models. Simulations also suggest that the same conclusion may be valid for arbitrary network topologies.
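    The difference between the two hypotheses can be sketched for a two-edge parallel network; the linear response function and all parameters here are illustrative, not the paper's exact formulation:

```python
# Sketch of tube adaptation on a two-edge parallel network. Each tube's
# conductivity D_i relaxes toward a response to a controlling variable:
# the flow |Q_i| in the classical picture, or the pressure gradient
# |Q_i| / D_i in the revised one. Linear response and parameters are
# illustrative choices, not the paper's model.

def simulate(D, total_flow=1.0, steps=2000, dt=0.01, use_pressure_gradient=True):
    D = list(D)
    for _ in range(steps):
        total_D = sum(D)
        # a fixed total flow splits between parallel tubes by conductivity
        Q = [total_flow * d / total_D for d in D]
        for i in range(len(D)):
            s = abs(Q[i]) / D[i] if use_pressure_gradient else abs(Q[i])
            D[i] += dt * (s - D[i])   # dD/dt = f(s) - D with f(s) = s
    return D
```

    Under the pressure-gradient rule both tubes see the same controlling signal and their conductivities equalize; under the flow rule, with this linear response, the initial conductivity ratio is preserved while the total conductivity relaxes toward the imposed flow.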

  3. Do Effort and Reward at Work Predict Changes in Cognitive Function? First Longitudinal Results from the Representative German Socio-Economic Panel.

    PubMed

    Riedel, Natalie; Siegrist, Johannes; Wege, Natalia; Loerbroks, Adrian; Angerer, Peter; Li, Jian

    2017-11-15

    It has been suggested that work characteristics, such as mental demands, job control, and occupational complexity, are prospectively related to cognitive function. However, current evidence on links between psychosocial working conditions and cognitive change over time is inconsistent. In this study, we applied the effort-reward imbalance model, which allows us to build on previous research on mental demands and to introduce reward-based learning as a principle with a beneficial effect on cognitive function. We aimed to investigate whether high effort, high reward, and low over-commitment in 2006 were associated with positive changes in cognitive function in terms of perceptual speed and word fluency (2006-2012), and whether the co-manifestation of high effort and high reward would yield the strongest association. To this end, we used data on 1031 employees who participated in a large and representative study. Multivariate linear regression analyses supported our main hypotheses (separate and combined effects of effort and reward), particularly on changes in perceptual speed, whereas the effects of over-commitment did not reach the level of statistical significance. Our findings extend available knowledge by examining the course of cognitive function over time. If corroborated by further evidence, organization-based measures in the workplace can enrich efforts towards preventing cognitive decline in ageing workforces.
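    The central exposure variable can be sketched as it is commonly operationalized in Siegrist's model: an effort-to-reward ratio with a correction for the unequal numbers of effort and reward items. This is a generic sketch, not this study's exact scoring:

```python
# Effort-reward ratio as commonly operationalized in the effort-reward
# imbalance (ERI) model: effort score over reward score, with a
# correction factor for unequal item counts. Values above 1 indicate an
# unfavorable imbalance. A generic sketch, not this study's scoring.

def eri_ratio(effort_score, reward_score, n_effort_items, n_reward_items):
    correction = n_effort_items / n_reward_items
    return effort_score / (reward_score * correction)

def imbalanced(effort_score, reward_score, n_effort_items, n_reward_items):
    return eri_ratio(effort_score, reward_score, n_effort_items, n_reward_items) > 1.0
```

    For example, with 6 effort items and 11 reward items, equal mean item scores give a ratio of exactly 1, the balance point.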

  4. High-School Teachers’ Beliefs about Effort and Their Attitudes toward Struggling and Smart Students in a Confucian Society

    PubMed Central

    Chen, Shun-Wen; Fwu, Bih-Jen; Wei, Chih-Fen; Wang, Hsiou-Huai

    2016-01-01

    Previous studies conducted in Western societies showed that instructors’ beliefs about intellectual ability affected their attitudes toward students. However, in many East Asian societies influenced by Confucian culture, teachers not only hold beliefs of ability but also two kinds of beliefs about effort: obligation-oriented belief (i.e., believing that effort-making is a student’s role obligation) and improvement-oriented belief (i.e., believing that effort can conquer the limitations of one’s ability). This study aimed to investigate the relationships between teachers’ effort beliefs and their attitudes toward favoritism, praise, and expectations toward struggling and smart students. The participants were 151 Taiwanese high-school teachers. Results of Structural Equation Modeling showed that (1) teachers’ obligation-oriented belief about effort was positively correlated with their favoritism, praise, short-term and long-term expectations of struggling students, but negatively correlated with their favoritism and praise of smart students, (2) teachers’ improvement-oriented belief about effort was negatively correlated with their short-term expectation of smart students and favoritism of struggling students, but positively correlated with their praise of smart students, and (3) the entity theory of intelligence was negatively correlated with favoritism and praise of struggling students, but positively correlated with favoritism of smart students. The theoretical and cultural implications are discussed. PMID:27683565

  5. "Hot Spots" of Land Atmosphere Coupling

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Dirmeyer, Paul A.; Guo, Zhi-Chang; Bonan, Gordan; Chan, Edmond; Cox, Peter; Gordon, T. C.; Kanae, Shinjiro; Kowalczyk, Eva; Lawrence, David

    2004-01-01

    Previous estimates of land-atmosphere interaction (the impact of soil moisture on precipitation) have been limited by a severe paucity of relevant observational data and by the model-dependence of the various computational estimates. To counter this limitation, a dozen climate modeling groups have recently performed the same highly-controlled numerical experiment as part of a coordinated intercomparison project. This allows, for the first time ever, a superior multi-model approach to the estimation of the regions on the globe where precipitation is affected by soil moisture anomalies during Northern Hemisphere summer. Such estimation has many potential benefits; it can contribute, for example, to seasonal rainfall prediction efforts.

  6. Blur Clarified: A review and Synthesis of Blur Discrimination

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J.

    2011-01-01

    Blur is an important attribute of human spatial vision, and sensitivity to blur has been the subject of considerable experimental research and theoretical modeling. Often these models have invoked specialized concepts or mechanisms, such as intrinsic blur, multiple channels, or blur estimation units. In this paper we review the several experimental studies of blur discrimination and find they are in broad empirical agreement. But contrary to previous modeling efforts, we find that the essential features of blur discrimination are fully accounted for by a visible contrast energy model (ViCE), in which two spatial patterns are distinguished when the integrated difference between their masked local contrast energy responses reaches a threshold value.
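    The core idea, that two patterns are distinguished when their integrated contrast-energy difference reaches a threshold, can be sketched in one dimension. This simplified stand-in omits ViCE's masking stage, and the threshold value is hypothetical:

```python
import math

# 1-D sketch of a contrast-energy account of blur discrimination: a
# blurred edge is an ideal step convolved with a Gaussian of scale sigma,
# and two blurs are discriminable when the integrated squared difference
# between the edge profiles reaches a threshold. This is a simplified
# stand-in for the ViCE model (no masking; threshold is hypothetical).

def blurred_edge(x, sigma):
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def energy_difference(sigma_a, sigma_b, half_width=5.0, n=2001):
    """Integrated squared difference between two blurred-edge profiles."""
    dx = 2.0 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_width + i * dx
        d = blurred_edge(x, sigma_a) - blurred_edge(x, sigma_b)
        total += d * d * dx
    return total

def discriminable(sigma_a, sigma_b, threshold=1e-3):
    return energy_difference(sigma_a, sigma_b) >= threshold
```

    The energy difference grows with the blur difference, so a fixed energy threshold yields a blur-discrimination threshold that depends on the reference blur, the qualitative pattern the experimental literature reports.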

  7. LEWICE 2.2 Capabilities and Thermal Validation

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2002-01-01

    Computational models of bleed air anti-icing and electrothermal de-icing have been added to the LEWICE 2.0 software by integrating the capabilities of two previous programs, ANTICE and LEWICE/Thermal. This combined model has been released as LEWICE version 2.2. Several advancements have also been added to the previous capabilities of each module. This report will present the capabilities of the software package and provide results for both bleed air and electrothermal cases. A comprehensive validation effort has also been performed to compare the predictions to an existing electrothermal database. A quantitative comparison shows that for de-icing cases, the average difference is 9.4 F (26%) compared to 3 F for the experimental data, while for evaporative cases the average difference is 2 F (32%) compared to an experimental error of 4 F.

  8. FPA Depot - Web Application

    NASA Technical Reports Server (NTRS)

    Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam

    2011-01-01

    Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of software projects [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot, which uses function points instead of LOC for a better estimate of the hours needed to develop each piece of software. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
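    The function point idea can be sketched with the standard IFPUG average complexity weights. This is a generic simplification (no complexity grading and no value adjustment factor), not the FPA Depot's actual domain model, and the productivity rate is hypothetical:

```python
# Sketch of an unadjusted function point count using the standard IFPUG
# average complexity weights. Simplified: every function is assumed to be
# of average complexity, and no value adjustment factor is applied.

AVERAGE_WEIGHTS = {
    "external_input": 4,
    "external_output": 5,
    "external_inquiry": 4,
    "internal_logical_file": 10,
    "external_interface_file": 7,
}

def unadjusted_function_points(counts):
    """counts maps each function type to how many the application has,
    e.g. {"external_input": 3, "internal_logical_file": 2}."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

def estimated_hours(counts, hours_per_fp=8.0):
    """Convert function points to effort with a hypothetical productivity rate."""
    return unadjusted_function_points(counts) * hours_per_fp
```

    Because function points are counted from the user-visible functions in the requirements rather than from code size, they can be estimated before implementation begins, which is what makes them usable for planning.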

  9. Modeling the Water - Quality Effects of Changes to the Klamath River Upstream of Keno Dam, Oregon

    USGS Publications Warehouse

    Sullivan, Annett B.; Sogutlugil, I. Ertugrul; Rounds, Stewart A.; Deas, Michael L.

    2013-01-01

    The Link River to Keno Dam (Link-Keno) reach of the Klamath River, Oregon, generally has periods of water-quality impairment during summer, including low dissolved oxygen, elevated concentrations of ammonia and algae, and high pH. Efforts are underway to improve water quality in this reach through a Total Maximum Daily Load (TMDL) program and other management and operational actions. To assist in planning, a hydrodynamic and water-quality model was used in this study to provide insight about how various actions could affect water quality in the reach. These model scenarios used a previously developed and calibrated CE-QUAL-W2 model of the Link-Keno reach developed by the U.S. Geological Survey (USGS), Watercourse Engineering Inc., and the Bureau of Reclamation for calendar years 2006-09 (referred to as the "USGS model" in this report). Another model of the same river reach was previously developed by Tetra Tech, Inc. and the Oregon Department of Environmental Quality for years 2000 and 2002 and was used in the TMDL process; that model is referred to as the "TMDL model" in this report. This report includes scenarios that (1) assess the effect of TMDL allocations on water quality, (2) provide insight on certain aspects of the TMDL model, (3) assess various methods to improve water quality in this reach, and (4) examine possible water-quality effects of a future warmer climate. Results presented in this report for the first 5 scenarios supersede or augment those that were previously published (scenarios 1 and 2 in Sullivan and others [2011], 3 through 5 in Sullivan and others [2012]); those previous results are still valid, but the results for those scenarios in this report are more current.

  10. A two-dimensional vibration analysis of piezoelectrically actuated microbeam with nonideal boundary conditions

    NASA Astrophysics Data System (ADS)

    Rezaei, M. P.; Zamanian, M.

    2017-01-01

    In this paper, the influence of nonideal boundary conditions (due to support flexibility) on the primary resonant behavior of a piezoelectrically actuated microbeam has been studied for the first time. The structure has been modeled as an Euler-Bernoulli beam, considering the effects of geometric nonlinearity. In this work, the general nonideal supports have been modeled as a combination of horizontal, vertical, and rotational springs acting simultaneously. Assigning particular values to the stiffnesses of these springs yields the mathematical models for the majority of boundary conditions. This consideration leads to a two-dimensional analysis with the method of multiple scales, instead of the one-dimensional analysis used in previous works. If one neglects the nonideal effects, this paper amounts to an effort to solve the two-dimensional equations of motion directly, without combining them through the shortening or stretching effect. Setting the nonideal effects to zero and comparing the results with those of previous approaches demonstrates the accuracy of the two-dimensional solutions. The results identify the distinct effects of constraining and stiffening the boundaries in the horizontal, vertical, and rotational directions. This means that it is inaccurate to assume nonideality of the supports in only one or two of these directions, as in previous works. The findings are of vital importance for better prediction of the frequency response with nonideal supports. Furthermore, the main findings of this effort can help in choosing appropriate boundary conditions for the systems of interest.
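    Spring-supported boundary conditions of this kind typically take the following schematic form for an Euler-Bernoulli beam; the symbols and sign conventions here are illustrative, not the paper's exact equations:

```latex
% Schematic elastic-support boundary conditions at x = 0 for an
% Euler-Bernoulli beam with axial displacement u and transverse
% deflection w; k_h, k_v, k_r are the horizontal, vertical, and
% rotational spring stiffnesses (signs depend on the convention).
EA\,\frac{\partial u}{\partial x}\Big|_{x=0} = k_h\, u(0,t), \qquad
EI\,\frac{\partial^3 w}{\partial x^3}\Big|_{x=0} = -\,k_v\, w(0,t), \qquad
EI\,\frac{\partial^2 w}{\partial x^2}\Big|_{x=0} = k_r\,\frac{\partial w}{\partial x}\Big|_{x=0}.
```

    Letting a stiffness tend to infinity recovers the corresponding ideal constraint (e.g., k_r to infinity clamps the rotation), while letting it tend to zero frees it, which is how particular stiffness values reproduce the classical ideal boundary conditions.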

  11. Longitudinal Effects of Student-Perceived Classroom Support on Motivation – A Latent Change Model

    PubMed Central

    Lazarides, Rebecca; Raufelder, Diana

    2017-01-01

    This two-wave longitudinal study examined how developmental changes in students’ mastery goal orientation, academic effort, and intrinsic motivation were predicted by student-perceived support of motivational support (support for autonomy, competence, and relatedness) in secondary classrooms. The study extends previous knowledge that showed that support for motivational support in class is related to students’ intrinsic motivation as it focused on the developmental changes of a set of different motivational variables and the relations of these changes to student-perceived motivational support in class. Thus, differential classroom effects on students’ motivational development were investigated. A sample of 1088 German students was assessed in the beginning of the school year when students were in grade 8 (Mean age = 13.70, SD = 0.53, 54% girls) and again at the end of the next school year when students were in grade 9. Results of latent change models showed a tendency toward decline in mastery goal orientation and a significant decrease in academic effort from grade 8 to 9. Intrinsic motivation did not decrease significantly across time. Student-perceived support of competence in class predicted the level and change in students’ academic effort. The findings emphasized that it is beneficial to create classroom learning environments that enhance students’ perceptions of competence in class when aiming to enhance students’ academic effort in secondary school classrooms. PMID:28382012

  12. Longitudinal Effects of Student-Perceived Classroom Support on Motivation - A Latent Change Model.

    PubMed

    Lazarides, Rebecca; Raufelder, Diana

    2017-01-01

    This two-wave longitudinal study examined how developmental changes in students' mastery goal orientation, academic effort, and intrinsic motivation were predicted by student-perceived support of motivational support (support for autonomy, competence, and relatedness) in secondary classrooms. The study extends previous knowledge that showed that support for motivational support in class is related to students' intrinsic motivation as it focused on the developmental changes of a set of different motivational variables and the relations of these changes to student-perceived motivational support in class. Thus, differential classroom effects on students' motivational development were investigated. A sample of 1088 German students was assessed in the beginning of the school year when students were in grade 8 (Mean age = 13.70, SD = 0.53, 54% girls) and again at the end of the next school year when students were in grade 9. Results of latent change models showed a tendency toward decline in mastery goal orientation and a significant decrease in academic effort from grade 8 to 9. Intrinsic motivation did not decrease significantly across time. Student-perceived support of competence in class predicted the level and change in students' academic effort. The findings emphasized that it is beneficial to create classroom learning environments that enhance students' perceptions of competence in class when aiming to enhance students' academic effort in secondary school classrooms.

  13. Locomotion with Loads: Practical Techniques for Predicting Performance Outcomes

    DTIC Science & Technology

    2014-05-01

    out running velocities by 13 and 18% for all-out 80- and 400-meter runs. More recently, Alcaraz et al. (2008) reported only 3% reductions in brief...induced decrements in all-out sprint running speeds to be predicted to within 6.0% in both laboratory and field settings. Respective load-carriage...model. Objective Two: Sprint Running Speed Previous Scientific Efforts: The scientific literature on the basis of brief, all-out running

  14. Thermal performance modeling of NASA's scientific balloons

    NASA Astrophysics Data System (ADS)

    Franco, H.; Cathey, H.

    The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment: the balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach or simplified representations of the balloon. These approaches have to date provided reasonable, but not very accurate, results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" extension to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the model resolution required to produce usable data. The first-stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine the required model resolution. These models were then modified to include additional details such as load tapes. The second-stage analyses looked at natural-shaped Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin-shaped balloons.
The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first order estimates and detailed full models. The development of the radiative environment and program input files, the development of the modeling techniques for balloons, and the development of appropriate data output handling techniques for both the raw data and data plots will be discussed. A general guideline to match predicted balloon performance with known flight data will also be presented. One long-term goal of this effort is to develop simplified approaches and techniques to include results in performance codes being developed.

  15. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between the number of available code levels and decoding robustness is optimized jointly in terms of printing and measurement effort and of robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
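    The idea of spreading code levels apart in CIE Lab space can be sketched as a greedy farthest-point selection. The function below is a hypothetical illustration, not the paper's actual optimization: it skips the device-characterization step and uses plain Euclidean Lab distance as a stand-in for ΔE*ab.

    ```python
    import numpy as np

    def select_code_levels(candidate_labs, k):
        """Greedy farthest-point selection: pick k code levels whose
        pairwise CIE Lab (Euclidean) distances are large, so measurement
        noise is less likely to flip one level into another."""
        labs = np.asarray(candidate_labs, dtype=float)
        chosen = [0]  # seed with the first candidate
        while len(chosen) < k:
            # distance of every candidate to its nearest already-chosen level
            d = np.min(np.linalg.norm(labs[:, None, :] - labs[None, chosen, :],
                                      axis=-1), axis=1)
            d[chosen] = -1.0  # never re-pick a chosen level
            chosen.append(int(np.argmax(d)))
        return sorted(chosen)
    ```

    For eleven candidates spaced evenly along the L* axis, picking three levels returns the two endpoints plus the midpoint, which is the intuitively maximally separated set.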

  16. Photoactivated methods for enabling cartilage-to-cartilage tissue fixation

    NASA Astrophysics Data System (ADS)

    Sitterle, Valerie B.; Roberts, David W.

    2003-06-01

    The present study investigates whether photoactivated attachment of cartilage can provide a viable method for more effective repair of damaged articular surfaces by providing an alternative to sutures, barbs, or fibrin glues for initial fixation. Unlike artificial materials, biological constructs do not possess the initial strength for press-fitting and are instead sutured or pinned in place, typically inducing even more tissue trauma. A possible alternative involves the application of a photosensitive material, which is then photoactivated with a laser source to attach the implant and host tissues together in either a photothermal or photochemical process. The photothermal version of this method shows potential, but has been applied almost entirely to vascularized tissues. Cartilage, however, exhibits several characteristics that make applying and refining these techniques appreciably different from previous efforts involving vascularized tissues. Preliminary investigations involving photochemical photosensitizers based on singlet oxygen and electron transfer mechanisms are discussed, and the photodynamic effects on bulk collagen gels, used as a simplified model system, are characterized using FTIR. Previous efforts using photothermal welding applied to cartilaginous tissues are reviewed.

  17. Models and theories of prescribing decisions: A review and suggested a new model

    PubMed Central

    Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory rather than a theoretical approach. Therefore, this review is an attempt to suggest a conceptual model that explains the theoretical linkages existing between marketing efforts, the patient, the pharmacist, and the physician's decision to prescribe drugs. The paper follows an inclusive review approach and applies the previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, it identifies and uses several valuable perspectives, such as the ‘persuasion theory - elaboration likelihood model’, the ‘stimuli–response marketing model’, the ‘agency theory’, the ‘theory of planned behaviour’, and ‘social power theory’, in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research. PMID:28690701

  18. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and from targeted experimental studies, and performing an empirical validation of key model predictions.

  19. Verification of Small Hole Theory for Application to Wire Chafing Resulting in Shield Faults

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2011-01-01

    Our work is focused upon developing methods for wire chafe fault detection through the use of reflectometry to assess shield integrity. When shielded electrical aircraft wiring first begins to chafe, the resulting evidence is typically small holes in the shielding. We are developing the algorithms and signal processing necessary to detect these small holes before the inner conductors incur damage. Our approach has been to develop a first-principles physics model combined with probabilistic inference, and to verify this model with laboratory experiments as well as through simulation. Previously we presented the electromagnetic small-hole theory and how it might be applied to coaxial cable. In this presentation, we present our efforts to verify this theoretical approach with high-fidelity electromagnetic simulations (COMSOL). Laboratory observations are used to parameterize the computationally efficient theoretical model with probabilistic inference, resulting in quantification of hole size and location. Our efforts in characterizing faults in coaxial cable are subsequently leading to fault detection in shielded twisted pair, as well as analysis of intermittent faulty connectors using similar techniques.

  20. Basic Modeling of the Solar Atmosphere and Spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene; Wagner, William J. (Technical Monitor)

    2003-01-01

    This grant supported the research and publication of a major 26-page paper in The Astrophysical Journal, by Fontenla, Avrett, & Loeser (2002): 'Energy Balance in the Solar Transition Region. IV. Hydrogen and Helium Mass Flows with Diffusion.' This paper extended our previous modeling of the chromosphere-corona transition region to include cases with particle and mass flows. Inflows and outflows were shown to produce striking changes in the profiles of hydrogen and helium lines. An important conclusion is that line shifts are much less significant than the changes in line intensity and central reversal due to the influence of flows on the excitation and ionization of atoms in the solar atmosphere. This modeling effort at SAO is the only current one being undertaken anywhere to simulate in detail the full range of non-LTE absorption, emission, and scattering processes in the solar atmosphere to account for the entire solar spectrum from radio waves to X-rays. This effort is being continued with internal SAO funding at a relatively slow pace. Further NASA support in the future would yield results of great value for the interpretation of solar observations from NASA spacecraft.

  1. Generalization of Filament Braiding Model for Amyloid Fibril Assembly

    NASA Astrophysics Data System (ADS)

    Pope, Maighdlin; Ionescu-Zanetti, Cristian; Khurana, Ritu; Carter, Sue

    2001-03-01

    Research into the formation of amyloid fibrils is motivated by their association with several prominent diseases, among these Alzheimer's Disease, Parkinson's Disease, and amyloidosis. Previous work monitoring the aggregation of immunoglobulin light chains to form amyloid fibrils suggests a braided structure in which filaments and protofibrils wind together to form Type I and Type II fibrils. Non-contact atomic force microscopy is used to image and explore the kinetics of several other amyloid fibril-forming proteins in an effort to generalize the filament braiding model. Included in this study are insulin and the B1 domain of protein G, both of which have been shown to form fibrils in vitro. Alpha-synuclein, which is involved in the formation of Lewy bodies in Parkinson's Disease, is also included. The fourth protein used in this comparative study is human amylin, which is the cause of a systemic amyloidosis. Results from these four proteins and their associated fibrils are compared to the Ig light chain fibril structure in an effort to show the universality of the filament braiding model.

  2. Progress in Validation of Wind-US for Ramjet/Scramjet Combustion

    NASA Technical Reports Server (NTRS)

    Engblom, William A.; Frate, Franco C.; Nelson, Chris C.

    2005-01-01

    Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled, internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve simulation efforts are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.

  3. CFD Analysis of Emissions for a Candidate N+3 Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud

    2015-01-01

    An effort was undertaken to analyze the performance of a model Lean-Direct Injection (LDI) combustor designed to meet emissions and performance goals for NASA's N+3 program. Computational predictions of Emissions Index (EINOx) and combustor exit temperature were obtained for operation at typical power conditions expected of a small-core, high pressure-ratio (greater than 50), high T3 inlet temperature (greater than 950K) N+3 combustor. Reacting-flow computations were performed with the National Combustion Code (NCC) for a model N+3 LDI combustor, which consisted of a nine-element LDI flame-tube derived from a previous-generation (N+2) thirteen-element LDI design. A consistent approach to mesh optimization, spray modeling, and kinetics modeling was used, in order to leverage the lessons learned from previous N+2 flame-tube analysis with the NCC. The NCC predictions for the current, non-optimized N+3 combustor indicated a 74% increase in NOx emissions as compared to that of the emissions-optimized, parent N+2 LDI combustor.

  4. New approaches for sampling and modeling native and exotic plant species richness

    USGS Publications Warehouse

    Chong, G.W.; Reich, R.M.; Kalkhan, M.A.; Stohlgren, T.J.

    2001-01-01

    We demonstrate new multi-phase, multi-scale approaches for sampling and modeling native and exotic plant species to predict the spread of invasive species and aid in control efforts. Our test site is a 54,000-ha portion of Rocky Mountain National Park, Colorado, USA. This work is based on previous research wherein we developed vegetation sampling techniques to identify hot spots of diversity, important rare habitats, and locations of invasive plant species. Here we demonstrate statistical modeling tools to rapidly assess current patterns of native and exotic plant species to determine which habitats are most vulnerable to invasion by exotic species. We use stepwise multiple regression and modified residual kriging to estimate numbers of native species and exotic species, as well as probability of observing an exotic species in 30 × 30-m cells. Final models accounted for 62% of the variability observed in number of native species, 51% of the variability observed in number of exotic species, and 47% of the variability associated with observing an exotic species. Important independent variables used in developing the models include geographical location, elevation, slope, aspect, and Landsat TM bands 1-7. These models can direct resource managers to areas in need of further inventory, monitoring, and exotic species control efforts.
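    The stepwise multiple regression stage can be illustrated with a minimal forward-selection sketch. `forward_stepwise` is a hypothetical stand-in (the paper's exact selection criterion and the residual-kriging stage are not reproduced here): it greedily adds whichever predictor most improves R² until no predictor helps.

    ```python
    import numpy as np

    def forward_stepwise(X, y, max_vars=5, tol=1e-6):
        """Forward stepwise OLS: greedily add the predictor column that
        most improves R^2 (illustrative stand-in for a stepwise multiple
        regression of species counts on terrain / Landsat TM predictors)."""
        n, p = X.shape
        selected, remaining = [], list(range(p))

        def r2(cols):
            # OLS fit with intercept on the chosen columns
            A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

        best = 0.0
        while remaining and len(selected) < max_vars:
            score, c = max((r2(selected + [c]), c) for c in remaining)
            if score - best < tol:  # stop when no real improvement
                break
            best = score
            selected.append(c)
            remaining.remove(c)
        return selected, best
    ```

    On synthetic data where the response depends on a single column, the routine selects that column first and stops once R² stops improving; in the paper's setting the residuals of such a fit would then be kriged.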

  5. Representing annotation compositionality and provenance for the Semantic Web

    PubMed Central

    2013-01-01

    Background Though the annotation of digital artifacts with metadata has a long history, the bulk of that work focuses on the association of single terms or concepts to single targets. As annotation efforts expand to capture more complex information, annotations will need to be able to refer to knowledge structures formally defined in terms of more atomic knowledge structures. Existing provenance efforts in the Semantic Web domain primarily focus on tracking provenance at the level of whole triples and do not provide enough detail to track how individual triple elements of annotations were derived from triple elements of other annotations. Results We present a task- and domain-independent ontological model for capturing annotations and their linkage to their denoted knowledge representations, which can be singular concepts or more complex sets of assertions. We have implemented this model as an extension of the Information Artifact Ontology in OWL and made it freely available, and we show how it can be integrated with several prominent annotation and provenance models. We present several application areas for the model, ranging from linguistic annotation of text to the annotation of disease-associations in genome sequences. Conclusions With this model, progressively more complex annotations can be composed from other annotations, and the provenance of compositional annotations can be represented at the annotation level or at the level of individual elements of the RDF triples composing the annotations. This in turn allows for progressively richer annotations to be constructed from previous annotation efforts, the precise provenance recording of which facilitates evidence-based inference and error tracking. PMID:24268021

  6. Tightly Integrating Optical And Inertial Sensors For Navigation Using The UKF

    DTIC Science & Technology

    2008-03-01

    832. September 2004. 3. Brown, Robert Grover and Patrick Y.C. Hwang. Introduction to Random Signals and Applied Kalman Filtering. John Wiley and Sons...effectiveness of fusing imaging and inertial sensors using an Extended Kalman Filter (EKF) algorithm has been shown in previous research efforts. In this...model assumed by the EKF. In order to cope with divergence problem, the Unscented (Sigma-Point) Kalman Filter (UKF) has been proposed in the literature in

  7. Cross Directorate Proposal: Nanostructured Materials for Munitions and Propellants-Production, Modeling, and Characterization

    DTIC Science & Technology

    2016-07-15

    towards hydration and decomposition along with probing their hydration mechanisms, we are now exploring processing and deposition effects for this...oxidizer films and tested for their reactivity. Hydration Mechanism for HI3O8 → HIO3 Previous efforts by our group investigating the hydration ...mechanism of I2O5 → HI3O8 reflected that the hydration mechanism proceeded through a nucleation and growth process followed by a diffusion limited

  8. Breakthroughs in Low Profile Leaky Wave HPM Antennas

    DTIC Science & Technology

    2016-10-17

    3D RF modeling, but the design time and effort will be greatly reduced compared to starting from scratch. The LWAs featured here exhibit beam...Section 4 present related and novel antenna designs that leverage some of the concepts from this research program. Section 5 and Section 6 present...parameters that we used previously for the wire-grill design in Figure 3, but this time with the intent to combine it with an acrylic (εr=2.55) window of

  9. Locomotion with Loads: Practical Techniques for Predicting Performance Outcomes

    DTIC Science & Technology

    2013-05-01

    Lotens (1992) who reported that a load equal to 21% of body weight reduced all-out running velocities by 13 and 18% for all-out 80- and 400-meter runs...hypothesize second that the speed-load carriage algorithms will allow load-induced decrements in all-out sprint running speeds to be predicted to within...1968; Santee et al., 2001) may then be explored in the context of the model. Objective Two: Sprint Running Speed Previous Scientific Efforts

  10. Terrain and refractivity effects on non-optical paths

    NASA Astrophysics Data System (ADS)

    Barrios, Amalia E.

    1994-07-01

    The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has previously been shown to compare favorably with measured data and other existing terrain models, two alternative methods to model radiowave propagation over terrain, implemented within TPEM, will be presented that give a two- to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.
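    One range march of a split-step parabolic-equation solver alternates a Fourier-domain diffraction step with a refractive phase screen. The sketch below is a minimal free-space illustration of that scheme; it is not TPEM itself, and the terrain handling and speed-ups discussed in the abstract are omitted.

    ```python
    import numpy as np

    def ssp_step(u, dz, k0, dx, n_refr):
        """One split-step parabolic-equation march in range:
        diffraction applied in the spectral domain, then a refraction
        phase screen applied in the spatial domain."""
        N = u.size
        p = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # transverse wavenumbers
        U = np.fft.fft(u)
        U *= np.exp(-1j * p**2 * dz / (2 * k0))     # narrow-angle diffraction
        u = np.fft.ifft(U)
        u *= np.exp(1j * k0 * (n_refr - 1.0) * dz)  # refractivity phase screen
        return u
    ```

    Because both factors are unit-modulus phases and the FFT pair is norm-preserving, a free-space step (n_refr = 1) conserves the field's energy, which is a convenient sanity check on any SSPE implementation.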

  11. Test Results for Entry Guidance Methods for Space Vehicles

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Jones, Robert E.

    2004-01-01

    There are a number of approaches to advanced guidance and control that have the potential for achieving the goals of significantly increasing reusable launch vehicle (or any space vehicle that enters an atmosphere) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future vehicle concepts.

  12. Test Results for Entry Guidance Methods for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Jones, Robert E.

    2003-01-01

    There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies (ITAGCT) has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future reusable vehicle concepts.

  13. The VMAT-2 Inhibitor Tetrabenazine Affects Effort-Related Decision Making in a Progressive Ratio/Chow Feeding Choice Task: Reversal with Antidepressant Drugs

    PubMed Central

    Randall, Patrick A.; Lee, Christie A.; Nunes, Eric J.; Yohn, Samantha E.; Nowak, Victoria; Khan, Bilal; Shah, Priya; Pandit, Saagar; Vemuri, V. Kiran; Makriyannis, Alex; Baqi, Younis; Müller, Christa E.; Correa, Merce; Salamone, John D.

    2014-01-01

    Behavioral activation is a fundamental feature of motivation, and organisms frequently make effort-related decisions based upon evaluations of reinforcement value and response costs. Furthermore, people with major depression and other disorders often show anergia, psychomotor retardation, fatigue, and alterations in effort-related decision making. Tasks measuring effort-based decision making can be used as animal models of the motivational symptoms of depression, and the present studies characterized the effort-related effects of the vesicular monoamine transporter-2 (VMAT-2) inhibitor tetrabenazine. Tetrabenazine induces depressive symptoms in humans, and also preferentially depletes dopamine (DA). Rats were assessed using a concurrent progressive ratio (PROG)/chow feeding task, in which they can either lever press on a PROG schedule for preferred high-carbohydrate food, or approach and consume a less-preferred lab chow that is freely available in the chamber. Previous work has shown that the DA antagonist haloperidol reduced PROG work output on this task, but did not reduce chow intake, effects that differed substantially from those of reinforcer devaluation or appetite suppressant drugs. The present work demonstrated that tetrabenazine produced an effort-related shift in responding on the PROG/chow procedure, reducing lever presses, highest ratio achieved, and time spent responding, but not reducing chow intake. Similar effects were produced by administration of the subtype-selective DA antagonists ecopipam (D1) and eticlopride (D2), but not by the cannabinoid CB1 receptor neutral antagonist and putative appetite suppressant AM 4413, which suppressed both lever pressing and chow intake. The adenosine A2A antagonist MSX-3, the antidepressant and catecholamine uptake inhibitor bupropion, and the MAO-B inhibitor deprenyl all reversed the impairments induced by tetrabenazine.
This work demonstrates the potential utility of the PROG/chow procedure as a rodent model of the effort-related deficits observed in depressed patients. PMID:24937131

  14. The VMAT-2 inhibitor tetrabenazine affects effort-related decision making in a progressive ratio/chow feeding choice task: reversal with antidepressant drugs.

    PubMed

    Randall, Patrick A; Lee, Christie A; Nunes, Eric J; Yohn, Samantha E; Nowak, Victoria; Khan, Bilal; Shah, Priya; Pandit, Saagar; Vemuri, V Kiran; Makriyannis, Alex; Baqi, Younis; Müller, Christa E; Correa, Merce; Salamone, John D

    2014-01-01

    Behavioral activation is a fundamental feature of motivation, and organisms frequently make effort-related decisions based upon evaluations of reinforcement value and response costs. Furthermore, people with major depression and other disorders often show anergia, psychomotor retardation, fatigue, and alterations in effort-related decision making. Tasks measuring effort-based decision making can be used as animal models of the motivational symptoms of depression, and the present studies characterized the effort-related effects of the vesicular monoamine transporter-2 (VMAT-2) inhibitor tetrabenazine. Tetrabenazine induces depressive symptoms in humans, and also preferentially depletes dopamine (DA). Rats were assessed using a concurrent progressive ratio (PROG)/chow feeding task, in which they can either lever press on a PROG schedule for preferred high-carbohydrate food, or approach and consume a less-preferred lab chow that is freely available in the chamber. Previous work has shown that the DA antagonist haloperidol reduced PROG work output on this task, but did not reduce chow intake, effects that differed substantially from those of reinforcer devaluation or appetite suppressant drugs. The present work demonstrated that tetrabenazine produced an effort-related shift in responding on the PROG/chow procedure, reducing lever presses, highest ratio achieved, and time spent responding, but not reducing chow intake. Similar effects were produced by administration of the subtype-selective DA antagonists ecopipam (D1) and eticlopride (D2), but not by the cannabinoid CB1 receptor neutral antagonist and putative appetite suppressant AM 4413, which suppressed both lever pressing and chow intake. The adenosine A2A antagonist MSX-3, the antidepressant and catecholamine uptake inhibitor bupropion, and the MAO-B inhibitor deprenyl all reversed the impairments induced by tetrabenazine.
This work demonstrates the potential utility of the PROG/chow procedure as a rodent model of the effort-related deficits observed in depressed patients.

  15. Formal verification of a microcoded VIPER microprocessor using HOL

    NASA Technical Reports Server (NTRS)

    Levitt, Karl; Arora, Tejkumar; Leung, Tony; Kalvala, Sara; Schubert, E. Thomas; Windley, Philip; Heckman, Mark; Cohen, Gerald C.

    1993-01-01

    The Royal Signals and Radar Establishment (RSRE) and members of the Hardware Verification Group at Cambridge University conducted a joint effort to prove the correspondence between the electronic block model and the top level specification of Viper. Unfortunately, the proof became too complex and unmanageable within the given time and funding constraints, and is thus incomplete as of the date of this report. This report describes an independent attempt to use the HOL (Cambridge Higher Order Logic) mechanical verifier to verify Viper. Deriving from recent results in hardware verification research at UC Davis, the approach has been to redesign the electronic block model to make it microcoded and to structure the proof in a series of decreasingly abstract interpreter levels, the lowest being the electronic block level. The highest level is the RSRE Viper instruction set. Owing to the new approach and some results on the proof of generic interpreters as applied to simple microprocessors, this attempt required an effort approximately an order of magnitude less than the previous one.

  16. Bighorn sheep habitat studies, population dynamics, and population modeling in Bighorn Canyon National Recreation Area, Wyoming and Montana, 2000-2003

    USGS Publications Warehouse

    Singer, Francis J.; Schoenecker, Kathryn A.

    2004-01-01

    The bighorn sheep population of the greater Bighorn Canyon National Recreation Area (BICA) was extirpated in the 1800s, and then reintroduced in 1973. The herd increased to a peak population of about 211 animals (Kissell and others, 1996), but then declined sharply in 1995 and 1996. Causes for the decline were unknown. Numbers have remained around 100 ± 20 animals since 1998. Previous modeling efforts determined what areas were suitable bighorn sheep habitat (Gudorf and others, 1996). We tried to determine why sheep were not using areas that were modeled as suitable or acceptable habitat, and to evaluate population dynamics of the herd.

  17. Economic model for QoS guarantee on the Internet

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Wei, Jiaolong

    2001-09-01

    This paper describes a QoS guarantee architecture suited to best-effort environments, based on ideas from microeconomics and non-cooperative game theory. First, an analytic model is developed to study resource allocation in the Internet. We then show that a simple pricing mechanism (simple from both the network-implementation and the user's point of view) can provide QoS guarantees at the per-flow level without resource allocation, complicated scheduling mechanisms, or per-flow state in the core network. Unlike previous work in this area, we extend the basic model to support inelastic applications, which require minimum bandwidth guarantees for a given time period, by introducing a derivatives market.

  18. Rényi entropy and Lempel-Ziv complexity of mechanomyographic recordings of diaphragm muscle as indexes of respiratory effort.

    PubMed

    Torres, Abel; Fiz, Jose A; Jane, Raimon; Laciar, Eric; Galdiz, Juan B; Gea, Joaquim; Morera, Josep

    2008-01-01

    The study of the mechanomyographic (MMG) signals of respiratory muscles is a promising technique for evaluating respiratory muscle effort. A new approach for quantifying the relationship between respiratory MMG signals and respiratory effort is presented, analyzing the spatio-temporal patterns in the MMG signal with two non-linear methods: Rényi entropy and Lempel-Ziv (LZ) complexity analysis. Both methods are well suited to the analysis of non-stationary biomedical signals of short length. In this study, MMG signals of the diaphragm muscle acquired by means of a capacitive accelerometer applied on the costal wall were analyzed. The method was tested on an animal model: the diaphragmatic MMG signal was recorded continuously while two non-anesthetized mongrel dogs performed a spontaneous ventilation protocol with an incremental inspiratory load. The performance of the two methods in discriminating high and low respiratory effort levels was evaluated with the Pearson correlation coefficient between the MMG parameters and respiratory effort parameters extracted from the inspiratory pressure signal. The results show that the Rényi entropy and LZ complexity of the MMG signal increase as respiratory effort increases. Compared with other parameters analyzed in previous works, both the Rényi entropy and LZ complexity indexes demonstrate better performance in all the signals analyzed. Our results suggest that these non-linear techniques are useful for detecting and quantifying changes in respiratory effort from MMG respiratory signals.

  19. Modelling HIV/AIDS epidemics in sub-Saharan Africa using seroprevalence data from antenatal clinics.

    PubMed Central

    Salomon, J. A.; Murray, C. J.

    2001-01-01

    OBJECTIVE: To improve the methodological basis for modelling the HIV/AIDS epidemics in adults in sub-Saharan Africa, with examples from Botswana, Central African Republic, Ethiopia, and Zimbabwe. Understanding the magnitude and trajectory of the HIV/AIDS epidemic is essential for planning and evaluating control strategies. METHODS: Previous mathematical models were developed to estimate epidemic trends based on sentinel surveillance data from pregnant women. In this project, we have extended these models in order to take full advantage of the available data. We developed a maximum likelihood approach for the estimation of model parameters and used numerical simulation methods to compute uncertainty intervals around the estimates. FINDINGS: In the four countries analysed, there were an estimated half a million new adult HIV infections in 1999 (range: 260 to 960 thousand), 4.7 million prevalent infections (range: 3.0 to 6.6 million), and 370 thousand adult deaths from AIDS (range: 266 to 492 thousand). CONCLUSION: While this project addresses some of the limitations of previous modelling efforts, an important research agenda remains, including the need to clarify the relationship between sentinel data from pregnant women and the epidemiology of HIV and AIDS in the general population. PMID:11477962
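The estimation strategy described above, maximum-likelihood fitting of epidemic-model parameters followed by simulation-based uncertainty intervals, can be sketched on a toy example. The logistic prevalence curve, grid-search MLE, synthetic sentinel data, and parametric bootstrap below are generic illustrations under stated assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(1985, 2000)
n_tested = np.full(years.size, 400)          # women tested per clinic-year

def prevalence(t, r, t0=1990, p_max=0.35):
    """Assumed logistic epidemic curve; r is the growth rate to estimate."""
    return p_max / (1.0 + np.exp(-r * (t - t0)))

r_true = 0.5
positives = rng.binomial(n_tested, prevalence(years, r_true))

grid = np.linspace(0.05, 1.5, 300)

def fit_mle(pos):
    """Grid-search maximum-likelihood estimate of r (binomial likelihood)."""
    lls = [np.sum(pos * np.log(prevalence(years, r)) +
                  (n_tested - pos) * np.log(1.0 - prevalence(years, r)))
           for r in grid]
    return float(grid[int(np.argmax(lls))])

r_hat = fit_mle(positives)

# Simulation-based uncertainty: refit on data simulated from the fitted
# model and take percentiles of the refits as the uncertainty interval.
boot = [fit_mle(rng.binomial(n_tested, prevalence(years, r_hat)))
        for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The percentile interval (lo, hi) plays the role of the uncertainty intervals the abstract reports around its estimates.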

  20. Effects of video modelling on emerging speech in an adult with traumatic brain injury: preliminary findings.

    PubMed

    Nikopoulos, Christos K; Nikopoulou-Smyrni, Panagiota; Konstantopoulos, Kostas

    2013-01-01

    Research has shown that traumatic brain injury (TBI) can affect a person's ability to perform previously learned skills. Dysexecutive syndrome and inattention, for example, alongside a number of other cognitive and behavioural impairments such as memory loss and lack of motivation, significantly affect day-to-day functioning following TBI. This study examined the efficacy of video modelling for emerging speech in an adult male with TBI caused by an assault. In an effort to identify functional relations between this novel intervention and the target behaviour, experimental control was achieved by using within-subject research methodology, overcoming the difficulty of forming groups from such a highly non-homogeneous population. Across a number of conditions, the participant watched a videotape in which another adult modelled a selection of 19 spoken words. Once this modelled behaviour was performed in vivo, generalization took place across 76 other words in the absence of a videotape. It was revealed that video modelling can promote the performance of previously learned behaviours related to speech, but more significantly it can facilitate the generalization of this verbal behaviour across untrained words. Video modelling could well be added to rehabilitation programmes for this population.

  1. Dual measurement self-sensing technique of NiTi actuators for use in robust control

    NASA Astrophysics Data System (ADS)

    Gurley, Austin; Lambert, Tyler Ross; Beale, David; Broughton, Royall

    2017-10-01

    Using a shape memory alloy actuator as both an actuator and a sensor provides huge benefits in cost reduction and miniaturization of robotic devices. Despite much effort, reliable and robust self-sensing (using the actuator as a position sensor) has not been achieved for general temperature, loading, hysteresis path, and fatigue conditions. Prior research has sought to model the intricacies of the electrical resistivity changes within the NiTi material. However, for the models to be solvable, nearly every previous technique models the actuator only within very specific boundary conditions. Here, we measure the voltage both across the entire NiTi wire and across a fixed-length segment of it; these dual measurements allow direct calculation of the actuator length without a material model. We review the previous self-sensing literature, illustrate the mechanism design that makes the new technique possible, and use the dual measurement technique to determine the length of a single straight wire actuator under controlled conditions. This robust measurement can be used for feedback control in unknown ambient and loading conditions.
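A minimal sketch of the dual-measurement idea, assuming the same series current flows through the whole wire and the reference segment and that resistance per unit length is uniform along the wire: under those assumptions the voltage ratio gives the total length directly, with no material model. The function and the numbers are hypothetical illustrations, not taken from the paper.

```python
def estimate_length(v_total, v_segment, segment_length_mm):
    """Actuator length from the ratio of the two voltage measurements,
    assuming uniform resistance per unit length along the wire."""
    if v_segment <= 0:
        raise ValueError("segment voltage must be positive")
    return segment_length_mm * (v_total / v_segment)

# Hypothetical example: a 10 mm reference segment drops 0.25 V while the
# whole wire drops 2.0 V, giving an estimated total length of 80 mm.
length_mm = estimate_length(2.0, 0.25, 10.0)
```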

  2. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous, above-mentioned successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, significantly outperforming all the methods previously used for these benchmark problems. PMID:17081289

  3. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous, above-mentioned successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, significantly outperforming all the methods previously used for these benchmark problems.
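The scatter search methodology named in the two records above can be sketched generically: keep a small reference set of good solutions, combine pairs of them along connecting lines, and retain the best of the combined points. This is a minimal illustration on a toy one-parameter calibration problem, not the authors' algorithm; the population sizes, combination rule, and toy model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy calibration problem: recover the decay constant k from synthetic
# exponential-decay data (k_true = 0.8).
t = np.linspace(0, 5, 50)
k_true = 0.8
data = np.exp(-k_true * t)

def sse(theta):
    """Sum-of-squares mismatch between model output and data."""
    return float(np.sum((np.exp(-theta[0] * t) - data) ** 2))

def scatter_search(cost, lo, hi, n_diverse=30, ref_size=5, iters=40):
    # 1. Diverse initial population; the best few form the reference set.
    pop = rng.uniform(lo, hi, size=(n_diverse, 1))
    ref = sorted(pop, key=cost)[:ref_size]
    for _ in range(iters):
        # 2. Combine reference pairs along the line joining them.
        children = []
        for i in range(ref_size):
            for j in range(i + 1, ref_size):
                lam = rng.uniform(-0.5, 1.5)
                children.append(np.clip(ref[i] + lam * (ref[j] - ref[i]), lo, hi))
        # 3. Reference-set update: keep the overall best solutions.
        ref = sorted(ref + children, key=cost)[:ref_size]
    return ref[0]

best = scatter_search(sse, 0.0, 5.0)
# best[0] should land close to the true decay constant 0.8.
```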

  4. Enhanced Core Noise Modeling for Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.

    2011-01-01

    This report describes work performed by MTC Technologies (MTCT) for NASA Glenn Research Center (GRC) under Contract NAS3-00178, Task Order No. 15. MTCT previously developed a first-generation empirical model that correlates the core/combustion noise of four GE engines, the CF6, CF34, CFM56, and GE90, for General Electric (GE) under Contract No. 200-1X-14W53048, in support of GRC Contract NAS3-01135. MTCT has demonstrated in earlier noise modeling efforts that the improvement of predictive modeling is greatly enhanced by an iterative approach, so in support of NASA's Quiet Aircraft Technology Project, GRC sponsored this effort to improve the model. Since the noise data available for correlation are total engine noise spectra, it is total engine noise that must be predicted. Since the scope of this effort was not sufficient to explore fan and turbine noise, the most meaningful comparisons must be restricted to frequencies below the blade passage frequency. Below the blade passage frequency and at relatively high power settings jet noise is expected to be the dominant source, and comparisons are shown that demonstrate the accuracy of the jet noise model recently developed by MTCT for NASA under Contract NAS3-00178, Task Order No. 10. At lower power settings the core noise became most apparent, and these data, corrected for the contribution of jet noise, were then used to establish the characteristics of core noise. There is clearly more than one spectral range where core noise is evident, so the spectral approach developed by von Glahn and Krejsa in 1982, wherein four spectral regions overlap, was used in the GE effort. Further analysis indicates that the two higher frequency components, which are often somewhat masked by turbomachinery noise, can be treated as one component, and it is on that basis that the current model is formulated. The frequency scaling relationships are improved and are now based on combustor and core nozzle geometries. 
In conjunction with the Task Order No. 10 jet noise model, this core noise model is shown to provide statistical accuracy comparable to that of the jet noise model for frequencies below blade passage. This model is incorporated in the NASA FOOTPR code, and a user's guide is provided.

  5. Complementary roles of different oscillatory activities in the subthalamic nucleus in coding motor effort in Parkinsonism.

    PubMed

    Tan, Huiling; Pogosyan, Alek; Anzak, Anam; Ashkan, Keyoumars; Bogdanovic, Marko; Green, Alexander L; Aziz, Tipu; Foltynie, Thomas; Limousin, Patricia; Zrinzo, Ludvic; Brown, Peter

    2013-10-01

    The basal ganglia may play an important role in the control of motor scaling or effort. Recently, local field potential (LFP) recordings from patients with deep brain stimulation electrodes in the basal ganglia have suggested that local increases in the synchronisation of neurons in the gamma frequency band may correlate with force or effort. Whether this feature uniquely codes for effort and whether such a coding mechanism holds true over a range of efforts is unclear. Here we investigated the relationship between frequency-specific oscillatory activities in the subthalamic nucleus (STN) and manual grips made with different efforts. The latter were self-rated using the 10-level Borg scale ranging from 0 (no effort) to 10 (maximal effort). STN LFP activities were recorded in patients with Parkinson's Disease (PD) who had undergone functional surgery. Patients were studied while motor performance was improved by dopaminergic medication. In line with previous studies, we observed power increases in the theta/alpha band (4-12 Hz), power suppression in the beta band (13-30 Hz), and power increases in the gamma band (55-90 Hz) and high-frequency band (101-375 Hz) during voluntary grips. Beta suppression deepened, and then reached a floor level, as effort increased. Conversely, gamma and high-frequency power increases were enhanced during grips made with greater effort. Multiple regression models incorporating the four different spectral changes confirmed that the modulation of power in the beta band was the only independent predictor of effort during grips made with efforts rated <5. In contrast, increases in gamma band activity were the only independent predictor of effort during grips made with efforts ≥5. Accordingly, the difference between power changes in the gamma and beta bands correlated with effort across all effort levels. These findings suggest complementary roles for changes in beta and gamma band activities in the STN in motor effort coding. 
The latter function is thought to be impaired in untreated PD where task-related reactivity in these two bands is deficient. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  6. SLS Navigation Model-Based Design Approach

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. 
The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable, whereas a requirement is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design are described from the perspective of the SLS Navigation Team. The formats of the models and the requirements are described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned associated with the implementation of the Model-based Design approach and process from infancy to verification and certification are discussed.

  7. Spatial variability in nutrient transport by HUC8, state, and subbasin based on Mississippi/Atchafalaya River Basin SPARROW models

    USGS Publications Warehouse

    Robertson, Dale M.; Saad, David A.; Schwarz, Gregory E.

    2014-01-01

    Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. With geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and monitored loads throughout the MARB, SPAtially Referenced Regression On Watershed attributes (SPARROW) watershed models were constructed specifically for the MARB, which reduced simulation errors from previous models. Based on these models, N loads/yields were highest from the central part (centered over Iowa and Indiana) of the MARB (Corn Belt), and the highest P yields were scattered throughout the MARB. Spatial differences in yields from previous studies resulted from different descriptions of the dominant sources (N yields are highest with crop-oriented agriculture and P yields are highest with crop and animal agriculture and major WWTPs) and different descriptions of downstream transport. Delivered loads/yields from the MARB SPARROW models are used to rank subbasins, states, and eight-digit Hydrologic Unit Code basins (HUC8s) by N and P contributions and then rankings are compared with those from other studies. Changes in delivered yields result in an average absolute change of 1.3 (N) and 1.9 (P) places in state ranking and 41 (N) and 69 (P) places in HUC8 ranking from those made with previous national-scale SPARROW models. This information may help managers decide where efforts could have the largest effects (highest ranked areas) and thus reduce hypoxia in the Gulf of Mexico.

  8. Forecasting paediatric malaria admissions on the Kenya Coast using rainfall.

    PubMed

    Karuri, Stella Wanjugu; Snow, Robert W

    2016-01-01

    Malaria is a vector-borne disease which, despite recent scaled-up efforts to achieve control in Africa, continues to pose a major threat to child survival. The disease is caused by the protozoan parasite Plasmodium and requires mosquitoes and humans for transmission. Rainfall is a major factor in seasonal and secular patterns of malaria transmission along the East African coast. The goal of the study was to develop a model to reliably forecast incidences of paediatric malaria admissions to Kilifi District Hospital (KDH). In this article, we apply several statistical models to look at the temporal association between monthly paediatric malaria hospital admissions, rainfall, and Indian Ocean sea surface temperatures. Trend and seasonally adjusted, marginal and multivariate, time-series models for hospital admissions were applied to a unique data set to examine the role of climate, seasonality, and long-term anomalies in predicting malaria hospital admission rates and whether these might become more or less predictable with increasing vector control. The proportion of paediatric admissions to KDH that have malaria as a cause of admission can be forecast by a model which depends on the proportion of malaria admissions in the previous 2 months. This model is improved by incorporating either the previous month's Indian Ocean Dipole information or the previous 2 months' rainfall. Surveillance data can help build time-series prediction models which can be used to anticipate seasonal variations in clinical burdens of malaria in stable transmission areas and aid the timing of malaria vector control.
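The kind of lagged model the record above describes, next month's admission proportion predicted from the previous two months' proportions plus the previous two months' rainfall, can be sketched with an ordinary least-squares fit. The synthetic data, coefficients, and generating process below are assumptions for illustration only, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120                                     # ten years of monthly records
months = np.arange(n)
rain = 50 + 40 * np.sin(2 * np.pi * months / 12) + 5 * rng.standard_normal(n)
prop = np.zeros(n)
prop[:2] = 0.3
for m in range(2, n):                       # synthetic generating process
    prop[m] = (0.05 + 0.4 * prop[m - 1] + 0.2 * prop[m - 2]
               + 0.002 * rain[m - 1] + 0.001 * rain[m - 2]
               + 0.01 * rng.standard_normal())

# Design matrix: intercept, two lags of the proportion, two lags of rainfall.
X = np.column_stack([np.ones(n - 2), prop[1:-1], prop[:-2], rain[1:-1], rain[:-2]])
y = prop[2:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def forecast_next(last_two_props, last_two_rain):
    """One-step-ahead forecast from the two most recent months."""
    p1, p2 = last_two_props                 # months t-1 and t-2
    r1, r2 = last_two_rain
    return beta @ np.array([1.0, p1, p2, r1, r2])
```

Dropping the rainfall columns from `X` gives the simpler two-lag model the abstract starts from; adding them plays the role of the rainfall (or Indian Ocean Dipole) improvement.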

  9. The Influence of Chronic Ego Depletion on Goal Adherence: An Experience Sampling Study.

    PubMed

    Wang, Ligang; Tao, Ting; Fan, Chunlei; Gao, Wenbin; Wei, Chuguang

    2015-01-01

    Although ego depletion effects have been widely observed in experiments in which participants perform consecutive self-control tasks, the process of ego depletion remains poorly understood. Using the strength model of self-control, we hypothesized that chronic ego depletion adversely affects goal adherence and that mental effort and motivation are involved in the process of ego depletion. In this study, 203 students reported their daily performance, mental effort, and motivation with respect to goal-directed behavior across a 3-week time period. People with high levels of chronic ego depletion were less successful in goal adherence than those with less chronic ego depletion. Although daily effort devoted to goal adherence increased with chronic ego depletion, motivation to adhere to goals was not affected. Participants with high levels of chronic ego depletion showed a stronger positive association between mental effort and performance, but chronic ego depletion did not play a regulatory role in the effect of motivation on performance. Chronic ego depletion increased the likelihood of behavior-regulation failure, suggesting that it is difficult for people in an ego-depletion state to adhere to goals. We integrate our results with the findings of previous studies and discuss possible theoretical implications.

  10. The Influence of Chronic Ego Depletion on Goal Adherence: An Experience Sampling Study

    PubMed Central

    Wang, Ligang; Tao, Ting; Fan, Chunlei; Gao, Wenbin; Wei, Chuguang

    2015-01-01

    Although ego depletion effects have been widely observed in experiments in which participants perform consecutive self-control tasks, the process of ego depletion remains poorly understood. Using the strength model of self-control, we hypothesized that chronic ego depletion adversely affects goal adherence and that mental effort and motivation are involved in the process of ego depletion. In this study, 203 students reported their daily performance, mental effort, and motivation with respect to goal-directed behavior across a 3-week time period. People with high levels of chronic ego depletion were less successful in goal adherence than those with less chronic ego depletion. Although daily effort devoted to goal adherence increased with chronic ego depletion, motivation to adhere to goals was not affected. Participants with high levels of chronic ego depletion showed a stronger positive association between mental effort and performance, but chronic ego depletion did not play a regulatory role in the effect of motivation on performance. Chronic ego depletion increased the likelihood of behavior-regulation failure, suggesting that it is difficult for people in an ego-depletion state to adhere to goals. We integrate our results with the findings of previous studies and discuss possible theoretical implications. PMID:26562839

  11. Need for recovery from work and sleep-related complaints among nursing professionals.

    PubMed

    Silva-Costa, Aline; Griep, Rosane Harter; Fischer, Frida Marina; Rotenberg, Lúcia

    2012-01-01

    The concept of need for recovery from work (NFR) was deduced from the effort-recuperation model. In this model, work produces costs in terms of effort during the working day. When there is enough time and there are enough possibilities to recuperate, a worker will arrive at the next working day with no residual symptoms of previous effort. NFR reflects work characteristics such as psychosocial demands and professional work hours or schedules. However, sleep may be an important part of the recovery process. The aim of the study was to test the association between sleep-related complaints and NFR. A cross-sectional study was carried out at three hospitals. All female nursing professionals engaged in patient care were invited to participate (N = 1,307). Participants answered a questionnaire that included four sleep-related complaints (insomnia, unsatisfactory sleep, sleepiness during work hours and insufficient sleep), work characteristics, and the NFR scale. Binomial logistic regression analysis showed that all sleep-related complaints are associated with a high need for recovery from work. Those who reported insufficient sleep showed a greater chance of high need for recovery: OR = 2.730 (95% CI 2.074-3.593). These results corroborate the hypothesis that sleep is an important aspect of the recovery process and, therefore, should be thoroughly investigated.

  12. What predicts dissemination efforts among public health researchers in the United States?

    PubMed

    Tabak, Rachel G; Stamatakis, Katherine A; Jacobs, Julie A; Brownson, Ross C

    2014-01-01

    We identified factors related to dissemination efforts by researchers to non-research audiences to reduce the gap between research generation and uptake in public health practice. We conducted a cross-sectional study of 266 researchers at universities, the National Institutes of Health (NIH), and CDC. We identified scientists using a search of public health journals and lists from government-sponsored research. The scientists completed a 35-item online survey in 2012. Using multivariable logistic regression, we compared self-rated effort to disseminate findings to non-research audiences (excellent/good vs. poor) across predictor variables in three categories: perceptions or reasons to disseminate, perceived expectation by employer/funders, and professional training and experience. One-third of researchers rated their dissemination efforts as poor. Many factors were significantly related to whether a researcher rated him/herself as excellent/good, including obligation to disseminate findings (odds ratio [OR] = 2.7, 95% confidence interval [CI] 1.1, 6.8), dissemination important for their department (OR=2.3, 95% CI 1.2, 4.5), dissemination expected by employer (OR=2.0, 95% CI 1.2, 3.2) or by funder (OR=2.1, 95% CI 1.3, 3.2), previous work in a practice/policy setting (OR=4.4, 95% CI 2.1, 9.3), and university researchers with Prevention Research Center affiliation vs. NIH researchers (OR=4.7, 95% CI 1.4, 15.7). With all variables in the model, dissemination expected by funder (OR=2.0, 95% CI 1.2, 3.1) and previous work in a practice/policy setting (OR=3.5, 95% CI 1.7, 7.1) remained significant. These findings support the need for structural changes to the system, including funding agency priorities and participation of researchers in practice- and policy-based experiences, which may enhance researchers' dissemination efforts.
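For readers unfamiliar with the odds ratios quoted above, here is a minimal sketch of how an OR is computed from a 2x2 exposure-by-outcome table. The counts below are fabricated for illustration and are not the study's data.

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """OR = (a/b) / (c/d) for a 2x2 exposure-by-outcome table."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical table: rows = worked in a practice/policy setting (yes/no),
# columns = self-rated dissemination effort (excellent-good / poor).
or_practice = odds_ratio(60, 15, 90, 100)
# (60/15) / (90/100) = 4.0 / 0.9, i.e. roughly 4.4
```

A multivariable logistic regression, as used in the study, reports the same quantity per predictor after adjusting for the other variables (the exponentiated regression coefficient).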

  13. Population modeling for furbearer management

    USGS Publications Warehouse

    Johnson, D.H.; Sanderson, G.C.

    1982-01-01

    The management of furbearers has become increasingly complex as greater demands are placed on their populations. Correspondingly, needs for information to use in management have increased. Inadequate information leads the manager to err on the conservative side; unless the size of the 'harvestable surplus' is known, the population cannot be fully exploited. Conversely, information beyond what is needed becomes an unaffordable luxury. Population modeling has proven useful for organizing information on numerous game animals. Modeling serves to determine if information of the right kind and proper amount is being gathered; systematizes data collection, data interpretation, and decision making; and permits more effective management and better utilization of game populations. This report briefly reviews the principles of population modeling, describes what has been learned from previous modeling efforts on furbearers, and outlines the potential role of population modeling in furbearer management.

  14. User Manual for the NASA Glenn Ice Accretion Code LEWICE. Version 2.2.2

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2002-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report presents a description of the code inputs and outputs for version 2.2.2 of this code, which is called LEWICE. This version differs from release 2.0 by the addition of advanced thermal analysis capabilities for de-icing and anti-icing applications using electrothermal heaters or bleed air. An extensive effort was also undertaken to compare the results against the database of electrothermal results generated in the NASA Glenn Icing Research Tunnel (IRT), as was done for the version 2.0 validation effort. This report primarily describes the features of the software related to the use of the program. Appendix A lists some of the inner workings of the software and the physical models used; this information is also available in the form of several unpublished documents internal to NASA. This report is intended as a replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals is duplicated where necessary so that the user will not need to consult earlier manuals to use this code.

  15. Testing and Characterization of a Prototype Telescope for the Evolved Laser Interferometer Space Antenna (eLISA)

    NASA Technical Reports Server (NTRS)

    Sankar, S.; Livas, J.

    2016-01-01

    We describe our efforts to fabricate, test and characterize a prototype telescope for the eLISA mission. Much of our work has centered on the modeling and measurement of scattered light performance. This work also builds on a previous demonstration of a high dimensional stability metering structure using particular choices of materials and interfaces. We will discuss ongoing plans to merge these two separate demonstrations into a single telescope design demonstrating both stray light and dimensional stability requirements simultaneously.

  16. Putting Out the Fire in Afghanistan, The Fire Model of Counterinsurgency: Focusing Efforts to Make an Insurgency Unsustainable

    DTIC Science & Technology

    2009-11-30

    synthesizes the results from the previous chapters, determines the benefits of this new approach to planners and practitioners, and addresses concerns that...who has power and money, and what is required to receive those benefits . The U.S. spent unnecessary time and expense because it did not understand the...Completion Would Benefit Entire Region,"Radio Free Europe. October 10, 2007. http://www.rferl.org/content/article/1078916.html (accessed October 10

  17. Identifying western yellow-billed cuckoo breeding habitat with a dual modelling approach

    USGS Publications Warehouse

    Johnson, Matthew J.; Hatten, James R.; Holmes, Jennifer A.; Shafroth, Patrick B.

    2017-01-01

    The western population of the yellow-billed cuckoo (Coccyzus americanus) was recently listed as threatened under the federal Endangered Species Act. Yellow-billed cuckoo conservation efforts require the identification of features and area requirements associated with high-quality riparian forest habitat at spatial scales that range from nest microhabitat to landscape, as well as lower-suitability areas that can be enhanced or restored. Spatially explicit models inform conservation efforts by increasing ecological understanding of a target species, especially at landscape scales. Previous yellow-billed cuckoo modelling efforts derived plant-community maps from aerial photography, an expensive and oftentimes inconsistent approach. Satellite models can remotely map vegetation features (e.g., vegetation density, heterogeneity in vegetation density or structure) across large areas with near-perfect repeatability, but they usually cannot identify plant communities. We used aerial photos and satellite imagery, and a hierarchical spatial scale approach, to identify yellow-billed cuckoo breeding habitat along the Lower Colorado River and its tributaries. Aerial-photo and satellite models identified several key features associated with yellow-billed cuckoo breeding locations: (1) a 4.5 ha core area of dense cottonwood-willow vegetation, (2) a large native, heterogeneously dense forest (72 ha) around the core area, and (3) moderately rough topography. The odds of yellow-billed cuckoo occurrence decreased rapidly as the amount of tamarisk cover increased or when cottonwood-willow vegetation was limited. After updating the imagery and location data the following year, we achieved model accuracies of 75–80% in the project area. The two model types produced very similar probability maps, largely predicting the same areas as high quality habitat. 
While each model provided unique information, a dual-modelling approach provided a more complete picture of yellow-billed cuckoo habitat requirements and will be useful for management and conservation activities.

  18. 3D-Pharmacophore Identification for κ-Opioid Agonists Using Ligand-Based Drug-Design Techniques

    NASA Astrophysics Data System (ADS)

    Yamaotsu, Noriyuki; Hirono, Shuichi

    A selective κ-opioid receptor (KOR) agonist might act as a powerful analgesic without the side effects of μ-opioid receptor-selective drugs such as morphine. The eight classes of known KOR agonists have different chemical structures, making it difficult to construct a pharmacophore model that takes them all into account. Here, we summarize previous efforts to identify the pharmacophore for κ-opioid agonists and propose a new three-dimensional pharmacophore model that encompasses the κ-activities of all classes. This utilizes conformational sampling of agonists by high-temperature molecular dynamics and pharmacophore extraction through a series of molecular superpositions.

  19. Development of a liquid metal slip ring

    NASA Technical Reports Server (NTRS)

    Weinberger, S. M.

    1972-01-01

    A liquid metal slip ring/solar orientation mechanism was designed and a model tested. This was a follow-up to previous efforts to develop a gallium liquid metal slip ring, in which the major problem was the formation and ejection of debris. A number of slip ring design approaches were studied. The probe design concept was fully implemented with detail drawings, and a model was successfully tested for dielectric strength, shock, vibration, acceleration, and operation. The conclusions are that a gallium liquid metal slip ring/solar orientation mechanism is feasible and that the problem of debris formation and ejection has been solved.

  20. Modelling body weight, dieting and obesity traps

    NASA Astrophysics Data System (ADS)

    Barbieri, Paolo Nicola

    2017-02-01

    This paper presents a theoretical investigation into why losing weight is so difficult even in the absence of rational addiction, time-inconsistent preferences or bounded rationality. We add to the existing literature by focusing on the role that individual metabolism plays in weight loss. The theoretical model yields multiple steady states and a threshold, revealing a situation of "obesity traps" that the individual must surpass in order to lose weight successfully. Any weight-loss effort the individual undertakes must surpass this threshold to result in permanent weight loss; otherwise the individual will gradually regain weight and converge to his or her previous body weight.

  1. Modeling the Health and Economic Burden of Hepatitis C Virus in Switzerland.

    PubMed

    Müllhaupt, Beat; Bruggmann, Philip; Bihl, Florian; Blach, Sarah; Lavanchy, Daniel; Razavi, Homie; Semela, David; Negro, Francesco

    2015-01-01

    Chronic hepatitis C virus (HCV) infection is a major cause of liver disease in Switzerland and carries a significant cost burden. Currently, only conservative strategies are in place to mitigate the burden of hepatitis C in Switzerland. This study expands on previously described modeling efforts to explore the impact of two strategies: no treatment, and treatment to reduce hepatocellular carcinoma (HCC) and mortality. Hepatitis C disease progression and mortality, as well as the costs associated with untreated HCV, were modeled. Baseline historical assumptions were collected from the literature and expert interviews, and strategies were developed to show the impact of different levels of intervention (improved drug cure rates, treatment and diagnosis) until 2030. Under the historical standard of care, the number of advanced stage cases was projected to increase until 2030, at which point the annual economic burden of untreated viremic infections was projected to reach €96.8 (95% uncertainty interval: €36–€232) million. Scenarios to reduce HCV liver-related mortality by 90% by 2030 required treatment of 4,190 ≥F2 or 3,200 ≥F3 patients annually by 2018 using antivirals with a 95% efficacy rate. Delaying the implementation of these scenarios by 2 or 5 years reduced the impact on mortality to 75% and 57%, respectively. With today's treatment efficacy and uptake rates, hepatitis C disease burden is expected to increase through 2030. A substantial reduction in disease burden can be achieved by means of both higher-efficacy drugs and increased treatment uptake. However, these efforts cannot be undertaken without a simultaneous effort to diagnose more infections.

  2. Prediction of new ground-state crystal structure of Ta2O5

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Kawazoe, Yoshiyuki

    2018-03-01

    Tantalum pentoxide (Ta2O5) is a wide-gap semiconductor with important technological applications. Despite enormous efforts in both experimental and theoretical studies, the ground-state crystal structure of Ta2O5 has not yet been uniquely determined. Based on first-principles calculations in combination with an evolutionary algorithm, we identify a triclinic phase of Ta2O5 which is energetically much more stable than any phases or structural models reported previously. Characterization of the static and dynamical properties of the phase reveals the common features shared with previous metastable phases of Ta2O5. In particular, we show that the d spacing of ~3.8 Å found in the x-ray diffraction patterns of many previous experimental works is actually the radius of the second Ta-Ta coordination shell as defined by radial distribution functions.

  3. A Thermo-Poromechanics Finite Element Model for Predicting Arterial Tissue Fusion

    NASA Astrophysics Data System (ADS)

    Fankell, Douglas P.

    This work presents modeling efforts, and supplemental experimental work, performed toward the ultimate goal of modeling the heat transfer, mass transfer, and deformation occurring in biological tissue, in particular during arterial fusion and cutting. Developing accurate models of these processes accomplishes two goals. First, accurate models would enable engineers to design devices that are safer and less expensive. Second, the mechanisms behind tissue fusion and cutting are largely unknown; models that can accurately predict the physical phenomena occurring in the tissue will allow insight into the underlying mechanisms of these processes. This work presents three aims and the efforts in achieving them, leading to an accurate model of tissue fusion and, more broadly, of the thermo-poromechanics (TPM) occurring within biological tissue. Chapters 1 and 2 provide the motivation for developing accurate TPM models of biological tissue and an overview of previous modeling efforts. In Chapter 3, a coupled thermo-structural finite element (FE) model with the ability to predict arterial cutting is offered. From the work presented in Chapter 3, it became clear that a more detailed model was needed. Chapter 4 meets this need by presenting small-strain TPM theory and its implementation in an FE code. The model is then used to simulate thermal tissue fusion. These simulations show the model's promise in predicting the water content and temperature of arterial wall tissue during the fusion process, but it is limited by its small-deformation assumptions. Chapters 5-7 address this limitation by presenting a thermodynamically consistent, large-deformation TPM FE model and its ability to simulate tissue fusion. Ultimately, this work provides several methods of simulating arterial tissue fusion and the thermo-poromechanics of biological tissue. 
It is the first work, to the author's knowledge, to simulate the fully coupled TPM of biological tissue and the first to present a fully coupled large deformation TPM FE model. In doing so, a stepping stone for more advanced modeling of biological tissue has been laid.

  4. Climate Modeling and Causal Identification for Sea Ice Predictability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    This project aims to better understand the causes of ongoing changes in the Arctic climate system, particularly as decreasing sea ice trends have been observed in recent decades and are expected to continue in the future. As part of the Sea Ice Prediction Network, a multi-agency effort to improve sea ice prediction products on seasonal-to-interannual time scales, our team is studying the sensitivity of sea ice to a collection of physical processes and feedback mechanisms in the coupled climate system. During 2017 we completed a set of climate model simulations using the fully coupled ACME-HiLAT model. The simulations consisted of experiments in which cloud, sea ice, and air-ocean turbulent exchange parameters previously identified as important for driving output uncertainty in climate models were perturbed to account for parameter uncertainty in simulated climate variables. We conducted a sensitivity study of these parameters, which built upon a previous study we made for standalone simulations (Urrego-Blanco et al., 2016, 2017). Using the results from the ensemble of coupled simulations, we are examining robust relationships between climate variables that emerge across the experiments. We are also using causal discovery techniques to identify interaction pathways among climate variables, which can help identify physical mechanisms and provide guidance in predictability studies. This work further builds on and leverages the large ensemble of standalone sea ice simulations produced in our previous w14_seaice project.

  5. Application of simple mathematical expressions to relate the half-lives of xenobiotics in rats to values in humans.

    PubMed

    Ward, Keith W; Erhardt, Paul; Bachmann, Kenneth

    2005-01-01

    Previous publications from GlaxoSmithKline and University of Toledo laboratories convey our independent attempts to predict the half-lives of xenobiotics in humans using data obtained from rats. The present investigation was conducted to compare the performance of our published models against a common dataset obtained by merging the two sets of rat versus human half-life (hHL) data previously used by each laboratory. After combining the data, mathematical analyses were undertaken by deploying both of our previous models, namely an empirical algorithm based on a best-fit model and the use of rat-to-human liver blood flow ratios as a half-life correction factor. Both qualitative and quantitative analyses were performed, as well as an evaluation of the impact of molecular properties on predictability. The merged dataset was remarkably diverse with respect to physicochemical and pharmacokinetic (PK) properties. Application of both models revealed similar predictability, depending upon the measure of stipulated accuracy. Certain molecular features, particularly rotatable bond count and pK(a), appeared to influence the accuracy of prediction. This collaborative effort has resulted in an improved understanding and appreciation of the value of rats as a surrogate for the prediction of xenobiotic half-lives in humans when clinical pharmacokinetic studies are not possible or practicable.
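    The liver-blood-flow correction described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the per-kilogram hepatic blood flow values below are rough textbook approximations assumed for the example, and the paper's exact values may differ.

```python
# Sketch of the liver-blood-flow half-life correction: scale a rat
# half-life by the rat:human hepatic blood flow ratio (per kg body
# weight).  Flow values are assumed approximations for illustration.
RAT_LIVER_FLOW_ML_MIN_KG = 85.0     # assumed approximate value
HUMAN_LIVER_FLOW_ML_MIN_KG = 21.0   # assumed approximate value

def predicted_human_half_life(rat_half_life_h):
    """hHL = rat half-life * (rat flow / human flow), per-kg basis."""
    return rat_half_life_h * (RAT_LIVER_FLOW_ML_MIN_KG / HUMAN_LIVER_FLOW_ML_MIN_KG)

# A hypothetical drug with a 1.5 h half-life in rats:
print(round(predicted_human_half_life(1.5), 2))  # 6.07
```

    Because rats have a much higher liver blood flow per kilogram than humans, hepatically cleared drugs are predicted to persist several-fold longer in humans.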

  6. Obesity and severe obesity forecasts through 2030.

    PubMed

    Finkelstein, Eric A; Khavjou, Olga A; Thompson, Hope; Trogdon, Justin G; Pan, Liping; Sherry, Bettylou; Dietz, William

    2012-06-01

    Previous efforts to forecast future trends in obesity applied linear forecasts assuming that the rise in obesity would continue unabated. However, evidence suggests that obesity prevalence may be leveling off. This study presents estimates of adult obesity and severe obesity prevalence through 2030 based on nonlinear regression models. The forecasted results are then used to simulate the savings that could be achieved through modestly successful obesity prevention efforts. The study was conducted in 2009-2010 and used data from the 1990 through 2008 Behavioral Risk Factor Surveillance System (BRFSS). The analysis sample included nonpregnant adults aged ≥ 18 years. The individual-level BRFSS variables were supplemented with state-level variables from the U.S. Bureau of Labor Statistics, the American Chamber of Commerce Research Association, and the Census of Retail Trade. Future obesity and severe obesity prevalence were estimated through regression modeling by projecting trends in explanatory variables expected to influence obesity prevalence. Linear time trend forecasts suggest that by 2030, 51% of the population will be obese. The model estimates a much lower obesity prevalence of 42% and severe obesity prevalence of 11%. If obesity were to remain at 2010 levels, the combined savings in medical expenditures over the next 2 decades would be $549.5 billion. The study estimates a 33% increase in obesity prevalence and a 130% increase in severe obesity prevalence over the next 2 decades. If these forecasts prove accurate, this will further hinder efforts for healthcare cost containment. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Stratigraphic Mapping of Intra-Crater Layered Deposits in Arabia Terra from High-Resolution Imaging and Stereo Topography

    NASA Astrophysics Data System (ADS)

    Annex, A. M.; Lewis, K. W.; Edwards, C. S.

    2017-12-01

    The Arabia Terra region of Mars, located in the mid-latitudes, hosts a number of crater basins with exposed sedimentary layers and buttes. Our work builds upon previous studies of these sites that suggest that the layers are formed of weakly lithified aeolian material with quasi-periodic expressions explained by changes in planetary orbital elements during formation (Lewis and Aharonson, 2014; Cadieux and Kah, 2015; Stack et al., 2013). In an effort to better understand differences in lateral continuity of these layers, both between and within basins, an extensive mapping effort was conducted on several sites in Arabia Terra with HiRISE stereo targets. Digital terrain models produced using the Ames Stereo Pipeline were mapped to derive bedding plane positions and orientations for each stratum using linear regression. Bed thicknesses were derived from differences in dip-corrected elevation between successive strata. Our study includes additional independent mapping within craters analyzed in previous studies, and expands mapping of these deposits to several new craters in the region unique to this effort. Our sample size in this study is large, including over 700 individually measured strata from multiple sections within each crater. Although bed thicknesses are generally tightly distributed around 12 meters, any changes within a sequence could represent variations in either the dominant forcing factors controlling deposition and/or changes in sedimentation rate. If craters contain correlative sequences, these types of changes could serve as marker horizons across the region with further mapping.
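    The bedding-plane fitting step described above can be illustrated with a short sketch. This is not the authors' code: the plane model z = a*x + b*y + c, the least-squares fit, and the synthetic trace points are assumptions chosen to show how dip follows from the fitted gradient.

```python
# Illustrative sketch: fit a bedding plane z = a*x + b*y + c to (x, y, z)
# points traced on a digital terrain model, then recover the dip angle
# from the magnitude of the fitted gradient (a, b).
import math
import numpy as np

def fit_bedding_plane(points):
    """Least-squares plane fit; returns (a, b, c) for z = a*x + b*y + c."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # a, b, c

def dip_degrees(a, b):
    """Dip of the plane from horizontal, in degrees."""
    return math.degrees(math.atan(math.hypot(a, b)))

# Synthetic trace points lying exactly on a plane dipping in +x:
pts = [(0, 0, 10.0), (10, 0, 11.0), (0, 10, 10.0), (10, 10, 11.0)]
a, b, c = fit_bedding_plane(pts)
print(round(dip_degrees(a, b), 2))  # 5.71
```

    With a fitted plane in hand, a dip-corrected elevation for each point is z - (a*x + b*y), and bed thickness follows from differences in that quantity between successive strata.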

  8. Technologies for Nondestructive Evaluation of Surfaces and Thin Coatings

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This project included several related activities encompassing basic understanding, technological development, customer identification, and commercial transfer of several methodologies for nondestructive evaluation of surfaces and thin surface coatings. Consistent with the academic environment, students were involved in the effort, working with established investigators to further their training, provide a nucleus of experienced practitioners in the new technologies during their industrial introduction, and apply their talents to project goals. As will be seen in various portions of the report, some of the effort has led to commercialization. This process has spawned other efforts related to this project which are supported from outside sources. These activities now occupy some of the people who were previously supported within this grant and its predecessors. The most advanced of the supported technologies is thermography, for which previous joint efforts of the investigators and NASA researchers developed several techniques for extending the utility of straight thermographic inspection by producing methods of interpretation and analysis accessible to automatic image processing with computer data analysis. The effort reported for this technology has been to introduce the techniques to new user communities, who are then able to add to the effective uses of existing products with only slight development work. In a related development, analysis of a thermal measurement situation in past efforts led to a new insight into the behavior of simple temperature probes. This insight, previously reported only to the narrow community in which the particular measurement was made, was reported to the broader community of temperature measurement experts this year. In addition to the propagation of mature thermographic techniques, the development of a thermoelastic imaging system has been an important related advance. 
Part of the work carried out in the effort reported here has been to prepare reports introducing the newly commercially available thermoelastic measurements to the appropriate user communities.

  9. Modeling false positive detections in species occurrence data under different study designs.

    PubMed

    Chambert, Thierry; Miller, David A W; Nichols, James D

    2015-02-01

    The occurrence of false positive detections in presence-absence data, even when they occur infrequently, can lead to severe bias when estimating species occupancy patterns. Building upon previous efforts to account for this source of observational error, we established a general framework to model false positives in occupancy studies and extend existing modeling approaches to encompass a broader range of sampling designs. Specifically, we identified three common sampling designs that are likely to cover most scenarios encountered by researchers. The different designs all included ambiguous detections, as well as some known-truth data, but their modeling differed in the level of the model hierarchy at which the known-truth information was incorporated (site level or observation level). For each model, we provide the likelihood, as well as R and BUGS code needed for implementation. We also establish a clear terminology and provide guidance to help choosing the most appropriate design and modeling approach.
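    The core of such a model can be sketched as follows. This is a hedged illustration of the standard single-season false-positive occupancy likelihood from this literature, not the authors' R/BUGS code: psi is occupancy probability, p11 the detection probability at occupied sites, and p10 the false positive probability at unoccupied sites, with the parameter values below chosen arbitrarily.

```python
# Marginal likelihood of one site's detection history under a
# single-season occupancy model with false positives: the site is
# occupied with probability psi (detections are Bernoulli(p11)) or
# unoccupied with probability 1 - psi (detections are Bernoulli(p10)).
import math

def site_likelihood(detections, psi, p11, p10):
    """detections: list of 0/1 survey outcomes for one site."""
    def bernoulli_prod(p):
        out = 1.0
        for y in detections:
            out *= p if y == 1 else (1.0 - p)
        return out
    return psi * bernoulli_prod(p11) + (1.0 - psi) * bernoulli_prod(p10)

def neg_log_likelihood(histories, psi, p11, p10):
    """Sum over independent sites; minimize over (psi, p11, p10) to fit."""
    return -sum(math.log(site_likelihood(h, psi, p11, p10)) for h in histories)

# Two sites, three surveys each; illustrative parameter values:
histories = [[1, 0, 1], [0, 0, 0]]
nll = neg_log_likelihood(histories, psi=0.6, p11=0.7, p10=0.05)
```

    Without known-truth data, p10 and p11 are weakly identifiable; the designs discussed in the paper add unambiguous detections at the site or observation level precisely to anchor these parameters.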

  10. Numerical tests of local scale invariance in ageing q-state Potts models

    NASA Astrophysics Data System (ADS)

    Lorenz, E.; Janke, W.

    2007-01-01

    Much effort has been spent over the last years to achieve a coherent theoretical description of ageing as a non-linear dynamical process. Long supposed to be a consequence of the slow dynamics of glassy systems only, ageing phenomena have also been identified in the phase-ordering kinetics of simple ferromagnets. As a phenomenological approach, Henkel et al. developed a group of local scale transformations under which two-time autocorrelation and response functions should transform covariantly. This work extends previous numerical tests of the predicted scaling functions for the Ising model by Monte Carlo simulations of two-dimensional q-state Potts models with q=3 and 8, which, in equilibrium, undergo temperature-driven phase transitions of second and first order, respectively.

  11. Glass Property Models and Constraints for Estimating the Glass to be Produced at Hanford by Implementing Current Advanced Glass Formulation Efforts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Kim, Dong-Sang; Skorski, Daniel C.

    2013-07-01

    Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminary in nature and will change in the coming years. In addition, the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for the estimation of glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest they may be. Because of this approach, there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties, which has to be considered along with other system uncertainties such as waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.

  12. Replicating Health Economic Models: Firm Foundations or a House of Cards?

    PubMed

    Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee

    2017-11-01

    Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. 
To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.

  13. Qualitative validation of the reduction from two reciprocally coupled neurons to one self-coupled neuron in a respiratory network model.

    PubMed

    Dunmyre, Justin R

    2011-06-01

    The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.

  14. Aging in the colonial chordate, Botryllus schlosseri.

    PubMed

    Munday, Roma; Rodriguez, Delany; Di Maio, Alessandro; Kassmer, Susannah; Braden, Brian; Taketa, Daryl A; Langenbacher, Adam; De Tomaso, Anthony

    2015-01-30

    What mechanisms underlie aging? One theory, the wear-and-tear model, attributes aging to progressive deterioration in the molecular and cellular machinery, which eventually leads to death through the disruption of physiological homeostasis. The second suggests that life span is genetically programmed, and that aging may be derived from intrinsic processes which enforce a non-random, terminal time interval for the survivability of the organism. We are studying an organism that demonstrates both properties: the colonial ascidian, Botryllus schlosseri. Botryllus is a member of the Tunicata, the sister group to the vertebrates, and has a number of life history traits which make it an excellent model for studies on aging. First, Botryllus has a colonial life history, and grows by a process of asexual reproduction during which entire bodies, including all somatic and germline lineages, regenerate every week, resulting in a colony of genetically identical individuals. Second, previous studies of lifespan in genetically distinct Botryllus lineages suggest that a direct, heritable basis underlying mortality exists that is unlinked to reproductive effort and other life history traits. Here we will review recent efforts to take advantage of the unique life history traits of B. schlosseri and develop it into a robust model for aging research.

  15. Re-analysis of Alaskan benchmark glacier mass-balance data using the index method

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.

    2010-01-01

    At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass-balance time series from 1966 to 2009 with the goal of producing more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
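    The classical degree-day relation referenced above is simple enough to sketch directly. This is a minimal illustration, not the study's modernized model; the degree-day factor below is an arbitrary illustrative value, whereas the study derives site-specific factors empirically.

```python
# Classical degree-day ablation model: ablation is proportional to the
# sum of positive daily mean temperatures (positive degree-days, PDD),
# scaled by an empirical degree-day factor (DDF).
def positive_degree_days(daily_mean_temps_c):
    """Sum of daily mean temperatures above 0 degrees C."""
    return sum(max(t, 0.0) for t in daily_mean_temps_c)

def ablation_mm_we(daily_mean_temps_c, degree_day_factor=4.0):
    """Ablation in mm water equivalent: DDF * PDD.

    degree_day_factor is in mm w.e. per positive degree-day; the value
    4.0 is assumed here for illustration only.
    """
    return degree_day_factor * positive_degree_days(daily_mean_temps_c)

# One week of daily mean temperatures (degrees C):
temps = [-2.0, -0.5, 1.0, 3.0, 4.5, 2.5, 0.0]
print(ablation_mm_we(temps))  # 4.0 * (1.0 + 3.0 + 4.5 + 2.5) = 44.0
```

    Fitting the degree-day factor to observed stake ablation, separately for snow and ice surfaces, is the usual way such models are calibrated against a balance time series.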

  16. Meter-Scale 3-D Models of the Martian Surface from Combining MOC and MOLA Data

    NASA Technical Reports Server (NTRS)

    Soderblom, Laurence A.; Kirk, Randolph L.

    2003-01-01

    We have extended our previous efforts to derive, through controlled photoclinometry, accurate, calibrated, high-resolution topographic models of the martian surface. The process combines MGS MOLA topographic profiles with MGS MOC Narrow Angle (NA) images. The earlier work used, along with a particular MOC NA image, the MOLA topographic profile that was acquired simultaneously, in order to derive photometric and scattering properties of the surface and atmosphere so as to force the low spatial frequencies of a one-dimensional MOC photoclinometric model to match the MOLA profile. Both that work and the new results reported here depend heavily on successful efforts to: 1) refine the radiometric calibration of the MOC NA camera; 2) register the MOC to MOLA coordinate systems and refine the pointing; and 3) project simultaneously acquired MOC and MOLA data into a common coordinate system with a single set of SPICE kernels, utilizing the USGS ISIS cartographic image processing tools. The approach described in this paper extends the MOC-MOLA integration and cross-calibration procedures from one-dimensional profiles to full two-dimensional photoclinometry and image simulations. Included are methods to account for low-frequency albedo variations within the scene.

  17. Aging in the colonial chordate, Botryllus schlosseri

    PubMed Central

    Munday, Roma; Rodriguez, Delany; Di Maio, Alessandro; Kassmer, Susannah; Braden, Brian; Taketa, Daryl A.; Langenbacher, Adam; De Tomaso, Anthony

    2015-01-01

    What mechanisms underlie aging? One theory, the wear-and-tear model, attributes aging to progressive deterioration in the molecular and cellular machinery which eventually lead to death through the disruption of physiological homeostasis. The second suggests that life span is genetically programmed, and aging may be derived from intrinsic processes which enforce a non-random, terminal time interval for the survivability of the organism. We are studying an organism that demonstrates both properties: the colonial ascidian, Botryllus schlosseri. Botryllus is a member of the Tunicata, the sister group to the vertebrates, and has a number of life history traits which make it an excellent model for studies on aging. First, Botryllus has a colonial life history, and grows by a process of asexual reproduction during which entire bodies, including all somatic and germline lineages, regenerate every week, resulting in a colony of genetically identical individuals. Second, previous studies of lifespan in genetically distinct Botryllus lineages suggest that a direct, heritable basis underlying mortality exists that is unlinked to reproductive effort and other life history traits. Here we will review recent efforts to take advantage of the unique life history traits of B. schlosseri and develop it into a robust model for aging research. PMID:26136620

  18. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    A disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns. Progress is being made on a number of fronts outside of the Handbook effort: (1) processes, practices, and tools are being developed at some Centers and institutions; (2) management recognition is growing, as seen in the Constellation FM roles and the Discovery/New Frontiers mission reviews; (3) potential technology solutions offer new approaches that could avoid many current pitfalls, including (3a) new FM architectures, such as a model-based approach integrated with NASA's MBSE efforts, and (3b) NASA's Office of the Chief Technologist, which identified FM in seven of NASA's 14 Space Technology Roadmaps, an opportunity to coalesce and establish a thrust area to progressively develop new FM techniques. The FM Handbook will help ensure that future missions do not encounter the same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start.

  19. Mechanical Characterization and Micromechanical Modeling of Woven Carbon/Copper Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Pindera, Marek-Jerzy; Ellis, David L.; Miner, Robert V.

    1997-01-01

    The present investigation examines the in-plane mechanical behavior of a particular woven metal matrix composite (MMC); 8-harness (8H) satin carbon/copper (C/Cu). This is accomplished via mechanical testing as well as micromechanical modeling. While the literature is replete with experimental and modeling efforts for woven and braided polymer matrix composites, little work has been done on woven and braided MMC's. Thus, the development and understanding of woven MMC's is at an early stage. 8H satin C/Cu owes its existence to the high thermal conductivity of copper and low density and thermal expansion of carbon fibers. It is a candidate material for high heat flux applications, such as space power radiator panels. The experimental portion of this investigation consists of monotonic and cyclic tension, compression, and Iosipescu shear tests, as well as combined tension-compression tests. Tests were performed on composite specimens with three copper matrix alloy types: pure Cu, Cu-0.5 weight percent Ti (Cu-Ti), and Cu-0.7 weight percent Cr (Cu-Cr). The small alloying additions are present to promote fiber/matrix interfacial bonding. The analytical modeling effort utilizes an approach in which a local micromechanical model is embedded in a global micromechanical model. This approach differs from previously developed analytical models for woven composites in that a true repeating unit cell is analyzed. However, unlike finite element modeling of woven composites, the geometry is sufficiently idealized to allow efficient geometric discretization and efficient execution.

  20. Developing the Precision Magnetic Field for the E989 Muon g-2 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Matthias W.

    The experimental value of (g-2)_μ historically has been, and remains, an important probe of the Standard Model and proposed extensions. Previous measurements of (g-2)_μ exhibit a persistent statistical tension with Standard Model calculations, implying that the theory may be incomplete and constraining possible extensions. The Fermilab Muon g-2 experiment, E989, endeavors to increase the precision over previous experiments by a factor of four and probe more deeply into the tension with the Standard Model. The (g-2)_μ experimental implementation measures two spin precession frequencies defined by the magnetic field: proton precession and muon precession. The value of (g-2)_μ is derived from the relationship between the two frequencies. The precision of the magnetic field measurements and the overall magnetic field uniformity achieved over the muon storage volume are thus two critically important aspects of the experiment in minimizing uncertainty. This thesis details the methods employed to achieve the magnetic field goals and the results of that effort.

  1. Wake Vortex Advisory System (WakeVAS) Evaluation of Impacts on the National Airspace System

    NASA Technical Reports Server (NTRS)

    Smith, Jeremy C.; Dollyhigh, Samuel M.

    2005-01-01

    This report is one of a series that describes an ongoing effort in high-fidelity modeling/simulation, evaluation, and analysis of the benefits and performance metrics of the Wake Vortex Advisory System (WakeVAS) Concept of Operations being developed as part of the Virtual Airspace Modeling and Simulation (VAMS) project. A previous study determined the overall increases in runway arrival rates that could be achieved at 12 selected airports due to WakeVAS-reduced aircraft spacing under Instrument Meteorological Conditions. This study builds on the previous work to evaluate the NAS-wide impacts of equipping various numbers of airports with WakeVAS. A queuing network model of the National Airspace System (LMINET), built by the Logistics Management Institute, McLean, VA, for NASA, was used to estimate the reduction in delay that could be achieved by using WakeVAS under non-visual meteorological conditions for the projected air traffic demand in 2010. The results from LMINET were used to estimate the total annual delay reduction that could be achieved and, from this, an estimate of the air carrier variable operating cost saving was made.

  2. Modeling of Low Feed-Through CD Mix Implosions

    NASA Astrophysics Data System (ADS)

    Pino, Jesse; MacLaren, Steven; Greenough, Jeff; Casey, Daniel; Dittrich, Tom; Kahn, Shahab; Kyrala, George; Ma, Tammy; Salmonson, Jay; Smalyuk, Vladimir; Tipton, Robert

    2015-11-01

    The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the National Ignition Facility. However, the previous implosions suffered from large instability growth seeded from perturbations on the outside of the capsule. Recently, the separated reactants technique has been applied to two platforms designed to minimize this feed-through and isolate local mix at the gas-ablator interface: the Two Shock (TS) and Adiabat-Shaped (AS) Platforms. Additionally, the background contamination of Deuterium in the gas has been greatly reduced, allowing for simultaneous observation of TT, DT, and DD neutrons, which respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations with both a Reynolds-Averaged Navier Stokes method and an enhanced diffusivity model. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674867.

  3. Do Effort and Reward at Work Predict Changes in Cognitive Function? First Longitudinal Results from the Representative German Socio-Economic Panel

    PubMed Central

    Riedel, Natalie; Siegrist, Johannes; Wege, Natalia; Loerbroks, Adrian; Angerer, Peter; Li, Jian

    2017-01-01

    It has been suggested that work characteristics, such as mental demands, job control, and occupational complexity, are prospectively related to cognitive function. However, current evidence on links between psychosocial working conditions and cognitive change over time is inconsistent. In this study, we applied the effort–reward imbalance model, which allows us to build on previous research on mental demands and to introduce reward-based learning as a principle with a beneficial effect on cognitive function. We aimed to investigate whether high effort, high reward, and low over-commitment in 2006 were associated with positive changes in cognitive function in terms of perceptual speed and word fluency (2006–2012), and whether the co-manifestation of high effort and high reward would yield the strongest association. To this end, we used data on 1031 employees who participated in a large and representative study. Multivariate linear regression analyses supported our main hypotheses (separate and combined effects of effort and reward), particularly on changes in perceptual speed, whereas the effects of over-commitment did not reach the level of statistical significance. Our findings extend available knowledge by examining the course of cognitive function over time. If corroborated by further evidence, organization-based measures in the workplace can enrich efforts towards preventing cognitive decline in ageing workforces. PMID:29140258

  4. Evaluation of a Human Modeling Software Tool in the Prediction of Extra Vehicular Activity Tasks for an International Space Station Assembly Mission

    NASA Technical Reports Server (NTRS)

    Dischinger, H. Charles; Loughead, Tomas E.

    1997-01-01

    The difficulty of accomplishing work in extravehicular activity (EVA) is well documented. It arises as a result of motion constraints imposed by a pressurized spacesuit in a near-vacuum and of the frictionless environment induced in microgravity. The appropriate placement of foot restraints is crucial to ensuring that astronauts can remove and drive bolts, mate and demate connectors, and actuate levers. The location on structural members of the foot restraint sockets, to which the portable foot restraint is attached, must provide for an orientation of the restraint that affords the astronaut adequate visual and reach envelopes. Previously, the initial location of these sockets was dependent upon the experienced designer's ability to estimate placement. The design was tested in a simulated zero-gravity environment; spacesuited astronauts performed the tasks with mockups while submerged in water. Crew evaluation of the tasks based on these designs often indicated the bolt or other structure to which force needed to be applied was not within an acceptable work envelope, resulting in redesign. The development of improved methods for location of crew aids prior to testing would result in savings to the design effort for EVA hardware. Such an effort to streamline EVA design is especially relevant to International Space Station construction and maintenance. Assembly operations alone are expected to require in excess of four hundred hours of EVA. Thus, techniques which conserve design resources for assembly missions can have significant impact. We describe an effort to implement a human modelling application in the design effort for an International Space Station Assembly Mission. On Assembly Flight 6A, the Canadian-built Space Station Remote Manipulator System will be delivered to the U.S. Laboratory. It will be released from its launch restraints by astronauts in EVA. 
The design of the placement of foot restraint sockets was carried out using the human model Jack, and the modelling results were compared with actual underwater test results. The predicted locations of the sockets were found to be acceptable for 94% of the tasks attempted by the astronauts. This effort provides confidence in the capabilities of this package to accurately model tasks. It therefore increases assurance that the tool may be used early in the design process.

  5. Bayesian hierarchical Poisson models with a hidden Markov structure for the detection of influenza epidemic outbreaks.

    PubMed

    Conesa, D; Martínez-Beneito, M A; Amorós, R; López-Quílez, A

    2015-04-01

    Considerable effort has been devoted to the development of statistical algorithms for the automated monitoring of influenza surveillance data. In this article, we introduce a framework of models for the early detection of the onset of an influenza epidemic which is applicable to different kinds of surveillance data. In particular, the process of the observed cases is modelled via a Bayesian hierarchical Poisson model in which the intensity parameter is a function of the incidence rate. The key point is to model this incidence rate as normally distributed, with both parameters (mean and variance) modelled differently depending on whether the system is in an epidemic or non-epidemic phase. To do so, we propose a hidden Markov model in which the transition between the two phases is modelled as a function of the epidemic state of the previous week. Different options for modelling the rates are described, including modelling the mean at each phase as an autoregressive process of order 0, 1 or 2. Bayesian inference is carried out to provide the probability of being in an epidemic state at any given moment. The methodology is applied to various influenza data sets. The results indicate that our methods outperform previous approaches in terms of sensitivity, specificity and timeliness.
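    The generative structure this abstract describes can be illustrated with a minimal forward simulation: a two-phase hidden Markov chain whose transition depends only on the previous week's phase, with Poisson-distributed weekly counts. The rates and transition probabilities below are invented for illustration; the paper's model is fully Bayesian and infers these quantities rather than fixing them:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the moderate rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_weeks(n_weeks, p_onset=0.05, p_end=0.2,
                   rate_low=20.0, rate_high=120.0, seed=7):
    """Two-state hidden Markov chain (0 = non-epidemic, 1 = epidemic)
    with Poisson weekly case counts; the phase transition depends only
    on the previous week's phase."""
    rng = random.Random(seed)
    state, states, counts = 0, [], []
    for _ in range(n_weeks):
        if state == 0 and rng.random() < p_onset:
            state = 1
        elif state == 1 and rng.random() < p_end:
            state = 0
        states.append(state)
        counts.append(poisson(rng, rate_high if state else rate_low))
    return states, counts
```

    Inference in the paper runs the opposite direction: given the counts, recover the posterior probability of being in the epidemic state each week.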

  6. Improved electron collisional line broadening for low-temperature ions and neutrals in plasma modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johns, H. M.; Kilcrease, D. P.; Colgan, J.

    In this study, electron collisional broadening of observed spectral lines depends on plasma electron temperature and density. Including this effect in models of measured spectra is necessary to determine plasma conditions; however, computational limits make accurate line broadening treatments difficult to implement in large-scale plasma modeling efforts. In this paper, we report on improvements to the treatment of electron collisional line broadening and illustrate this with calculations using the Los Alamos ATOMIC code. We implement the Dimitrijevic and Konjevic modified semi-empirical model (1986 Astron. Astrophys. 163 297; 1987 Astron. Astrophys. 172 345), which we amend by employing oscillator strengths from Hartree–Fock calculations. This line broadening model applies to near-neutral plasmas with electron temperatures of T_e ~ 1 eV and electron densities of N_e ~ 10^17 cm^-3. We evaluate the D.K.-inspired model against the previous hydrogenic approach in ATOMIC through comparison to NIST-rated measurements for selected neutral and singly-ionized Ca, O, Fe, and Sn lines, using both fine-structure and configuration-averaged oscillator strengths. The new D.K.-inspired model is significantly more accurate than the previous hydrogenic model, and we find the use of configuration-averaged oscillator strengths a good approximation for applications such as LIBS (laser-induced breakdown spectroscopy), for which we demonstrate the use of the D.K.-inspired model.

  8. Analysis of CRRES PHA Data for Low-Energy-Deposition Events

    NASA Technical Reports Server (NTRS)

    McNulty, P. J.; Hardage, Donna

    2004-01-01

    This effort analyzed the low-energy-deposition Pulse Height Analyzer (PHA) data from the Combined Release and Radiation Effects Satellite (CRRES). The high-energy-deposition data had been previously analyzed and shown to be in agreement with spallation reactions predicted by the Clemson University Proton Interactions in Devices (CUPID) simulation model and existing environmental and orbit positioning models (AP-8 with USAF B-L coordinates). The scope of this project was to develop and improve the CUPID model by extending its range to lower incident particle energies, and to expand the modeling to include contributions from elastic interactions. Before making changes, it was necessary to identify experimental data suitable for benchmarking the codes; then the models could be applied to the CRRES PHA data. It was also planned to test the model against available low-energy proton or neutron SEU data obtained with mono-energetic beams.

  9. The frequency response of dynamic friction: Enhanced rate-and-state models

    NASA Astrophysics Data System (ADS)

    Cabboi, A.; Putelat, T.; Woodhouse, J.

    2016-07-01

    The prediction and control of friction-induced vibration requires a sufficiently accurate constitutive law for dynamic friction at the sliding interface: for linearised stability analysis, this requirement takes the form of a frictional frequency response function. Systematic measurements of this frictional frequency response function are presented for small samples of nylon and polycarbonate sliding against a glass disc. Previous efforts to explain such measurements from a theoretical model have failed, but an enhanced rate-and-state model is presented which is shown to match the measurements remarkably well. The tested parameter space covers a range of normal forces (10-50 N), of sliding speeds (1-10 mm/s) and frequencies (100-2000 Hz). The key new ingredient in the model is the inclusion of contact stiffness to take into account elastic deformations near the interface. A systematic methodology is presented to discriminate among possible variants of the model, and then to identify the model parameter values.
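    The rate-and-state framework that the enhanced model builds on can be sketched with the classical Dieterich form. The parameter values below are illustrative, not those identified in the paper, and the paper's key new ingredient (an interfacial contact stiffness) is only noted in the comments, not modelled:

```python
import math

def mu_rate_state(v, theta, mu0=0.6, a=0.012, b=0.015, v_star=1e-3, L=1e-5):
    """Rate-and-state friction coefficient, Dieterich form:
    mu = mu0 + a*ln(v/v*) + b*ln(v* * theta / L).
    (The enhanced model additionally includes a contact stiffness
    accounting for elastic deformation near the interface.)"""
    return mu0 + a * math.log(v / v_star) + b * math.log(v_star * theta / L)

def evolve_theta(theta, v, dt, L=1e-5):
    """Aging-law state evolution d(theta)/dt = 1 - v*theta/L,
    advanced by one explicit Euler step."""
    return theta + dt * (1.0 - v * theta / L)

# At steady state theta = L/v, the law reduces to
# mu = mu0 + (a - b)*ln(v/v*): velocity-weakening whenever b > a.
v = 5e-3                       # sliding speed, m/s
mu_ss = mu_rate_state(v, 1e-5 / v)
```

    Linearising such a law about a steady sliding state is what yields the frictional frequency response function measured in the paper.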

  10. The ASAC Air Carrier Investment Model (Third Generation)

    NASA Technical Reports Server (NTRS)

    Wingrove, Earl R., III; Gaier, Eric M.; Santmire, Tara E.

    1998-01-01

    To meet its objective of assisting the U.S. aviation industry with the technological challenges of the future, NASA must identify research areas that have the greatest potential for improving the operation of the air transportation system. To accomplish this, NASA is building an Aviation System Analysis Capability (ASAC). The ASAC differs from previous NASA modeling efforts in that the economic behavior of buyers and sellers in the air transportation and aviation industries is central to its conception. To link the economics of flight with the technology of flight, ASAC requires a parametrically based model with extensions that link airline operations and investments in aircraft with aircraft characteristics. This model also must provide a mechanism for incorporating air travel demand and profitability factors into the airlines' investment decisions. Finally, the model must be flexible and capable of being incorporated into the wide-ranging suite of economic and technical models that are envisioned for ASAC.

  11. Stratiform chromite deposit model

    USGS Publications Warehouse

    Schulte, Ruth F.; Taylor, Ryan D.; Piatak, Nadine M.; Seal, Robert R.

    2010-01-01

    Stratiform chromite deposits are of great economic importance, yet their origin and evolution remain highly debated. Layered igneous intrusions such as the Bushveld, Great Dyke, Kemi, and Stillwater Complexes, provide opportunities for studying magmatic differentiation processes and assimilation within the crust, as well as related ore-deposit formation. Chromite-rich seams within layered intrusions host the majority of the world's chromium reserves and may contain significant platinum-group-element (PGE) mineralization. This model of stratiform chromite deposits is part of an effort by the U.S. Geological Survey's Mineral Resources Program to update existing models and develop new descriptive mineral deposit models to supplement previously published models for use in mineral-resource and mineral-environmental assessments. The model focuses on features that may be common to all stratiform chromite deposits as a way to gain insight into the processes that gave rise to their emplacement and to the significant economic resources contained in them.

  12. Neural Mechanisms for Adaptive Learned Avoidance of Mental Effort.

    PubMed

    Mitsuto Nagase, Asako; Onoda, Keiichi; Clifford Foo, Jerome; Haji, Tomoki; Akaishi, Rei; Yamaguchi, Shuhei; Sakai, Katsuyuki; Morita, Kenji

    2018-02-05

    Humans tend to avoid mental effort. Previous studies have demonstrated this tendency using various demand-selection tasks; participants generally avoid options associated with higher cognitive demand. However, it remains unclear whether humans avoid mental effort adaptively in uncertain and non-stationary environments, and if so, what neural mechanisms underlie this learned avoidance and whether they remain the same irrespective of cognitive-demand types. We addressed these issues by developing novel demand-selection tasks where associations between choice options and cognitive-demand levels change over time, with two variations using mental arithmetic and spatial reasoning problems (29:4 and 18:2 males:females). Most participants showed avoidance, and their choices depended on the demand experienced on multiple preceding trials. We assumed that participants updated the expected cost of mental effort through experience, and fitted their choices by reinforcement learning models, comparing several possibilities. Model-based fMRI analyses revealed that activity in the dorsomedial and lateral frontal cortices was positively correlated with the trial-by-trial expected cost for the chosen option commonly across the different types of cognitive demand, and also revealed a trend of negative correlation in the ventromedial prefrontal cortex. We further identified correlates of cost-prediction-error at time of problem-presentation or answering the problem, the latter of which partially overlapped with or were proximal to the correlates of expected cost at time of choice-cue in the dorsomedial frontal cortex. These results suggest that humans adaptively learn to avoid mental effort, having neural mechanisms to represent expected cost and cost-prediction-error, and the same mechanisms operate for various types of cognitive demand. SIGNIFICANCE STATEMENT In daily life, humans encounter various cognitive demands, and tend to avoid high-demand options. 
However, it remains unclear whether humans avoid mental effort adaptively under dynamically changing environments, and if so, what neural mechanisms underlie this avoidance and whether they operate irrespective of cognitive-demand types. To address these issues, we developed novel tasks in which participants could learn to avoid high-demand options under uncertain and non-stationary environments. Through model-based fMRI analyses, we found regions whose activity was correlated with the expected mental effort cost, or with cost-prediction-error, regardless of demand type, with overlap or adjacency in the dorsomedial frontal cortex. This finding contributes to clarifying the mechanisms of cognitive-demand avoidance and provides empirical building blocks for the emerging computational theory of mental effort.
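    The reinforcement-learning account described here (an expected effort cost updated by a cost-prediction-error, with choices biased toward lower-cost options) can be written as a minimal delta-rule/softmax sketch. The learning rate and inverse temperature are invented for illustration, and this is a stand-in for, not a reproduction of, the models the authors fitted:

```python
import math
import random

def update_cost(expected, option, experienced, alpha=0.3):
    """Delta-rule update of the chosen option's expected effort cost;
    returns the cost-prediction-error (experienced minus expected)."""
    pe = experienced - expected[option]
    expected[option] += alpha * pe
    return pe

def choose(expected, rng, beta=3.0):
    """Softmax over negative expected costs, so lower-cost options are
    selected more often (demand avoidance)."""
    weights = [math.exp(-beta * c) for c in expected]
    r, acc = rng.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# One learning step: option 0 turned out harder than expected.
costs = [0.5, 0.5]
pe = update_cost(costs, 0, 1.0)   # positive cost-prediction-error
```

    In the fMRI analyses, trial-by-trial regressors for `expected[option]` and `pe` are what localize expected cost and cost-prediction-error signals.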

  13. Assessing risk with increasingly stringent public health goals: the case of water lead and blood lead in children.

    PubMed

    Triantafyllidou, Simoni; Gallagher, Daniel; Edwards, Marc

    2014-03-01

    Previous predictions of children's blood lead levels (BLLs) through biokinetic models conclude that lead in tap water is not a primary health risk for a typical child under scenarios representative of chronic exposure, when applying a 10 μg/dL BLL of concern. Use of the US Environmental Protection Agency Integrated Exposure Uptake Biokinetic (IEUBK) model and of the International Commission on Radiological Protection (ICRP) biokinetic model to simulate children's exposure to water lead at home and at school was re-examined by expanding the scope of previous modeling efforts to consider new public health goals and improved methodology. Specifically, explicit consideration of the more sensitive population groups (e.g., young children and, particularly, formula-fed infants), the variability in BLLs amongst exposed individuals within those groups (e.g., more sensitive children at the upper tail of the BLL distribution), more conservative BLL reference values (e.g., 5 and 2 μg/dL versus 10 μg/dL) and concerns of acute exposure revealed situations where relatively low water lead levels were predicted to pose a human health concern.

  14. The timing and targeting of treatment in influenza pandemics influences the emergence of resistance in structured populations.

    PubMed

    Althouse, Benjamin M; Patterson-Lomba, Oscar; Goerg, Georg M; Hébert-Dufresne, Laurent

    2013-01-01

    Antiviral resistance in influenza is rampant and has the possibility of causing major morbidity and mortality. Previous models have identified treatment regimes to minimize total infections and keep resistance low. However, the bulk of these studies have ignored stochasticity and heterogeneous contact structures. Here we develop a network model of influenza transmission with treatment and resistance, and present both standard mean-field approximations as well as simulated dynamics. We find differences in the final epidemic sizes for identical transmission parameters (bistability) leading to different optimal treatment timing depending on the number initially infected. We also find, contrary to previous results, that treatment targeted by number of contacts per individual (node degree) gives rise to more resistance at lower levels of treatment than non-targeted treatment. Finally we highlight important differences between the two methods of analysis (mean-field versus stochastic simulations), and show where traditional mean-field approximations fail. Our results have important implications not only for the timing and distribution of influenza chemotherapy, but also for mathematical epidemiological modeling in general. Antiviral resistance in influenza may carry large consequences for pandemic mitigation efforts, and models ignoring contact heterogeneity and stochasticity may provide misleading policy recommendations.
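    The mean-field flavor of such a treatment-resistance model can be sketched as a two-strain SIR system integrated with explicit Euler steps. All rates and the resistance-emergence fraction below are illustrative, and the paper's contact heterogeneity, targeting, and stochasticity are deliberately omitted:

```python
def two_strain_sir(beta_s=0.30, beta_r=0.25, gamma=0.10, tau=0.05,
                   c=0.02, i0=1e-4, steps=3000, dt=0.1):
    """Mean-field SIR with a drug-sensitive and a resistant strain.
    Treatment removes sensitive cases at rate tau, but a fraction c of
    treated cases instead emerge as resistant infections."""
    s, i_s, i_r, r = 1.0 - i0, i0, 0.0, 0.0
    for _ in range(steps):
        inf_s = beta_s * s * i_s                 # new sensitive infections
        inf_r = beta_r * s * i_r                 # new resistant infections
        ds = -(inf_s + inf_r)
        dis = inf_s - (gamma + tau) * i_s
        dir_ = inf_r + c * tau * i_s - gamma * i_r
        dr = gamma * (i_s + i_r) + (1.0 - c) * tau * i_s
        s, i_s, i_r, r = (s + dt * ds, i_s + dt * dis,
                          i_r + dt * dir_, r + dt * dr)
    return s, i_s, i_r, r
```

    The derivatives sum to zero, so the population is conserved; comparing such deterministic trajectories against stochastic network simulations is precisely where the paper shows mean-field approximations can fail.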

  15. Gender Transformative Approaches to Engaging Men in Gender-Based Violence Prevention: A Review and Conceptual Model.

    PubMed

    Casey, Erin; Carlson, Juliana; Two Bulls, Sierra; Yager, Aurora

    2018-04-01

    Engaging men and boys as participants and stakeholders in gender-based violence (GBV) prevention initiatives is an increasingly institutionalized component of global efforts to end GBV. Accordingly, evidence of the impact of men's engagement endeavors is beginning to emerge, particularly regarding interventions aimed at fostering gender equitable and nonviolent attitudes and behaviors among men. This developing evidence base suggests that prevention programs with a "gender transformative" approach, or an explicit focus on questioning gender norms and expectations, show particular promise in achieving GBV prevention outcomes. Interventions targeting attitude and behavior change, however, represent just one kind of approach within a heterogeneous collection of prevention efforts around the globe, which can also include community mobilization, policy change, and social activism. The degree to which gender transformative principles inform this broader spectrum of men's engagement work is unclear. The goals of this article are twofold. First, we offer a conceptual model that captures and organizes a broader array of men's antiviolence activities in three distinct but interrelated domains: (1) initial outreach and recruitment of previously unengaged males, (2) interventions intended to promote gender-equitable attitudes and behavior among men, and (3) gender equity-related social action aimed at eradicating GBV, inclusive of all genders' contributions. Second, we review empirical literature in each of these domains. Across these two goals, we critically assess the degree to which gender transformative principles inform efforts within each domain, and we offer implications for the continuing conceptualization and assessment of efforts to increase men's participation in ending GBV.

  16. Nowcasting Intraseasonal Recreational Fishing Harvest with Internet Search Volume

    PubMed Central

    Carter, David W.; Crosson, Scott; Liese, Christopher

    2015-01-01

    Estimates of recreational fishing harvest are often unavailable until after a fishing season has ended. This lag in information complicates efforts to stay within the quota. The simplest way to monitor quota within the season is to use harvest information from the previous year. This works well when fishery conditions are stable, but is inaccurate when fishery conditions are changing. We develop regression-based models to “nowcast” intraseasonal recreational fishing harvest in the presence of changing fishery conditions. Our basic model accounts for seasonality, changes in the fishing season, and important events in the fishery. Our extended model uses Google Trends data on the internet search volume relevant to the fishery of interest. We demonstrate the model with the Gulf of Mexico red snapper fishery where the recreational sector has exceeded the quota nearly every year since 2007. Our results confirm that data for the previous year works well to predict intraseasonal harvest for a year (2012) where fishery conditions are consistent with historic patterns. However, for a year (2013) of unprecedented harvest and management activity our regression model using search volume for the term “red snapper season” generates intraseasonal nowcasts that are 27% more accurate than the basic model without the internet search information and 29% more accurate than the prediction based on the previous year. Reliable nowcasts of intraseasonal harvest could make in-season (or in-year) management feasible and increase the likelihood of staying within quota. Our nowcasting approach using internet search volume might have the potential to improve quota management in other fisheries where conditions change year-to-year. PMID:26348645
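    A minimal regression nowcast of the kind described can be sketched with a single predictor. The search-volume and harvest numbers below are invented for illustration, and the paper's model additionally accounts for seasonality, season changes, and fishery events:

```python
def fit_line(x, y):
    """Ordinary least squares for a single predictor: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical weekly data: internet search volume for a fishery term
# (index) against recreational harvest (arbitrary units).
search = [12, 18, 35, 60, 55, 20]
harvest = [3.1, 4.0, 7.2, 11.5, 10.8, 4.4]
slope, intercept = fit_line(search, harvest)

def nowcast(search_volume):
    """Nowcast current-week harvest from search volume alone, a
    one-predictor stand-in for the paper's fuller regression."""
    return intercept + slope * search_volume
```

    Because search volume is available within the season while harvest estimates lag, a fitted relation like this is what makes in-season quota monitoring feasible.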

  17. Plant toxicity, adaptive herbivory, and plant community dynamics

    USGS Publications Warehouse

    Feng, Z.; Liu, R.; DeAngelis, D.L.; Bryant, J.P.; Kielland, K.; Chapin, F. Stuart; Swihart, R.K.

    2009-01-01

    We model effects of interspecific plant competition, herbivory, and a plant's toxic defenses against herbivores on vegetation dynamics. The model predicts that, when a generalist herbivore feeds in the absence of plant toxins, adaptive foraging generally increases the probability of coexistence of plant species populations, because the herbivore switches more of its effort to whichever plant species is more common and accessible. In contrast, toxin-determined selective herbivory can drive plant succession toward dominance by the more toxic species, as previously documented in boreal forests and prairies. When the toxin concentrations in different plant species are similar, but species have different toxins with nonadditive effects, herbivores tend to diversify foraging efforts to avoid high intakes of any one toxin. This diversification leads the herbivore to focus more feeding on the less common plant species. Thus, uncommon plants may experience depensatory mortality from herbivory, reducing local species diversity. The depensatory effect of herbivory may inhibit the invasion of other plant species that are more palatable or have different toxins. These predictions were tested and confirmed in the Alaskan boreal forest. ?? 2009 Springer Science+Business Media, LLC.

  18. Modeling and Simulation of the Second-Generation Orion Crew Module Air Bag Landing System

    NASA Technical Reports Server (NTRS)

    Timmers, Richard B.; Welch, Joseph V.; Hardy, Robin C.

    2009-01-01

    Air bags were evaluated as the landing attenuation system for earth landing of the Orion Crew Module (CM). An important element of the air bag system design process is proper modeling of the proposed configuration to determine whether the resulting performance meets requirements. Analysis conducted to date shows that airbags are capable of providing a graceful landing of the CM in nominal and off-nominal conditions such as parachute failure, high horizontal winds, and unfavorable vehicle/ground angle combinations. The efforts presented here surround a second generation of the airbag design developed by ILC Dover and are based on previous design, analysis, and testing efforts. In order to fully evaluate the second-generation air bag design and correlate the dynamic simulations, a series of drop tests was carried out at NASA Langley's Landing and Impact Research (LandIR) facility. The tests consisted of a full-scale set of air bags attached to a full-scale test article representing the Orion Crew Module. The techniques used to collect experimental data, construct the simulations, and make comparisons to experimental data are discussed.

  19. Survey of the supporting research and technology for the thermal protection of the Galileo Probe

    NASA Technical Reports Server (NTRS)

    Howe, J. T.; Pitts, W. C.; Lundell, J. H.

    1981-01-01

    The Galileo Probe, which is scheduled to be launched in 1985 and to enter the hydrogen-helium atmosphere of Jupiter up to 1,475 days later, presents thermal protection problems that are far more difficult than those experienced in previous planetary entry missions. The high entry speed of the Probe will cause forebody heating rates orders of magnitude greater than those encountered in the Apollo and Pioneer Venus missions, severe afterbody heating from base-flow radiation, and thermochemical ablation rates for carbon phenolic that rival the free-stream mass flux. This paper presents a comprehensive survey of the experimental work and computational research that provide technological support for the Probe's heat-shield design effort. The survey includes atmospheric modeling; both approximate and first-principle computations of flow fields and heat-shield material response; base heating; turbulence modeling; new computational techniques; experimental heating and materials studies; code validation efforts; and a set of 'consensus' first-principle flow-field solutions through the entry maneuver, with predictions of the corresponding thermal protection requirements.

  20. Vadose Zone Transport Field Study: Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gee, Glendon W.; Ward, Anderson L.

    2001-11-30

    Studies were initiated at the Hanford Site to evaluate the processes controlling the transport of fluids in the vadose zone and to develop a reliable database upon which vadose-zone transport models can be calibrated. These models are needed to evaluate contaminant migration through the vadose zone to underlying groundwaters at Hanford. A study site that had previously been extensively characterized using geophysical monitoring techniques was selected in the 200 E Area. Techniques used previously included neutron probe for water content, spectral gamma logging for radionuclide tracers, and gamma scattering for wet bulk density. Building on the characterization efforts of the past 20 years, the site was instrumented to facilitate the comparison of multiple vadose-zone characterization methods: advanced tensiometers, neutron probe, electrical resistance tomography (ERT), high-resolution resistivity (HRR), electromagnetic induction imaging (EMI), cross-borehole radar (XBR), and cross-borehole seismic (XBS). Soil coring was used to obtain soil samples for analyzing ionic and isotopic tracers.

  1. Analysis of Over 10,000 Cases Finds No Association between Previously-Reported Candidate Polymorphisms and Ovarian Cancer Outcome

    PubMed Central

    White, Kristin L.; Vierkant, Robert A.; Fogarty, Zachary C.; Charbonneau, Bridget; Block, Matthew S.; Pharoah, Paul D.P.; Chenevix-Trench, Georgia; Rossing, Mary Anne; Cramer, Daniel W.; Pearce, C. Leigh; Schildkraut, Joellen M.; Menon, Usha; Kjaer, Susanne Kruger; Levine, Douglas A.; Gronwald, Jacek; Culver, Hoda Anton; Whittemore, Alice S.; Karlan, Beth Y.; Lambrechts, Diether; Wentzensen, Nicolas; Kupryjanczyk, Jolanta; Chang-Claude, Jenny; Bandera, Elisa V.; Hogdall, Estrid; Heitz, Florian; Kaye, Stanley B.; Fasching, Peter A.; Campbell, Ian; Goodman, Marc T.; Pejovic, Tanja; Bean, Yukie; Lurie, Galina; Eccles, Diana; Hein, Alexander; Beckmann, Matthias W.; Ekici, Arif B.; Paul, James; Brown, Robert; Flanagan, James; Harter, Philipp; du Bois, Andreas; Schwaab, Ira; Hogdall, Claus K.; Lundvall, Lene; Olson, Sara H.; Orlow, Irene; Paddock, Lisa E.; Rudolph, Anja; Eilber, Ursula; Dansonka-Mieszkowska, Agnieszka; Rzepecka, Iwona K.; Ziolkowska-Seta, Izabela; Brinton, Louise; Yang, Hannah; Garcia-Closas, Montserrat; Despierre, Evelyn; Lambrechts, Sandrina; Vergote, Ignace; Walsh, Christine; Lester, Jenny; Sieh, Weiva; McGuire, Valerie; Rothstein, Joseph H.; Ziogas, Argyrios; Lubiński, Jan; Cybulski, Cezary; Menkiszak, Janusz; Jensen, Allan; Gayther, Simon A.; Ramus, Susan J.; Gentry-Maharaj, Aleksandra; Berchuck, Andrew; Wu, Anna H.; Pike, Malcolm C.; Van Den Berg, David; Terry, Kathryn L.; Vitonis, Allison F.; Doherty, Jennifer A.; Johnatty, Sharon; deFazio, Anna; Song, Honglin; Tyrer, Jonathan; Sellers, Thomas A.; Phelan, Catherine M.; Kalli, Kimberly R.; Cunningham, Julie M.; Fridley, Brooke L.; Goode, Ellen L.

    2013-01-01

    Background Ovarian cancer is a leading cause of cancer-related death among women. In an effort to understand contributors to disease outcome, we evaluated single-nucleotide polymorphisms (SNPs) previously associated with ovarian cancer recurrence or survival, specifically in angiogenesis, inflammation, mitosis, and drug disposition genes. Methods Twenty-seven SNPs in VHL, HGF, IL18, PRKACB, ABCB1, CYP2C8, ERCC2, and ERCC1 previously associated with ovarian cancer outcome were genotyped in 10,084 invasive cases from 28 studies from the Ovarian Cancer Association Consortium with over 37,000 observed person-years and 4,478 deaths. Cox proportional hazards models were used to examine the association between candidate SNPs and ovarian cancer recurrence or survival with and without adjustment for key covariates. Results We observed no association between genotype and ovarian cancer recurrence or survival for any of the SNPs examined. Conclusions These results refute prior associations between these SNPs and ovarian cancer outcome and underscore the importance of maximally powered genetic association studies. Impact These variants should not be used in prognostic models. Alternate approaches to uncovering inherited prognostic factors, if they exist, are needed. PMID:23513043

  2. Accounting for groundwater in stream fish thermal habitat responses to climate change

    USGS Publications Warehouse

    Snyder, Craig D.; Hitt, Nathaniel P.; Young, John A.

    2015-01-01

    Forecasting climate change effects on aquatic fauna and their habitat requires an understanding of how water temperature responds to changing air temperature (i.e., thermal sensitivity). Previous efforts to forecast climate effects on brook trout habitat have generally assumed uniform air-water temperature relationships over large areas that cannot account for groundwater inputs and other processes that operate at finer spatial scales. We developed regression models that accounted for groundwater influences on thermal sensitivity from measured air-water temperature relationships within forested watersheds in eastern North America (Shenandoah National Park, USA, 78 sites in 9 watersheds). We used these reach-scale models to forecast climate change effects on stream temperature and brook trout thermal habitat, and compared our results to previous forecasts based upon large-scale models. Observed stream temperatures were generally less sensitive to air temperature than previously assumed, and we attribute this to the moderating effect of shallow groundwater inputs. Predicted groundwater temperatures from air-water regression models corresponded well to observed groundwater temperatures elsewhere in the study area. Predictions of brook trout future habitat loss derived from our fine-grained models were far less pessimistic than those from prior models developed at coarser spatial resolutions. However, our models also revealed spatial variation in thermal sensitivity within and among catchments resulting in a patchy distribution of thermally suitable habitat. Habitat fragmentation due to thermal barriers therefore may have an increasingly important role for trout population viability in headwater streams. Our results demonstrate that simple adjustments to air-water temperature regression models can provide a powerful and cost-effective approach for predicting future stream temperatures while accounting for effects of groundwater.
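    The reach-scale regression at the heart of this approach treats thermal sensitivity as the slope of stream temperature on air temperature. The sketch below uses made-up paired readings (not Shenandoah data) to show how a fitted slope well below 1, consistent with groundwater buffering, feeds a warming projection.

```python
import numpy as np

# Minimal sketch of a reach-scale air-water temperature regression.
# Readings are hypothetical; a slope well below 1 indicates a stream
# buffered by shallow groundwater inputs (low thermal sensitivity).
air = np.array([2.0, 8.0, 14.0, 20.0, 26.0])      # air temperature, deg C
stream = np.array([6.1, 8.9, 11.8, 14.6, 17.2])   # paired stream temperature, deg C

sensitivity, intercept = np.polyfit(air, stream, 1)

# Project stream temperatures under a +2 deg C air-warming scenario:
# the stream warms by only 2 * sensitivity, not the full 2 degrees.
projected = sensitivity * (air + 2.0) + intercept
```

    Fitting this per reach, rather than assuming one basin-wide slope, is what lets the patchiness of groundwater influence show up in the habitat forecast.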

  3. Unsteady Three-Dimensional Simulation of a Shear Coaxial GO2/GH2 Rocket Injector with RANS and Hybrid-RAN-LES/DES Using Flamelet Models

    NASA Technical Reports Server (NTRS)

    Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.

    2015-01-01

    Historically, the analysis and design of liquid rocket engines (LREs) has relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive, and the one-dimensional tools are not designed to capture the highly complex, multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance and stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, mitigating the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat flux thermal environments with test data for a single-element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable was a comprehensive validation effort conducted by Tucker et al. [1], in which a number of different groups modeled a GO2/GH2 single-element configuration tested by Pal et al. [2]. The tools used for this validation comparison employed a range of algorithms, including steady and unsteady Reynolds-Averaged Navier-Stokes (U/RANS) calculations, large-eddy simulations (LES), detached-eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancement of these validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. heat flux data will be made for both RANS and hybrid RANS-LES/detached-eddy simulations (DES). Computational costs will be reported, along with a comparison of accuracy and cost against much less expensive two-dimensional RANS simulations of the same geometry.

  4. NACP Synthesis: Evaluating modeled carbon state and flux variables against multiple observational constraints (Invited)

    NASA Astrophysics Data System (ADS)

    Thornton, P. E.; Nacp Site Synthesis Participants

    2010-12-01

    The North American Carbon Program (NACP) synthesis effort includes an extensive intercomparison of modeled and observed ecosystem states and fluxes performed with multiple models across multiple sites. The participating models span a range of complexity and intended application, while the participating sites cover a broad range of natural and managed ecosystems in North America, from the subtropics to arctic tundra, and coastal to interior climates. A unique characteristic of this collaborative effort is that multiple independent observations are available at all sites: fluxes are measured with the eddy covariance technique, and standard biometric and field sampling methods provide estimates of standing stock and annual production in multiple categories. In addition, multiple modeling approaches are employed to make predictions at each site, varying, for example, in the use of diagnostic vs. prognostic leaf area index. Given multiple independent observational constraints and multiple classes of model, we evaluate the internal consistency of observations at each site, and use this information to extend previously derived estimates of uncertainty in the flux observations. Model results are then compared with all available observations, and models are ranked according to their consistency with each type of observation (high-frequency flux measurements, carbon stocks, annual production). We demonstrate a range of internal consistency across the sites, and show that some models which perform well against one observational metric perform poorly against others. We use this analysis to construct a hypothesis for combining eddy covariance, biometrics, and other standard physiological and ecological measurements which, as data collection proceeded over several years, would present an increasingly challenging target for next-generation models.

  5. Development of a Turbofan Engine Simulation in a Graphical Simulation Environment

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Guo, Ten-Heui

    2003-01-01

    This paper presents the development of a generic component level model of a turbofan engine simulation with a digital controller, in an advanced graphical simulation environment. The goal of this effort is to develop and demonstrate a flexible simulation platform for future research in propulsion system control and diagnostic technology. A previously validated FORTRAN-based model of a modern, high-performance, military-type turbofan engine is being used to validate the platform development. The implementation process required the development of various innovative procedures, which are discussed in the paper. Open-loop and closed-loop comparisons are made between the two simulations. Future enhancements that are to be made to the modular engine simulation are summarized.

  6. A Unified Model of Geostrophic Adjustment and Frontogenesis

    NASA Astrophysics Data System (ADS)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  7. A Web-based Tool for Transparent, Collaborative Urban Water System Planning for Monterrey, Mexico

    NASA Astrophysics Data System (ADS)

    Rheinheimer, D. E.; Medellin-Azuara, J.; Garza Díaz, L. E.; Ramírez, A. I.

    2017-12-01

    Recent rapid advances in web technologies and cloud computing show great promise for facilitating collaboration and transparency in water planning efforts. Water resources planning increasingly takes place in the context of a rapidly urbanizing world, particularly in developing countries. In such countries with democratic traditions, the degree of transparency and collaboration in water planning can mean the difference between success and failure of planning efforts. This is exemplified in the city of Monterrey, Mexico, where an effort to build a new long-distance aqueduct to increase water supply to the city failed dramatically due to a lack of transparency and top-down planning. To help address this, we used a new, web-based water system modeling platform, called OpenAgua, to develop a prototype decision support system for water planning in Monterrey. OpenAgua is designed to promote transparency and collaboration, as well as provide strong, cloud-based water system modeling capabilities. We developed and assessed five water management options intended to increase water supply yield and/or reliability, a dominant water management concern in Latin America generally: 1) a new long-distance source (the previously rejected project), 2) a new nearby reservoir, 3) expansion/re-operation of an existing major canal, 4) desalination, and 5) industrial water reuse. Using the integrated modeling and analytic capabilities of OpenAgua, with some customization, we assessed the performance of these options for water supply yield and reliability to help identify the most promising ones. In presenting this assessment, we demonstrate the viability of using online, cloud-based modeling systems for improving transparency and collaboration in decision making and reducing the gap between citizens, policy makers, and water managers, and we discuss future directions.

  8. Integration of Dust Prediction Systems and Vegetation Phenology to Track Pollen for Asthma Alerts in Public Health

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey; Sprigg, William; Huete, Alfredo; Levetin, Estelle; VandeWater, Peter; Nickovic, Slobodan; Pejanovic, Goran; Budge, Amelia; Krapfl, Heidi; Myers, Orrin

    2009-01-01

    Initial efforts to develop a deterministic model for predicting and simulating pollen release and downwind concentration, to study the dependence of phenology on meteorology, will be discussed. The development of a real-time, rapid-response pollen release and transport system as a component of the New Mexico Environmental Public Health Tracking System (EPHTS) is based on meteorological models, NASA Earth science results (ESR), and an in-situ network of phenology cameras. The plan is to detect pollen release, verified using ground-based atmospheric pollen sampling, within a few hours using daily MODIS data in near real-time from Direct Broadcast, similar to the MODIS Rapid Response System for fire detection. As MODIS winds down, the NPOESS-VIIRS sensor will assume daily vegetation monitoring tasks. Also, advancements in geostationary satellites will allow 1-km vegetation indices at 15-30 minute intervals. The pollen module in EPHTS will be used to: (1) support public health decisions for asthma and allergy alerts in New Mexico, Texas, and Oklahoma; (2) augment the Centers for Disease Control and Prevention (CDC)'s Environmental Public Health Tracking Network (EPHTN); and (3) extend surveillance services to local healthcare providers subscribing to the Syndrome Reporting Information System (SYRIS). Previous studies in NASA's public health applications portfolios provide the infrastructure for this effort. The team is confident that NASA and NOAA ESR data, combined into a verified and validated dust model, will yield groundbreaking results using the modified dust model to transport pollen. The growing ESR/health infrastructure is based on results from a rapid-prototype scoping effort for pollen detection and simulation carried out by the principal investigators.

  9. Data reconstruction can improve abundance index estimation: An example using Taiwanese longline data for Pacific bluefin tuna

    PubMed Central

    Fukuda, Hiromu; Maunder, Mark N.

    2017-01-01

    Catch-per-unit-effort (CPUE) is often the main piece of information used in fisheries stock assessment; however, the catch and effort data that are traditionally compiled from commercial logbooks can be incomplete or unreliable due to many reasons. Pacific bluefin tuna (PBF) is a seasonal target species in the Taiwanese longline fishery. Since 2010, detailed catch information for each PBF has been made available through a catch documentation scheme. However, previously, only market landing data with a low coverage of logbooks were available. Therefore, several nontraditional procedures were performed to reconstruct catch and effort data from many alternative data sources not directly obtained from fishers for 2001–2015: (1) Estimating the catch number from the landing weight for 2001–2003, for which the catch number information was incomplete, based on Monte Carlo simulation; (2) deriving fishing days for 2007–2009 from voyage data recorder data, based on a newly developed algorithm; and (3) deriving fishing days for 2001–2006 from vessel trip information, based on linear relationships between fishing and at-sea days. Subsequently, generalized linear mixed models were developed with the delta-lognormal assumption for standardizing the CPUE calculated from the reconstructed data, and three-stage model evaluation was performed using (1) Akaike and Bayesian information criteria to determine the most favorable variable composition of standardization models, (2) overall R2 via cross-validation to compare fitting performance between area-separated and area-combined standardizations, and (3) system-based testing to explore the consistency of the standardized CPUEs with auxiliary data in the PBF stock assessment model. The last stage of evaluation revealed high consistency among the data, thus demonstrating improvements in data reconstruction for estimating the abundance index, and consequently the stock assessment. PMID:28968434
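    The delta-lognormal idea behind the CPUE standardization can be illustrated with a stripped-down index: the probability of a positive set times the back-transformed mean of log CPUE on positive sets. The sketch below deliberately omits the covariates and random effects of the paper's GLMMs, and all CPUE values are hypothetical.

```python
import numpy as np

# Stripped-down delta-lognormal index of abundance (a sketch only;
# the paper fits delta-lognormal GLMMs with covariates, omitted here).
def delta_lognormal_index(cpue):
    cpue = np.asarray(cpue, dtype=float)
    positive = cpue[cpue > 0]
    if positive.size == 0:
        return 0.0
    p_pos = positive.size / cpue.size      # binomial (presence) component
    mu = np.log(positive).mean()           # lognormal (positive-catch) component
    return p_pos * np.exp(mu)

# Hypothetical CPUE values for one year's sets (zeros are empty sets)
year_cpue = [0.0, 0.0, 1.2, 0.8, 2.5, 0.0, 1.6]
index = delta_lognormal_index(year_cpue)
```

    Splitting the presence/absence and positive-catch processes is what lets the method cope with the many zero-catch sets typical of a seasonal target species.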

  10. Modeling the Health and Economic Burden of Hepatitis C Virus in Switzerland

    PubMed Central

    Müllhaupt, Beat; Bruggmann, Philip; Bihl, Florian; Blach, Sarah; Lavanchy, Daniel; Razavi, Homie; Semela, David; Negro, Francesco

    2015-01-01

    Background Chronic hepatitis C virus infection is a major cause of liver disease in Switzerland and carries a significant cost burden. Currently, only conservative strategies are in place to mitigate the burden of hepatitis C in Switzerland. This study expands on previously described modeling efforts to explore the impact of: no treatment, and treatment to reduce HCC and mortality. Furthermore, the costs associated with untreated HCV were modeled. Methods Hepatitis C disease progression and mortality were modeled. Baseline historical assumptions were collected from the literature and expert interviews and strategies were developed to show the impact of different levels of intervention (improved drug cure rates, treatment and diagnosis) until 2030. Results Under the historical standard of care, the number of advanced stage cases was projected to increase until 2030, at which point the annual economic burden of untreated viremic infections was projected to reach €96.8 (95% Uncertainty Interval: €36 – €232) million. Scenarios to reduce HCV liver-related mortality by 90% by 2030 required treatment of 4,190 ≥F2 or 3,200 ≥F3 patients annually by 2018 using antivirals with a 95% efficacy rate. Delaying the implementation of these scenarios by 2 or 5 years reduced the impact on mortality to 75% and 57%, respectively. Conclusions With today’s treatment efficacy and uptake rates, hepatitis C disease burden is expected to increase through 2030. A substantial reduction in disease burden can be achieved by means of both higher efficacy drugs and increased treatment uptake. However, these efforts cannot be undertaken without a simultaneous effort to diagnose more infections. PMID:26107467
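    Burden projections of this kind typically rest on a staged (Markov-type) disease-progression model. The toy sketch below shows the mechanics of advancing a cohort through fibrosis stages year by year; the transition probabilities are invented placeholders, not the Swiss calibration from the study.

```python
import numpy as np

# Toy staged disease-progression (Markov) model. The annual transition
# probabilities below are hypothetical, not fitted HCV parameters.
stages = ["F0-F1", "F2", "F3", "F4", "liver-related death"]
P = np.array([
    [0.95, 0.05, 0.00, 0.00, 0.00],
    [0.00, 0.93, 0.07, 0.00, 0.00],
    [0.00, 0.00, 0.91, 0.09, 0.00],
    [0.00, 0.00, 0.00, 0.96, 0.04],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

cohort = np.array([10000.0, 0.0, 0.0, 0.0, 0.0])   # everyone starts in F0-F1
for _ in range(15):                                # project 15 years forward
    cohort = cohort @ P
```

    Treatment scenarios are then modeled as edits to these transitions (e.g., curing patients at a given stage removes them from the progression chain), which is how the mortality-reduction scenarios in the abstract are evaluated.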

  11. Theoretical research program to study chemical reactions in AOTV bow shock tubes

    NASA Technical Reports Server (NTRS)

    Taylor, Peter R.

    1993-01-01

    The main focus was the development, implementation, and calibration of methods for performing molecular electronic structure calculations to high accuracy. These various methods were then applied to a number of chemical reactions and species of interest to NASA, notably in the area of combustion chemistry. Among the development work undertaken was a collaborative effort to develop a program to efficiently predict molecular structures and vibrational frequencies using energy derivatives. Another major development effort involved the design of new atomic basis sets for use in chemical studies: these sets were considerably more accurate than those previously in use. Much effort was also devoted to calibrating methods for computing accurate molecular wave functions, including the first reliable calibrations for realistic molecules using full CI results. A wide variety of application calculations were undertaken. One area of interest was the spectroscopy and thermochemistry of small molecules, including establishing small molecule binding energies to an accuracy rivaling, or even on occasion surpassing, the experiment. Such binding energies are essential input to modeling chemical reaction processes, such as combustion. Studies of large molecules and processes important in both hydrogen and hydrocarbon combustion chemistry were also carried out. Finally, some effort was devoted to the structure and spectroscopy of small metal clusters, with applications to materials science problems.

  12. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    PubMed

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce higher-accuracy forecasts than previously used 'pure' time-series methods. Those methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in various cases, and efforts were made to decrease it further. The newly developed hybrid models used a random-start generation method to combine the advantages of different time-series methods, which increased forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not improve the accuracy of total automotive waste generation forecasts. The developed models' ability to produce short- and mid-term forecasts was tested across the prediction horizon.

  13. 3MdB: the Mexican Million Models database

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.

    2014-10-01

    The 3MdB is an original effort to construct a large, multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and is accessed via MySQL requests. The models are obtained using the well-known and widely used Cloudy photoionization code (Ferland et al., 2013). The database is intended to host grids of models, with different references to identify each project and to facilitate the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can ask for a grid to be run and stored in 3MdB, to increase the visibility of the grid and its potential side applications.

  14. Modeling and Composing Scenario-Based Requirements with Aspects

    NASA Technical Reports Server (NTRS)

    Araujo, Joao; Whittle, Jon; Ki, Dae-Kyoo

    2004-01-01

    There has been significant recent interest, within the Aspect-Oriented Software Development (AOSD) community, in representing crosscutting concerns at various stages of the software lifecycle. However, most of these efforts have concentrated on the design and implementation phases. In this paper, we focus on representing aspects during use case modeling. In particular, we focus on scenario-based requirements and show how to compose aspectual and non-aspectual scenarios so that they can be simulated as a whole. Non-aspectual scenarios are modeled as UML sequence diagrams. Aspectual scenarios are modeled as Interaction Pattern Specifications (IPSs). In order to simulate them, the scenarios are transformed into a set of executable state machines using an existing state machine synthesis algorithm. Previous work composed aspectual and non-aspectual scenarios at the sequence diagram level. In this paper, the composition is done at the state machine level.

  15. A Modeling Approach to Fiber Fracture in Melt Impregnation

    NASA Astrophysics Data System (ADS)

    Ren, Feng; Zhang, Cong; Yu, Yang; Xin, Chunling; Tang, Ke; He, Yadong

    2017-02-01

    The effect of process variables such as roving pulling speed, melt temperature, and number of pins on fiber fracture during the processing of thermoplastic-based composites was investigated in this study. Melt impregnation was used in this process to produce continuous glass-fiber-reinforced thermoplastic composites. Previous investigators have suggested a variety of models for melt impregnation, while comparatively little effort has been spent on modeling the fiber fracture caused by the viscous resin. Herein, a mathematical model was developed for the impregnation process to predict the fiber fracture rate and describe the experimental results with the Weibull intensity distribution function. The optimal parameters of the process were obtained by an orthogonal experiment. The results suggest that fiber fracture is caused by viscous shear stress on the fiber bundle in the melt impregnation mold when the fiber bundle is pulled.
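    A Weibull description of fiber fracture of the kind referenced above can be sketched as a cumulative fracture probability in applied shear stress. The shape and scale parameters below are hypothetical placeholders, not values fitted in the study.

```python
import math

# Sketch of a Weibull fracture-probability curve for a fiber bundle under
# viscous shear. shape_m and scale_s0 are hypothetical placeholders.
def fracture_probability(stress, shape_m=5.0, scale_s0=100.0):
    """Weibull CDF: probability of fiber fracture at a given shear stress."""
    return 1.0 - math.exp(-((stress / scale_s0) ** shape_m))

# Higher pulling speed raises viscous shear stress on the bundle,
# which raises the fracture probability monotonically.
p_low = fracture_probability(60.0)
p_high = fracture_probability(120.0)
```

    Under this form, process variables that raise viscous shear (pulling speed, lower melt temperature, more pins) shift operation up the curve toward higher fracture rates.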

  16. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakibul, M.; Sarker, H.; McIntyre, D.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high-resolution micro-CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted, and the distribution of pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail, and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields, such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and groundwater flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for wetting-phase relative permeability, but for the non-wetting phase, simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for routine core analysis, which can be time-consuming and expensive. A numerical technique is therefore expected to be fast and to produce reliable results; in applied engineering, a quick result with reasonable accuracy is sometimes preferable to a more time-consuming one. The present work is an effort to check the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.

  17. Latitudinal distributions of particulate carbon export across the North Western Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Puigcorbé, Viena; Roca-Martí, Montserrat; Masqué, Pere; Benitez-Nelson, Claudia; Rutgers van der Loeff, Michiel; Bracher, Astrid; Moreau, Sebastien

    2017-11-01

234Th-derived carbon export fluxes were measured in the Atlantic Ocean under the GEOTRACES framework to evaluate basin-scale export variability. Here, we present the results from the northern half of the GA02 transect, spanning from the equator to 64°N. As a result of limited site-specific C/234Th ratio measurements, we further combined our data with previous work to develop a basin-wide C/234Th ratio depth curve. While the magnitude of organic carbon fluxes varied depending on the C/234Th ratio used, latitudinal trends were similar, with sizeable and variable organic carbon export fluxes occurring at high latitudes and low to negligible fluxes occurring in oligotrophic waters. Our results agree with previous studies, except at the boundaries between domains, where fluxes were relatively enhanced. Three different models were used to obtain satellite-derived net primary production (NPP). In general, NPP estimates had similar trends along the transect, but there were significant differences in the absolute magnitude depending on the model used. Nevertheless, organic carbon export efficiencies were generally < 25%, with the exception of a few stations located in the transition area between the riverine and the oligotrophic domains and between the oligotrophic and the temperate domains. Satellite-derived organic carbon export models from Dunne et al. (2005) (D05), Laws et al. (2011) (L11) and Henson et al. (2011) (H11) were also compared to our 234Th-derived carbon export fluxes. D05 and L11 provided estimates closest to values obtained with the 234Th approach (within a 3-fold difference), but with no clear trends. The H11 model, on the other hand, consistently provided lower export estimates. 
The large increase in export data in the Atlantic Ocean derived from the GEOTRACES Program, combined with satellite observations and modeling efforts, continues to improve the estimates of carbon export in this ocean basin and therefore reduces uncertainty in the global carbon budget. However, our results also suggest that tuning export models and including biological parameters at a regional scale is necessary for improving satellite-based modeling efforts and providing export estimates that are more representative of in situ observations.
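The conversion at the core of the 234Th approach is a simple product: the measured 234Th flux out of the upper ocean times the C/234Th ratio of sinking particles, with export efficiency then given by that flux over NPP. A minimal sketch with hypothetical station values (not numbers from the study):

```python
def poc_export(th234_flux, c_to_th_ratio):
    """POC export (umol C m^-2 d^-1) = 234Th flux (dpm m^-2 d^-1)
    * C/234Th ratio of sinking particles (umol C dpm^-1)."""
    return th234_flux * c_to_th_ratio

def export_efficiency(poc_flux, npp):
    """Fraction of net primary production exported below the surface layer."""
    return poc_flux / npp

# Hypothetical station: 1500 dpm m^-2 d^-1 and a ratio of 5 umol C dpm^-1
flux = poc_export(1500.0, 5.0)            # 7500 umol C m^-2 d^-1
eff = export_efficiency(flux, 60000.0)    # 0.125, i.e. below the ~25% ceiling
```

The spread in satellite NPP models feeds directly into the denominator here, which is why the abstract stresses the model-dependence of absolute efficiencies.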

  18. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, M.R.; McIntyre, D.; Ferer, M.

    2011-01-01

This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high-resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail, and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields, such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and ground water flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated for the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for conventional core analysis, which can be time consuming and expensive. 
A numerical technique is therefore expected to be fast and to produce reliable results; in applied engineering, a quick result with reasonable accuracy is often preferable to a more time-consuming one. The present work checks the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks from cutting-sized sample data.
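The construction step described above, a regular lattice in which only the throat radii vary, can be sketched as follows. The lognormal radius distribution and its parameters are assumptions for illustration, not the measured St. Peter Sandstone statistics:

```python
import math
import random

def build_regular_network(n, mu_log_r=-0.5, sigma_log_r=0.3, seed=1):
    """n x n regular lattice of pores; the geometry stays regular and only
    the throat radii vary, drawn here from a lognormal distribution."""
    rng = random.Random(seed)
    conductance = {}
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):        # right and down neighbours
                if i + di < n and j + dj < n:
                    r = math.exp(rng.gauss(mu_log_r, sigma_log_r))
                    # Hagen-Poiseuille: hydraulic conductance scales as r**4
                    conductance[((i, j), (i + di, j + dj))] = r ** 4
    return conductance

net = build_regular_network(5)
print(len(net))   # 2 * 5 * 4 = 40 throats
```

Each "realization" in the paper corresponds to a different seed; drainage and relative permeability are then computed on networks like this one.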

  19. Scaffold hopping: exploration of acetanilide-containing uracil analogues as potential NNRTIs.

    PubMed

    Babkov, Denis A; Valuev-Elliston, Vladimir T; Paramonova, Maria P; Ozerov, Alexander A; Ivanov, Alexander V; Chizhov, Alexander O; Khandazhinskaya, Anastasia L; Kochetkov, Sergey N; Balzarini, Jan; Daelemans, Dirk; Pannecouque, Christophe; Seley-Radtke, Katherine L; Novikov, Mikhail S

    2015-03-01

In order to identify novel nonnucleoside inhibitors of HIV-1 reverse transcriptase, two series of amide-containing uracil derivatives were designed as hybrids of two scaffolds from previously reported inhibitors. Subsequent biological evaluation confirmed acetamide uracil derivatives 15a-k as selective micromolar NNRTIs with a first-generation-like resistance profile. Molecular modeling of the most active compounds 15c and 15i was employed to provide insight into their inhibitory properties and direct future design efforts. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Juvenile honest food solicitation and parental investment as a life history strategy: A kin demographic selection model.

    PubMed

    Garay, József; Csiszár, Villő; Móri, Tamás F; Szilágyi, András; Varga, Zoltán; Számadó, Szabolcs

    2018-01-01

Parent-offspring communication remains an unresolved challenge for biologists. The difficulty comes from the fact that it is a multifaceted problem with connections to life-history evolution, parent-offspring conflict, kin selection and signalling. Previous efforts mainly focused on modelling resource allocation at the expense of the dynamic interaction during a reproductive season. Here we present a two-stage model of begging where the first stage models the interaction between nestlings and parents within a nest and the second stage models the life-history trade-offs. We show in an asexual population that honest begging results in decreased variance of collected food between siblings, which leads to an increased mean number of surviving offspring. Thus, honest begging can be seen as a special bet-hedging against informational uncertainty, which not only decreases the variance of fitness but also increases its arithmetic mean.
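The variance-reduction argument is an instance of Jensen's inequality: if each chick's survival probability is a concave function of the food it collects, lowering the between-sibling variance raises the expected number of survivors. A toy simulation with a made-up saturating survival curve (not the paper's model) illustrates this:

```python
import random

def mean_survivors(food_sd, brood=4, trials=20000, seed=2):
    """Expected survivors per brood when each chick's survival probability
    is a concave (saturating) function of the food it collects.  Lower
    variance between siblings -> higher mean, by Jensen's inequality."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        for _ in range(brood):
            food = max(0.0, rng.gauss(1.0, food_sd))
            total += food / (food + 1.0)   # concave survival curve (invented)
    return total / trials

honest = mean_survivors(0.2)   # honest begging: low variance between siblings
noisy = mean_survivors(0.8)    # uninformative begging: high variance
```

With the same mean food supply per chick, the low-variance brood ends up with more surviving offspring on average, which is the bet-hedging effect the abstract describes.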

  2. A coarse-grained DNA model for the prediction of current signals in DNA translocation experiments

    NASA Astrophysics Data System (ADS)

    Weik, Florian; Kesselheim, Stefan; Holm, Christian

    2016-11-01

    We present an implicit solvent coarse-grained double-stranded DNA (dsDNA) model confined to an infinite cylindrical pore that reproduces the experimentally observed current modulations of a KCl solution at various concentrations. Our model extends previous coarse-grained and mean-field approaches by incorporating a position dependent friction term on the ions, which Kesselheim et al. [Phys. Rev. Lett. 112, 018101 (2014)] identified as an essential ingredient to correctly reproduce the experimental data of Smeets et al. [Nano Lett. 6, 89 (2006)]. Our approach reduces the computational effort by orders of magnitude compared with all-atom simulations and serves as a promising starting point for modeling the entire translocation process of dsDNA. We achieve a consistent description of the system's electrokinetics by using explicitly parameterized ions, a friction term between the DNA beads and the ions, and a lattice-Boltzmann model for the solvent.

  3. Nonlinear Pressurization and Modal Analysis Procedure for Dynamic Modeling of Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    An introduction and set of guidelines for finite element dynamic modeling of nonrigidized inflatable structures is provided. A two-step approach is presented, involving 1) nonlinear static pressurization of the structure and updating of the stiffness matrix and 2) linear normal modes analysis using the updated stiffness. Advantages of this approach are that it provides physical realism in modeling of pressure stiffening, and it maintains the analytical convenience of a standard linear eigensolution once the stiffness has been modified. Demonstration of the approach is accomplished through the creation and test verification of an inflated cylinder model using a large commercial finite element code. Good frequency and mode shape comparisons are obtained with test data and previous modeling efforts, verifying the accuracy of the technique. Problems encountered in the application of the approach, as well as their solutions, are discussed in detail.
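The two-step procedure, augment the stiffness with a pressure-induced geometric term and then run a standard eigensolution, can be reduced to a single degree of freedom for illustration. The stiffness and mass values below are invented, and a real analysis works with full matrices:

```python
import math

def pressurized_frequency(k_material, k_geometric, mass):
    """Step 1: nonlinear static pressurization contributes a geometric
    (pressure-stiffening) term to the stiffness.  Step 2: a standard linear
    eigensolution on the updated stiffness; for one DOF that is simply
    f = sqrt(k/m) / (2*pi)."""
    k_updated = k_material + k_geometric
    return math.sqrt(k_updated / mass) / (2.0 * math.pi)

f_limp = pressurized_frequency(1000.0, 0.0, 1.0)       # no pressurization
f_inflated = pressurized_frequency(1000.0, 500.0, 1.0) # stiffening raises f
```

The physical content of the paper's approach is that `k_geometric` comes from a nonlinear static solution at the operating pressure rather than being guessed.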

  4. Glass Property Data and Models for Estimating High-Level Waste Glass Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang

    2009-10-05

    This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
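Glass property-composition models of this kind are commonly first-order mixture models: a property (or its logarithm) is predicted as a composition-weighted sum of per-component coefficients. A sketch with invented coefficients and an invented composition, not the fitted Hanford values:

```python
def predict_property(coeffs, composition):
    """First-order mixture model: property = sum_i b_i * x_i, where x_i are
    component fractions that sum to one."""
    assert abs(sum(composition.values()) - 1.0) < 1e-6
    return sum(coeffs[oxide] * frac for oxide, frac in composition.items())

# Hypothetical partial coefficients for a log-viscosity-like property
b = {"SiO2": 2.5, "B2O3": -1.0, "Na2O": -3.0}
glass = {"SiO2": 0.55, "B2O3": 0.25, "Na2O": 0.20}
eta_log = predict_property(b, glass)   # 0.55*2.5 - 0.25 - 0.60 = 0.525
```

Fitting such a model to a property database reduces to linear least squares over the component fractions, which is why a broad, well-populated composition database matters.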

  5. Machine learning of network metrics in ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
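A useful baseline for the forecasting step is a sliding-window mean over recent throughput samples; any trained model has to beat something like this. The class below is purely illustrative and not part of the ATLAS framework:

```python
from collections import deque

class ThroughputForecaster:
    """Sliding-window baseline for link-throughput forecasting.  The real
    system trains richer models on the ATLAS Analytics Platform; this is the
    simplest competitor such models are scored against."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)   # keeps only the last N samples

    def observe(self, mbps):
        self.history.append(mbps)

    def forecast(self):
        return sum(self.history) / len(self.history)

f = ThroughputForecaster(window=3)
for sample in (100.0, 120.0, 110.0):
    f.observe(sample)
print(f.forecast())   # 110.0
```

Scoring then compares each candidate model's error against this baseline on held-out transfer data.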

  6. Outer satellite atmospheres: Their nature and planetary interactions

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Combi, M. R.

    1982-01-01

    Significant progress is reported in early modeling analysis of observed sodium cloud images with our new model which includes the oscillating Io plasma torus ionization sink. Both the general w-D morphology of the region B cloud as well as the large spatial gradient seen between the region A and B clouds are found to be consistent with an isotropic flux of sodium atoms from Io. Model analysis of the spatially extended high velocity directional features provided substantial evidence for a magnetospheric wind driven gas escape mechanism from Io. In our efforts to define the source(s) of hydrogen atoms in the Saturn system, major steps were taken in order to understand the role of Titan. We have completed the comparison of the Voyager UVS data with previous Titan model results, as well as the update of the old model computer code to handle the spatially varying ionization sink for H atoms.

  7. UCERF3: A new earthquake forecast for California's complex fault system

    USGS Publications Warehouse

    Field, Edward H.

    2015-01-01

    With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF3" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  8. Tools for Local and Distributed Climate Data Access

    NASA Astrophysics Data System (ADS)

    Schweitzer, R.; O'Brien, K.; Burger, E. F.; Smith, K. M.; Manke, A. B.; Radhakrishnan, A.; Balaji, V.

    2017-12-01

    Last year we reported on our efforts to adapt existing tools to facilitate model development. During the lifecycle of a Climate Model Intercomparison Project (CMIP), data must be quality controlled before it can be published and studied. Like previous efforts, the next CMIP6 will produce an unprecedented volume of data. For an institution, modelling group or modeller the volume of data is unmanageable without tools that organize and automate as many processes as possible. Even if a modelling group has tools for data and metadata management, it often falls on individuals to do the initial quality assessment for a model run with bespoke tools. Using individually crafted tools can lead to interruptions when project personnel change and may result in inconsistencies and duplication of effort across groups. This talk will expand on our experiences using available tools (Ferret/PyFerret, the Live Access Server, the GFDL Curator, the GFDL Model Development Database Interface and the THREDDS Data Server) to seamlessly automate the data assembly process to give users "one-click" access to a rich suite of Web-based analysis and comparison tools. On the surface, it appears that this collection of tools is well suited to the task, but our experience of the last year taught us that the data volume and distributed storage adds a number of challenges in adapting the tools for this task. Quality control and initial evaluation add their own set of challenges. We will discuss how we addressed the needs of QC researchers by expanding standard tools to include specialized plots and leveraged the configurability of the tools to add specific user defined analysis operations so they are available to everyone using the system. We also report on our efforts to overcome some of the technical barriers for wide adoption of the tools by providing pre-built containers that are easily deployed in virtual machine and cloud environments. 
Finally, we will offer some suggestions for added features, configuration options and improved robustness that can make future implementations of similar systems operate faster and more reliably. Solving these challenges for data sets distributed across local networks and storage systems points the way to solving similar problems associated with sharing data distributed across institutions and continents.

  9. The New NASA Orbital Debris Engineering Model ORDEM2000

    NASA Technical Reports Server (NTRS)

    Liou, Jer-Chyi; Matney, Mark J.; Anz-Meador, Phillip D.; Kessler, Donald; Jansen, Mark; Theall, Jeffery R.

    2002-01-01

    The NASA Orbital Debris Program Office at Johnson Space Center has developed a new computer-based orbital debris engineering model, ORDEM2000, which describes the orbital debris environment in the low Earth orbit region between 200 and 2000 km altitude. The model is appropriate for those engineering solutions requiring knowledge and estimates of the orbital debris environment (debris spatial density, flux, etc.). ORDEM2000 can also be used as a benchmark for ground-based debris measurements and observations. We incorporated a large set of observational data, covering the object size range from 10 µm to 10 m, into the ORDEM2000 debris database, utilizing a maximum likelihood estimator to convert observations into debris population probability distribution functions. These functions then form the basis of debris populations. We developed a finite element model to process the debris populations to form the debris environment. A more capable input and output structure and a user-friendly graphical user interface are also implemented in the model. ORDEM2000 has been subjected to a significant verification and validation effort. This document describes ORDEM2000, which supersedes the previous model, ORDEM96. The availability of new sensor and in situ data, as well as new analytical techniques, has enabled the construction of this new model. Section 1 describes the general requirements and scope of an engineering model. Data analyses and the theoretical formulation of the model are described in Sections 2 and 3. Section 4 describes the verification and validation effort and the sensitivity and uncertainty analyses. Finally, Section 5 describes the graphical user interface, software installation, and test cases for the user.
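As a toy stand-in for the maximum likelihood step that converts observations into population distributions: for independent Poisson-distributed detection counts at a constant rate, the MLE of the rate is simply total counts over total observing time. The numbers below are hypothetical and unrelated to the actual ORDEM2000 data sets:

```python
def mle_poisson_rate(counts, observation_hours):
    """MLE of a constant detection rate lambda from independent Poisson
    counts: maximizing prod_i exp(-l*t_i) * (l*t_i)^k_i / k_i! over l gives
    l_hat = sum(k_i) / sum(t_i)."""
    return sum(counts) / sum(observation_hours)

# Hypothetical radar campaign: detections per session and session durations
rate = mle_poisson_rate([3, 5, 2], [10.0, 12.0, 8.0])   # 10 / 30 per hour
```

The real estimator works over size, altitude, and inclination bins simultaneously, but the principle of fitting population parameters by maximizing the likelihood of the observed detections is the same.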

  10. Reconstruction and Validation of a Genome-Scale Metabolic Model for the Filamentous Fungus Neurospora crassa Using FARM

    PubMed Central

    Hood, Heather M.; Ocasio, Linda R.; Sachs, Matthew S.; Galagan, James E.

    2013-01-01

    The filamentous fungus Neurospora crassa played a central role in the development of twentieth-century genetics, biochemistry and molecular biology, and continues to serve as a model organism for eukaryotic biology. Here, we have reconstructed a genome-scale model of its metabolism. This model consists of 836 metabolic genes, 257 pathways, 6 cellular compartments, and is supported by extensive manual curation of 491 literature citations. To aid our reconstruction, we developed three optimization-based algorithms, which together comprise Fast Automated Reconstruction of Metabolism (FARM). These algorithms are: LInear MEtabolite Dilution Flux Balance Analysis (limed-FBA), which predicts flux while linearly accounting for metabolite dilution; One-step functional Pruning (OnePrune), which removes blocked reactions with a single compact linear program; and Consistent Reproduction Of growth/no-growth Phenotype (CROP), which reconciles differences between in silico and experimental gene essentiality faster than previous approaches. Against an independent test set of more than 300 essential/non-essential genes that were not used to train the model, the model displays 93% sensitivity and specificity. We also used the model to simulate the biochemical genetics experiments originally performed on Neurospora by comprehensively predicting nutrient rescue of essential genes and synthetic lethal interactions, and we provide detailed pathway-based mechanistic explanations of our predictions. Our model provides a reliable computational framework for the integration and interpretation of ongoing experimental efforts in Neurospora, and we anticipate that our methods will substantially reduce the manual effort required to develop high-quality genome-scale metabolic models for other organisms. PMID:23935467
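The essentiality screen underlying an algorithm like CROP can be illustrated on a pathway so small that the flux-balance linear program solves by inspection: at steady state every reaction in a linear chain carries the same flux, so growth is limited by the tightest bound, and knocking out any step drives growth to zero. This toy network is an assumption for illustration, not N. crassa metabolism:

```python
def max_growth(bounds, knockout=None):
    """Toy FBA on a linear pathway (uptake -> conversion -> biomass).  At
    steady state all fluxes are equal, so the optimum of the underlying
    linear program is just the smallest upper bound; a knockout removes a
    step and blocks the pathway entirely."""
    active = {rxn: ub for rxn, ub in bounds.items() if rxn != knockout}
    if len(active) < len(bounds):
        return 0.0                 # broken chain: no feasible growth flux
    return min(active.values())

bounds = {"uptake_A": 10.0, "A_to_B": 8.0, "B_to_biomass": 12.0}
wild_type = max_growth(bounds)                   # limited by A_to_B
mutant = max_growth(bounds, knockout="A_to_B")   # in silico essential gene
```

Real genome-scale models solve the same kind of linear program over thousands of reactions with a stoichiometric constraint S·v = 0, which is where solvers and pruning algorithms like OnePrune earn their keep.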

  11. Isaac Newton and the astronomical refraction.

    PubMed

    Lehn, Waldemar H

    2008-12-01

    In a short interval toward the end of 1694, Isaac Newton developed two mathematical models for the theory of the astronomical refraction and calculated two refraction tables, but did not publish his theory. Much effort has been expended, starting with Biot in 1836, in the attempt to identify the methods and equations that Newton used. In contrast to previous work, a closed form solution is identified for the refraction integral that reproduces the table for his first model (in which density decays linearly with elevation). The parameters of his second model, which includes the exponential variation of pressure in an isothermal atmosphere, have also been identified by reproducing his results. The implication is clear that in each case Newton had derived exactly the correct equations for the astronomical refraction; furthermore, he was the first to do so.
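For context on what such refraction models compute: in the plane-parallel limit, Snell's law telescopes across all atmospheric layers, so the total refraction depends only on the surface index of refraction. Newton's models go beyond this limit by adding Earth curvature and explicit density profiles; the sketch below is only the textbook flat-atmosphere approximation, not a reconstruction of his equations:

```python
import math

def flat_atmosphere_refraction(apparent_zenith_deg, n_surface=1.000293):
    """Plane-parallel atmosphere: n(h) sin(z(h)) is conserved layer by layer,
    so sin(z_true) = n_surface * sin(z_apparent) regardless of the density
    profile.  Returns the refraction z_true - z_apparent in arcseconds."""
    z_app = math.radians(apparent_zenith_deg)
    z_true = math.asin(min(1.0, n_surface * math.sin(z_app)))
    return math.degrees(z_true - z_app) * 3600.0

r45 = flat_atmosphere_refraction(45.0)   # roughly 60 arcsec near z = 45 deg
```

The approximation fails near the horizon, which is precisely where Newton's curvature-aware models (linear density decay, then an isothermal exponential atmosphere) were needed.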

  12. Internal ballistics model update for ASRM dome

    NASA Technical Reports Server (NTRS)

    Bowden, Mark H.; Jenkins, Billy Z.

    1991-01-01

    A previous report (no. 5-32279, contract NAS8-36955, DO 51) describes the measures taken to adapt the NASA Complex Burning Region Model and code so that it was applicable to the Advanced Solid Rocket Motor as envisioned at that time. The code so modified was called the CBRM-A. CBRM-A could calculate the port volume and burning area for the star, transition, and cylindrically perforated regions of the motor. Described here is a subsequent effort to add computation of port volume and burning area for the Advanced Solid Rocket Motor head dome. Sample output, input, and an overview of the models are included. The software was configured in two forms: a stand-alone head dome code and a code integrating the head dome solution with the CBRM-A.
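For a perfectly hemispherical dome, the burning area and port volume have closed forms as the web burns radially outward, which makes a handy sanity check for a numerical grain-geometry code. This idealized geometry is an assumption for illustration; the actual ASRM dome contour is more complex:

```python
import math

def dome_geometry(initial_radius, web_burned):
    """Idealized hemispherical head dome: as the web burns, the inner radius
    grows by the burned web thickness, giving burning area 2*pi*r^2 and
    port volume (2/3)*pi*r^3."""
    r = initial_radius + web_burned
    burning_area = 2.0 * math.pi * r ** 2
    port_volume = (2.0 / 3.0) * math.pi * r ** 3
    return burning_area, port_volume

area, volume = dome_geometry(1.0, 0.25)   # quantities at r = 1.25
```

A code like CBRM-A with the head-dome extension should reproduce these values in the limiting case of a clean hemispherical contour.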

  13. The Virtual Tablet: Virtual Reality as a Control System

    NASA Technical Reports Server (NTRS)

    Chronister, Andrew

    2016-01-01

    In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. 
My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
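The sensor-fusion step described above, combining motion-capture marker positions with an IMU orientation into one pose for the virtual surrogate, can be sketched in two dimensions. The function names and the centroid-plus-heading representation are illustrative assumptions; the real system solves a full 3-D pose relative to the user's HMD:

```python
import math

def surrogate_pose(markers, yaw_deg):
    """Fuse motion-capture marker positions (averaged to a centroid) with an
    IMU heading into a single pose for the virtual surrogate.  2-D sketch
    only: position from the markers, orientation from the sensor package."""
    x = sum(m[0] for m in markers) / len(markers)
    y = sum(m[1] for m in markers) / len(markers)
    return {"position": (x, y), "yaw_rad": math.radians(yaw_deg)}

# Three reflective markers on the acrylic sheet, plus a 90-degree heading
pose = surrogate_pose([(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)], 90.0)
```

The virtual analogue is then drawn at `pose`, so the control interface appears attached to the physical object the user is holding.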

  14. Theoretical research program to study chemical reactions in AOTV bow shock tubes

    NASA Technical Reports Server (NTRS)

    Taylor, Peter

    1992-01-01

    Effort continued through this period to refine and expand the SIRIUS/ABACUS program package for CASSCF and RASSCF second derivatives. A new approach to computing the Gaussian integral derivatives that require much of the time in gradient and Hessian calculations was devised. Several different studies were undertaken in the area of application calculations. These include a study of proton transfer in the HF trimer, which provides an analog of rearrangement reactions, and the extension of our previous work on Be and Mg clusters to Ca clusters. In addition, a very accurate investigation of the lowest-lying potential curves of the O2 molecule was completed. These curves are essential for evaluating different models of the terrestrial atmosphere nightglow. Most of the effort this year was devoted to a large-scale investigation of stationary points on the C4H4 surface and the thermochemistry of the acetylene/acetylene reaction.

  15. An overview of NASA intermittent combustion engine research

    NASA Technical Reports Server (NTRS)

    Willis, E. A.; Wintucky, W. T.

    1984-01-01

    This paper overviews the current program, whose objective is to establish the generic technology base for advanced aircraft I.C. engines of the early 1990's and beyond. The major emphasis of this paper is on developments of the past two years. Past studies and ongoing confirmatory experimental efforts are reviewed, which show unexpectedly high potential when modern aerospace technologies are applied to inherently compact and balanced I.C. engine configurations. Currently, the program is focused on two engine concepts, the stratified-charge, multi-fuel rotary and the lightweight two-stroke diesel. A review is given of contracted and planned high performance one-rotor and one-cylinder test engine work addressing several levels of technology. Also reviewed are basic supporting efforts, e.g., the development and experimental validation of computerized airflow and combustion process models, being performed in-house at Lewis Research Center and by university grants. Previously announced in STAR as N84-24583

  16. Configural approaches to temperament assessment: implications for predicting risk of unintentional injury in children.

    PubMed

    Berry, Jack W; Schwebel, David C

    2009-10-01

    This study used two configural approaches to understand how temperament factors (surgency/extraversion, negative affect, and effortful control) might predict child injury risk. In the first approach, clustering procedures were applied to trait dimensions to identify discrete personality prototypes. In the second approach, two- and three-way trait interactions were considered dimensionally in regression models predicting injury outcomes. Injury risk was assessed through four measures: lifetime prevalence of injuries requiring professional medical attention, scores on the Injury Behavior Checklist, and frequency and severity of injuries reported in a 2-week injury diary. In the prototype analysis, three temperament clusters were obtained, which resembled resilient, overcontrolled, and undercontrolled types found in previous research. Undercontrolled children had greater risk of injury than children in the other groups. In the dimensional interaction analyses, an interaction between surgency/extraversion and negative affect tended to predict injury, especially when children lacked capacity for effortful control.
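The dimensional-interaction analysis amounts to adding product terms to an ordinary regression of injury risk on the three traits. A sketch with invented coefficients (not the study's estimates), showing a two-way surgency-by-negative-affect interaction moderated by effortful control:

```python
def predicted_risk(surgency, neg_affect, effortful_control,
                   b=(0.10, 0.25, 0.20, -0.30, 0.35)):
    """Dimensional-interaction sketch:
    risk = b0 + b1*S + b2*NA + b3*EC + b4*(S*NA).
    Coefficients are made up for illustration only."""
    b0, b1, b2, b3, b4 = b
    return (b0 + b1 * surgency + b2 * neg_affect
            + b3 * effortful_control + b4 * surgency * neg_affect)

# High surgency and negative affect; risk drops as effortful control rises
risk_low_ec = predicted_risk(1.0, 1.0, 0.0)
risk_high_ec = predicted_risk(1.0, 1.0, 1.0)
```

The prototype approach instead clusters children on the three traits first and compares injury outcomes across the resulting groups, trading the continuous interaction surface for interpretable types.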

  17. Computerized summary scoring: crowdsourcing-based latent semantic analysis.

    PubMed

    Li, Haiying; Cai, Zhiqiang; Graesser, Arthur C

    2017-11-03

    In this study we developed and evaluated a crowdsourcing-based latent semantic analysis (LSA) approach to computerized summary scoring (CSS). LSA is a frequently used mathematical component in CSS, where LSA similarity represents the extent to which the to-be-graded target summary is similar to a model summary or a set of exemplar summaries. Researchers have proposed different formulations of the model summary in previous studies, such as pregraded summaries, expert-generated summaries, or source texts. The former two methods, however, require substantial human time, effort, and costs in order to either grade or generate summaries. Using source texts does not require human effort, but it also does not predict human summary scores well. With human summary scores as the gold standard, in this study we evaluated the crowdsourcing LSA method by comparing it with seven other LSA methods that used sets of summaries from different sources (either experts or crowdsourced) of differing quality, along with source texts. Results showed that crowdsourcing LSA predicted human summary scores as well as expert-good and crowdsourcing-good summaries, and better than the other methods. A series of analyses with different numbers of crowdsourcing summaries demonstrated that the number (from 10 to 100) did not significantly affect performance. These findings imply that crowdsourcing LSA is a promising approach to CSS, because it saves human effort in generating the model summary while still yielding comparable performance. This approach to small-scale CSS provides a practical solution for instructors in courses, and also advances research on automated assessments in which student responses are expected to semantically converge on subject matter content.
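Whatever the source of the model summary, the scoring step is the same dot-product geometry: the target summary's vector is compared with the model's by cosine similarity. In full LSA the vectors live in a reduced latent space; the bag-of-words sketch below (with invented example sentences) shows only the similarity computation:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts).  In CSS
    the vectors are LSA latent-space projections, but the comparison step
    is this same normalized dot product."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

model = Counter("the cell membrane controls what enters the cell".split())
target = Counter("the membrane controls what enters a cell".split())
sim = cosine(model, target)   # high for near-paraphrases
```

The crowdsourcing approach changes what `model` is built from (many inexpensive summaries rather than expert or pregraded ones), not how the similarity itself is computed.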

  18. Efficient reinforcement learning of a reservoir network model of parametric working memory achieved with a cluster population winner-take-all readout mechanism.

    PubMed

    Cheng, Zhenbo; Deng, Zhidong; Hu, Xiaolin; Zhang, Bo; Yang, Tianming

    2015-12-01

    The brain often has to make decisions based on information stored in working memory, but the neural circuitry underlying working memory is not fully understood. Many theoretical efforts have been focused on modeling the persistent delay period activity in the prefrontal areas that is believed to represent working memory. Recent experiments reveal that the delay period activity in the prefrontal cortex is neither static nor homogeneous as previously assumed. Models based on reservoir networks have been proposed to model such a dynamical activity pattern. The connections between neurons within a reservoir are random and do not require explicit tuning. Information storage does not depend on the stable states of the network. However, it is not clear how the encoded information can be retrieved for decision making with a biologically realistic algorithm. We therefore built a reservoir-based neural network to model the neuronal responses of the prefrontal cortex in a somatosensory delayed discrimination task. We first illustrate that the neurons in the reservoir exhibit the heterogeneous and dynamical delay-period activity observed in previous experiments. Then we show that a cluster population circuit decodes the information from the reservoir with a winner-take-all mechanism and contributes to the decision making. Finally, we show that the model achieves a good performance rapidly by shaping only the readout with reinforcement learning. Our model reproduces important features of previous behavioral and neurophysiological data. We illustrate for the first time how task-specific information stored in a reservoir network can be retrieved with a biologically plausible reinforcement learning training scheme. Copyright © 2015 the American Physiological Society.
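
A minimal sketch of the architecture this abstract describes, a fixed random reservoir whose readout alone is shaped by reward, might look like the following (a toy two-alternative task with invented sizes, stimuli, and learning rate, not the authors' model of the somatosensory discrimination task):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: recurrent weights are never trained.
N = 100
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # spectral radius < 1
W_in = rng.normal(0, 1, N)

def delay_state(stim, steps=30, stim_steps=5):
    """Drive the reservoir briefly with a stimulus, let it run through
    a 'delay period', and return the final (heterogeneous) state."""
    x = np.zeros(N)
    for t in range(steps):
        u = stim if t < stim_steps else 0.0
        x = np.tanh(W @ x + W_in * u)
    return x

# Two readout 'cluster populations'; winner-take-all picks the choice.
W_out = np.zeros((2, N))
def choose(x):
    return int(np.argmax(W_out @ x))

# Reward-modulated learning of the readout only: strengthen the
# winning cluster after a reward, weaken it after an error.
stimuli, labels, lr = [0.5, -0.5], [0, 1], 0.05
for epoch in range(50):
    for s, y in zip(stimuli, labels):
        x = delay_state(s)
        a = choose(x)
        W_out[a] += lr * (1.0 if a == y else -1.0) * x

correct = sum(choose(delay_state(s)) == y for s, y in zip(stimuli, labels))
```

The reservoir states at the end of the delay remain distinct for the two stimuli, so the reward-shaped winner-take-all readout quickly learns to separate them.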

  19. Efforts to Bridge the Gap between Research and Practice in Social Work: Precedents and Prospects: Keynote Address at the Bridging the Gap Symposium

    ERIC Educational Resources Information Center

    Rubin, Allen

    2015-01-01

    This keynote address discusses previous and ongoing efforts to reduce the persistent gap between research and practice in social work and offers recommendations for further bridging that gap. Key among those recommendations is the need to conduct descriptive outcome studies of efforts to adapt research-supported interventions in everyday practice…

  20. Breeding season survival and breeding incidence of female Mottled Ducks on the upper Texas gulf coast

    USGS Publications Warehouse

    Rigby, Elizabeth A.; Haukos, David A.

    2012-01-01

    Previous Mottled Duck (Anas fulvigula) studies suggested that high female breeding season survival may be caused by low nesting effort, but few breeding season estimates of survival associated with nesting effort exist on the western Gulf Coast. Here, breeding season survival (N = 40) and breeding incidence (N = 39) were estimated for female Mottled Ducks on the upper Texas coast, 2006–2008. Females were fitted with backpack radio transmitters and visually relocated every 3–4 days. Weekly survival was estimated using the Known Fate procedure of program MARK with breeding incidence estimated as the annual proportion of females observed nesting or with broods. The top-ranked survival model included a body mass covariate and held weekly female survival constant across weeks and years (SW = 0.986, SE = 0.006). When compared to survival across the entire year estimated from previous band recovery and age ratio analysis, survival rate during the breeding season did not differ. Breeding incidence was well below 100% in all years and highly variable among years (15%–63%). Breeding season survival and breeding incidence were similar to estimates obtained with implant transmitters from the mid-coast of Texas. The greatest breeding incidence for both studies occurred when drought indices indicated average environmental moisture during the breeding season. The observed combination of low breeding incidence and high breeding season survival support the hypothesis of a trade-off between the ecological cost of nesting effort and survival for Mottled Duck females. Habitat cues that trigger nesting are unknown and should be investigated.
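
The constant-weekly-survival structure of the top-ranked model can be illustrated with a simple binomial known-fate calculation (the exposure and death counts below are invented, not the study's data, and program MARK additionally handles covariates and staggered entry):

```python
import math

exposure_weeks = 600    # total radio-weeks of monitoring (hypothetical)
deaths = 8              # deaths observed in those weeks (hypothetical)

s_week = 1 - deaths / exposure_weeks             # binomial MLE of weekly survival
se = math.sqrt(s_week * (1 - s_week) / exposure_weeks)
s_season = s_week ** 20                          # survival over a 20-week season

print(f"weekly S = {s_week:.3f} (SE {se:.3f}); season S = {s_season:.2f}")
```

Even a weekly survival near 0.99 compounds to a noticeably lower season-long survival, which is why the weekly rate and its standard error are the quantities reported.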

  1. Earth's external magnetic fields at low orbital altitudes

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.

    1990-01-01

    Under our Jun. 1987 proposal, Magnetic Signatures of Near-Earth Distributed Currents, we proposed to render operational a modeling procedure that had been previously developed to compute the magnetic effects of distributed currents flowing in the magnetosphere-ionosphere system. After adaptation of the software to our computing environment we would apply the model to low altitude satellite orbits and would utilize the MAGSAT data suite to guide the analysis. During the first year, basic computer codes to run model systems of Birkeland and ionospheric currents and several graphical output routines were made operational on a VAX 780 in our research facility. Software performance was evaluated using an input matchstick ionospheric current array, field aligned currents were calculated and magnetic perturbations along hypothetical satellite orbits were calculated. The basic operation of the model was verified. Software routines to analyze and display MAGSAT satellite data in terms of deviations with respect to the earth's internal field were also made operational during the first year effort. The complete set of MAGSAT data to be used for evaluation of the models was received at the end of the first year. A detailed annual report in May 1989 described these first year activities completely. That first annual report is included by reference in this final report. This document summarizes our additional activities during the second year of effort and describes the modeling software, its operation, and includes as an attachment the deliverable computer software specified under the contract.

  2. Work ability, effort-reward imbalance and disability pension claims.

    PubMed

    Wienert, J; Spanier, K; Radoschewski, F M; Bethge, M

    2017-12-30

    Effort-reward imbalance (ERI) and self-rated work ability are known independent correlates and predictors of intended disability pension claims. However, little research has focused on the interrelationship between the three and whether self-rated work ability mediates the relationship between ERI and intended disability pension claims. To investigate whether self-rated work ability mediates the association between ERI and intended disability pension claims. Baseline data from participants of the Third German Sociomedical Panel of Employees, a 5-year cohort study that investigates determinants of work ability, rehabilitation utilization and disability pensions in employees who have previously received sickness benefits, were analysed. We tested direct associations between ERI with intended disability pension claims (Model 1) and self-rated work ability (Model 2). Additionally, we tested whether work ability mediates the association between ERI and intended disability pension claims (Model 3). There were 2585 participants. Model 1 indicated a significant association between ERI and intended disability pension claims. Model 2 showed a significant association between ERI and self-rated work ability. The mediation in Model 3 revealed a significant indirect association between ERI and intended disability pension claims via self-rated work ability. There was no significant direct association between ERI and intended disability pension claims. Our results support the adverse health-related impact of ERI on self-rated work ability and intended disability pension claims. © The Author 2017. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com
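
The three-model mediation logic can be sketched on synthetic data (the coefficients and data below are invented; the study used survey measures, not simulations). For linear OLS models with a single mediator, the total effect decomposes exactly into the direct effect plus the indirect (mediated) effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2585                                  # sample size matching the study

# Synthetic data (illustrative only): ERI lowers self-rated work
# ability, which in turn raises the intent to claim a pension; no
# direct ERI -> intent path (full mediation).
eri = rng.normal(0, 1, n)
work_ability = -0.5 * eri + rng.normal(0, 1, n)
intent = -0.6 * work_ability + rng.normal(0, 1, n)

def ols(y, *xs):
    """Least-squares coefficients [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

c = ols(intent, eri)[1]                          # Model 1: total effect
a = ols(work_ability, eri)[1]                    # Model 2: ERI -> work ability
b, c_prime = ols(intent, work_ability, eri)[1:]  # Model 3: mediation model
indirect = a * b                                 # indirect effect via work ability

print(f"total {c:+.3f} = direct {c_prime:+.3f} + indirect {indirect:+.3f}")
```

The recovered pattern mirrors the abstract's Model 3: a substantial indirect association with a near-zero direct path.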

  3. Model parameter uncertainty analysis for an annual field-scale P loss model

    NASA Astrophysics Data System (ADS)

    Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie

    2016-08-01

    Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model development and evaluation efforts.
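
The two-source comparison can be sketched with a generic Monte Carlo propagation (the regression equation, coefficients, and input coefficients of variation below are hypothetical stand-ins, not APLE's actual equations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical log-linear loss equation (a stand-in, not APLE's):
#   log10(P_loss) = b0 + b1 * log10(runoff) + b2 * soil_test_P
beta_hat = np.array([-1.0, 0.8, 0.004])       # fitted coefficients
beta_se = np.array([0.15, 0.10, 0.001])       # their standard errors

def predict(beta, runoff, stp):
    return 10.0 ** (beta[0] + beta[1] * np.log10(runoff) + beta[2] * stp)

n = 20_000
runoff, stp = 50.0, 100.0                     # nominal field inputs

# (1) parameter uncertainty only: sample the regression coefficients
betas = rng.normal(beta_hat, beta_se, size=(n, 3))
p_param = predict(betas.T, runoff, stp)

# (2) input uncertainty only: sample the inputs (20% CV assumed)
p_input = predict(beta_hat,
                  rng.normal(runoff, 0.2 * runoff, n).clip(1e-3),
                  rng.normal(stp, 0.2 * stp, n))

for name, p in [("parameters", p_param), ("inputs", p_input)]:
    lo, hi = np.percentile(p, [2.5, 97.5])
    print(f"{name}: median {np.median(p):.2f}, 95% PI ({lo:.2f}, {hi:.2f})")
```

Comparing the widths of the two prediction intervals shows which uncertainty source dominates for a given field, mirroring the comparison reported in the study.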

  4. Collaborative Project: The problem of bias in defining uncertainty in computationally enabled strategies for data-driven climate model development. Final Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huerta, Gabriel

    The objective of the project is to develop strategies for better representing scientific sensibilities within statistical measures of model skill that then can be used within a Bayesian statistical framework for data-driven climate model development and improved measures of model scientific uncertainty. One of the thorny issues in model evaluation is quantifying the effect of biases on climate projections. While no bias is desirable, only those biases that affect feedbacks affect scatter in climate projections. The effort at the University of Texas is to analyze previously calculated ensembles of CAM3.1 with perturbed parameters to discover how biases affect projections of global warming. The hypothesis is that compensating errors in the control model can be identified by their effect on a combination of processes and that developing metrics that are sensitive to dependencies among state variables would provide a way to select versions of climate models that may reduce scatter in climate projections. Gabriel Huerta at the University of New Mexico is responsible for developing statistical methods for evaluating these field dependencies. The UT effort will incorporate these developments into MECS, which is a set of Python scripts being developed at the University of Texas for managing the workflow associated with data-driven climate model development over HPC resources. This report reflects the main activities at the University of New Mexico where the PI (Huerta) and the Postdocs (Nosedal, Hattab and Karki) worked on the project.

  5. Detailed assessment of global transport-energy models’ structures and projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, Sonia; Mishra, Gouri Shankar; Fulton, Lew

    This paper focuses on comparing the frameworks and projections from four major global transportation models with considerable transportation technology and behavioral detail. We analyze and compare the modeling frameworks, underlying data, assumptions, intermediate parameters, and projections to identify the sources of divergence or consistency, as well as key knowledge gaps. We find that there are significant differences in the base-year data and key parameters for future projections, especially for developing countries. These include passenger and freight activity, mode shares, vehicle ownership rates, and even energy consumption by mode, particularly for shipping, aviation and trucking. This may be due in part to a lack of previous efforts to do such consistency-checking and “benchmarking.” We find that the four models differ in terms of the relative roles of various mitigation strategies to achieve a 2°C / 450 ppm CO2e target: the economics-based integrated assessment models favor the use of low carbon fuels as the primary mitigation option followed by efficiency improvements, whereas transport-only and expert-based models favor efficiency improvements of vehicles followed by mode shifts. We offer recommendations for future modeling improvements focusing on (1) reducing data gaps; (2) translating the findings from this study into relevant policy implications such as feasibility of current policy goals, additional policy targets needed, regional vs. global reductions, etc.; (3) modeling strata of demographic groups to improve understanding of vehicle ownership levels, travel behavior, and urban vs. rural considerations; and (4) conducting coordinated efforts in aligning input assumptions and historical data, policy analysis, and modeling insights.

  6. Micromechanical Modeling of Woven Metal Matrix Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Pindera, Marek-Jerzy

    1997-01-01

    This report presents the results of an extensive micromechanical modeling effort for woven metal matrix composites. The model is employed to predict the mechanical response of 8-harness (8H) satin weave carbon/copper (C/Cu) composites. Experimental mechanical results for this novel high thermal conductivity material were recently reported by Bednarcyk et al. along with preliminary model results. The micromechanics model developed herein is based on an embedded approach. A micromechanics model for the local (micro-scale) behavior of the woven composite, the original method of cells (Aboudi), is embedded in a global (macro-scale) micromechanics model, the three-dimensional generalized method of cells (GMC-3D) (Aboudi). This approach allows representation of true repeating unit cells for woven metal matrix composites via GMC-3D, and representation of local effects, such as matrix plasticity, yarn porosity, and imperfect fiber-matrix bonding. In addition, the equations of GMC-3D were reformulated to significantly reduce the number of unknown quantities that characterize the deformation fields at the microlevel in order to make possible the analysis of actual microstructures of woven composites. The resulting micromechanical model (WCGMC) provides an intermediate level of geometric representation, versatility, and computational efficiency with respect to previous analytical and numerical models for woven composites, but surpasses all previous modeling work by allowing the mechanical response of a woven metal matrix composite, with an elastoplastic matrix, to be examined for the first time. WCGMC is employed to examine the effects of composite microstructure, porosity, residual stresses, and imperfect fiber-matrix bonding on the predicted mechanical response of 8H satin C/Cu. The previously reported experimental results are summarized, and the model predictions are compared to monotonic and cyclic tensile and shear test data. By considering appropriate levels of porosity, residual stresses, and imperfect fiber-matrix debonding, reasonably good qualitative and quantitative correlation is achieved between model and experiment.

  7. Psychosocial work factors in new or recurrent injuries among hospital workers: a prospective study.

    PubMed

    Lee, Soo-Jeong; You, Doohee; Gillen, Marion; Blanc, Paul D

    2015-11-01

    Accumulating evidence suggests an important role for psychosocial work factors in injury, but little is known about the interaction between psychosocial factors and previous injury experience on subsequent injury risk. We examined the relationships between psychosocial work factors and new or recurrent injury among hospital workers. We studied 492 hospital workers including 116 cases with baseline injury and 376 injury-free referents at baseline over follow-up. Job strain, total support, effort-reward imbalance, overcommitment, and musculoskeletal injury at baseline were examined in logistic regression models as predictors of new or recurrent injury experienced during a 2-year follow-up period. The overall cumulative incidence of injury over follow-up was 35.6 % (51.7 % for re-injury among baseline injury cases; 30.6 % for new injury among referents). Significantly increased risks with baseline job strain (OR 1.26; 95 % CI 1.02-1.55) and effort-reward imbalance (OR 1.42; 95 % CI 1.12-1.81) were observed for injury only among the referents. Overcommitment was associated with increased risk of injury only among the cases (OR 1.58; 95 % CI 1.05-2.39). The effects of psychosocial work factors on new or recurrent injury risk appear to differ by previous injury experience, suggesting the need for differing preventive strategies in hospital workers.

  8. Winning and losing: an evolutionary approach to mood disorders and their therapy.

    PubMed

    Sloman, Leon; Sturman, Edward D; Price, John S

    2011-06-01

    To advance a new evolutionary model that examines the effects of winning and losing on mood and physiological variables. Previous studies have focused on the involuntary defeat strategy in de-escalating conflict. Here, we propose that there also exists an involuntary winning strategy (IWS) that is triggered by success and characterized by euphoria and increased self-confidence. It motivates efforts to challenge, and promotes reconciliation. Previous studies are presented, including data on student athletes, demonstrating the impact of winning and losing on mood. Winning is consistently shown to be related to physiological changes such as increased testosterone and serotonin levels in primates. It reliably leads to mood changes that serve to motivate winners to continue their competitive efforts. When the IWS functions optimally, success leads to success in an adaptive cycle. Over time, the initial differences between the winners and losers of agonistic encounters become magnified in a process known as difference amplification. As a result of assortative mating, the children of people who have entered into an adaptive cycle will inherit traits from both parents that will, in turn, give them an increased competitive advantage. In this manner, difference amplification could have accelerated human evolution by natural selection. Vignettes of clinical interventions are also used to illustrate therapeutic strategies designed to disrupt maladaptive cycles and promote adaptive behaviour.

  9. Decadal trends in global pelagic ocean chlorophyll: A new assessment integrating multiple satellites, in situ data, and models.

    PubMed

    Gregg, Watson W; Rousseaux, Cécile S

    2014-09-01

    Quantifying change in ocean biology using satellites is a major scientific objective. We document trends globally for the period 1998-2012 by integrating three diverse methodologies: ocean color data from multiple satellites, bias correction methods based on in situ data, and data assimilation to provide a consistent and complete global representation free of sampling biases. The results indicated no significant trend in global pelagic ocean chlorophyll over the 15 year data record. These results were consistent with previous findings that were based on the first 6 years and first 10 years of the SeaWiFS mission. However, all of the Northern Hemisphere basins (north of 10° latitude), as well as the Equatorial Indian basin, exhibited significant declines in chlorophyll. Trend maps showed the local trends and their change in percent per year. These trend maps were compared with several other previous efforts using only a single sensor (SeaWiFS) and more limited time series, showing remarkable consistency. These results suggested the present effort provides a path forward to quantifying global ocean trends using multiple satellite missions, which is essential if we are to understand the state, variability, and possible changes in the global oceans over longer time scales.
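
A per-location trend expressed in percent per year, with a significance check on the slope, can be computed as follows (the series below are synthetic; the study's record is 1998-2012, but these numbers are illustrative):

```python
import numpy as np

def trend_percent_per_year(years, series):
    """OLS slope of a time series, expressed as percent of the series
    mean per year, plus a t-statistic for the null of zero slope."""
    x = years - years.mean()
    slope = (x * (series - series.mean())).sum() / (x ** 2).sum()
    resid = series - series.mean() - slope * x
    se = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (x ** 2).sum())
    return 100.0 * slope / series.mean(), slope / se

rng = np.random.default_rng(3)
years = np.arange(1998, 2013, dtype=float)            # a 15-year record
flat = 0.2 + rng.normal(0, 0.005, years.size)          # no underlying trend
declining = 0.2 - 0.002 * (years - 1998) + rng.normal(0, 0.005, years.size)

pct_flat, t_flat = trend_percent_per_year(years, flat)
pct_dec, t_dec = trend_percent_per_year(years, declining)
print(f"flat: {pct_flat:+.2f}%/yr (t={t_flat:+.1f}); "
      f"declining: {pct_dec:+.2f}%/yr (t={t_dec:+.1f})")
```

Applied pixel-by-pixel, this is the kind of calculation behind a trend map in percent per year, with the t-statistic flagging which local trends are significant.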

  10. The development of a survey instrument for community health improvement.

    PubMed Central

    Bazos, D A; Weeks, W B; Fisher, E S; DeBlois, H A; Hamilton, E; Young, M J

    2001-01-01

    OBJECTIVE: To develop a survey instrument that could be used both to guide and evaluate community health improvement efforts. DATA SOURCES/STUDY SETTING: A randomized telephone survey was administered to a sample of about 250 residents in two communities in Lehigh Valley, Pennsylvania in the fall of 1997. METHODS: The survey instrument was developed by health professionals representing diverse health care organizations. This group worked collaboratively over a period of two years to (1) select a conceptual model of health as a foundation for the survey; (2) review relevant literature to identify indicators that adequately measured the health constructs within the chosen model; (3) develop new indicators where important constructs lacked specific measures; and (4) pilot test the final survey to assess the reliability and validity of the instrument. PRINCIPAL FINDINGS: The Evans and Stoddart Field Model of the Determinants of Health and Well-Being was chosen as the conceptual model within which to develop the survey. The Field Model depicts nine domains important to the origins and production of health and provides a comprehensive framework from which to launch community health improvement efforts. From more than 500 potential indicators we identified 118 survey questions that reflected the multiple determinants of health as conceptualized by this model. Sources from which indicators were selected include the Behavior Risk Factor Surveillance Survey, the National Health Interview Survey, the Consumer Assessment of Health Plans Survey, and the SF-12 Summary Scales. The work group developed 27 new survey questions for constructs for which we could not locate adequate indicators. Twenty-five questions in the final instrument can be compared to nationally published norms or benchmarks. The final instrument was pilot tested in 1997 in two communities. Administration time averaged 22 minutes with a response rate of 66 percent. Reliability of new survey questions was adequate. Face validity was supported by previous findings from qualitative and quantitative studies. CONCLUSIONS: We developed, pilot tested, and validated a survey instrument designed to provide more comprehensive and timely data to communities for community health assessments. This instrument allows communities to identify and measure critical domains of health that have previously not been captured in a single instrument. PMID:11508639

  11. Status and threats analysis for the Florida manatee (Trichechus manatus latirostris), 2012

    USGS Publications Warehouse

    Runge, Michael C.; Langtimm, Catherine A.; Martin, Julien; Fonnesbeck, Christopher J.

    2015-01-01

    The endangered West Indian manatee (Trichechus manatus), especially the Florida subspecies (T. m. latirostris), has been the focus of conservation efforts and extensive research since its listing under the Endangered Species Act. On the basis of the best information available as of December 2012, the threats facing the Florida manatee were determined to be less severe than previously thought, either because the conservation efforts have been successful, or because our knowledge of the demographic effects of those threats has increased, or both. Using the manatee Core Biological Model, we estimated the probability of the Florida manatee population on either the Atlantic or Gulf coast falling below 500 adults in the next 150 years to be 0.92 percent. The primary threats remain watercraft-related mortality and long-term loss of warm-water habitat. Since 2009, however, there have been a number of unusual events that have not yet been incorporated into this analysis, including several severely cold winters, a severe red-tide die-off, and substantial loss of seagrass habitat in Brevard County, Fla. Further, the version of the Core Biological Model used in 2012 makes a number of assumptions that are under investigation. A revision of the Core Biological Model and an update of this quantitative threats analysis are underway as of 2015.
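
The kind of quasi-extinction probability reported here, the chance of falling below 500 adults within 150 years, can be sketched with a toy stochastic-growth Monte Carlo (this is not the Core Biological Model; the growth rates, variability, and starting size below are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

def quasi_extinction_prob(n0=2500, years=150, mean_r=0.01, sd_r=0.05,
                          threshold=500, n_sims=5000):
    """Fraction of simulated population trajectories that ever fall
    below the threshold (stochastic exponential growth)."""
    n = np.full(n_sims, float(n0))
    hit = np.zeros(n_sims, dtype=bool)
    for _ in range(years):
        n = n * np.exp(rng.normal(mean_r, sd_r, n_sims))
        hit |= n < threshold
    return hit.mean()

p_stable = quasi_extinction_prob()                   # slight positive growth
p_declining = quasi_extinction_prob(mean_r=-0.02)    # chronic decline
print(p_stable, p_declining)
```

Even modest shifts in the mean growth rate move the quasi-extinction probability between near zero and near one, which is why threat-driven demographic rates dominate such assessments.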

  12. Computational analysis of high resolution unsteady airloads for rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.

    1994-01-01

    The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.

  13. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
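
The traditional MC method the abstract favors, propagating samples through a decay model to build an impact-time distribution, can be sketched with a deliberately simplified one-dimensional decay model (the dynamics and parameters below are toy stand-ins for the six degrees-of-freedom model used in the study):

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def impact_time(beta, h0=100e3, dt=1.0):
    """Toy 1-D decay: sink rate grows as the exponential atmosphere
    thickens, scaled by a ballistic-coefficient-like parameter beta.
    (A real entry simulation integrates full 6-DOF dynamics.)"""
    h, t = h0, 0.0
    while h > 0:
        sink = 2000.0 * (100.0 / beta) * math.exp(-h / 20000.0)
        h -= sink * dt
        t += dt
    return t

# Monte Carlo over the uncertain ballistic coefficient
betas = rng.lognormal(mean=math.log(100.0), sigma=0.2, size=500)
times = np.array([impact_time(b) for b in betas])
lo, hi = np.percentile(times, [2.5, 97.5])
print(f"median impact t = {np.median(times):.0f} s, "
      f"95% window = ({lo:.0f}, {hi:.0f}) s")
```

The resulting percentile window is the impact-time distribution that would be validated against observed impacts; the DPP alternative would instead try to evolve the state PDF itself.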

  14. Design Evolution of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Peters, Carlton; Rodriguez, Juan; McDonald, Carson; Content, David A.; Jackson, Cliff

    2015-01-01

    The design of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) continues to evolve as each design cycle is analyzed. In 2012, two Hubble sized (2.4 m diameter) telescopes were donated to NASA from elsewhere in the Federal Government. NASA began investigating potential uses for these telescopes and identified WFIRST as a mission to benefit from these assets. With an updated, deeper, and sharper field of view than previous design iterations with a smaller telescope, the optical designs of the WFIRST instruments were updated and the mechanical and thermal designs evolved around the new optical layout. Beginning with Design Cycle 3, significant analysis efforts yielded a design and model that could be evaluated for Structural-Thermal-Optical-Performance (STOP) purposes for the Wide Field Imager (WFI) and provided the basis for evaluating the high level observatory requirements. Development of the Cycle 3 thermal model provided some valuable analysis lessons learned and established best practices for future design cycles. However, the Cycle 3 design did include some major liens and evolving requirements which were addressed in the Cycle 4 Design. Some of the design changes are driven by requirements changes, while others are optimizations or solutions to liens from previous cycles. Again in Cycle 4, STOP analysis was performed and further insights into the overall design were gained leading to the Cycle 5 design effort currently underway. This paper seeks to capture the thermal design evolution, with focus on major design drivers, key decisions and their rationale, and lessons learned as the design evolved.

  16. A New Chicken Genome Assembly Provides Insight into Avian Genome Structure.

    PubMed

    Warren, Wesley C; Hillier, LaDeana W; Tomlinson, Chad; Minx, Patrick; Kremitzki, Milinn; Graves, Tina; Markovic, Chris; Bouk, Nathan; Pruitt, Kim D; Thibaud-Nissen, Francoise; Schneider, Valerie; Mansour, Tamer A; Brown, C Titus; Zimin, Aleksey; Hawken, Rachel; Abrahamsen, Mitch; Pyrkosz, Alexis B; Morisson, Mireille; Fillon, Valerie; Vignal, Alain; Chow, William; Howe, Kerstin; Fulton, Janet E; Miller, Marcia M; Lovell, Peter; Mello, Claudio V; Wirthlin, Morgan; Mason, Andrew S; Kuo, Richard; Burt, David W; Dodgson, Jerry B; Cheng, Hans H

    2017-01-05

    The importance of the Gallus gallus (chicken) as a model organism and agricultural animal merits a continuation of sequence assembly improvement efforts. We present a new version of the chicken genome assembly (Gallus_gallus-5.0; GCA_000002315.3), built from combined long single molecule sequencing technology, finished BACs, and improved physical maps. In overall assembled bases, we see a gain of 183 Mb, including 16.4 Mb in placed chromosomes with a corresponding gain in the percentage of intact repeat elements characterized. Of the 1.21 Gb genome, we include three previously missing autosomes, GGA30, 31, and 33, and improve sequence contig length 10-fold over the previous Gallus_gallus-4.0. Despite the significant base representation improvements made, 138 Mb of sequence is not yet located to chromosomes. When annotated for gene content, Gallus_gallus-5.0 shows an increase of 4679 annotated genes (2768 noncoding and 1911 protein-coding) over those in Gallus_gallus-4.0. We also revisited the question of what genes are missing in the avian lineage, as assessed by the highest quality avian genome assembly to date, and found that a large fraction of the original set of missing genes are still absent in sequenced bird species. Finally, our new data support a detailed map of MHC-B, encompassing two segments: one with a highly stable gene copy number and another in which the gene copy number is highly variable. The chicken model has been a critical resource for many other fields of study, and this new reference assembly will substantially further these efforts. Copyright © 2017 Warren et al.

  17. Expanded research and development of an enhanced rear signaling system for commercial motor vehicles.

    DOT National Transportation Integrated Search

    2014-04-01

The purpose of the current study was to further develop and refine the prototype Enhanced Rear Signaling (ERS) system that was developed during the previous Phase III effort. Expanded development efforts for the ERS system included modification o...

  18. The Dispassionate Discourse of Children's Adjustment to Divorce.

    ERIC Educational Resources Information Center

    Allen, Katherine R.

    1993-01-01

    Responds to previous article by Amato on children's adjustment to divorce. Applauds Amato's efforts, but sees efforts hindered by insufficient reporting and inconsistent use of empirical literature, unsupported speculations about inconsistencies found in some hypotheses, and unacknowledged bias toward traditional family structure. Discusses many…

  19. The sugar-sweetened beverage wars: public health and the role of the beverage industry

    PubMed Central

    Welsh, Jean A.; Lundeen, Elizabeth A.; Stein, Aryeh D.

    2015-01-01

Purpose of review: To discuss the current data on sugar-sweetened beverage (SSB) consumption trends, evidence of the health impact, and the role of industry in efforts to reduce the consumption. Recent findings: Previously rising SSB consumption rates have declined recently, but continue to contribute added sugars beyond the limit advised by the American Heart Association. A recent meta-analysis concluded that SSBs likely increase body weight, and recent long-term studies support the previous findings of increased risk of diabetes, dyslipidemia, and hypertension. Beverage companies have played an active role in some SSB reduction efforts by reducing the sale of SSBs in schools, limiting television advertising to children, and increasing the availability of smaller portion-size options. Industry has opposed efforts to restrict the availability of large portion sizes and implement an excise tax. Current industry efforts include the promotion of alternative beverages perceived to be healthier as well as SSBs through Internet and social media. Summary: Continuing high SSB consumption and associated health risks highlight the need for further public health action. The beverage industry has supported some efforts to reduce the consumption of full-sugar beverages, but has actively opposed others. The impact of industry efforts to promote beverage alternatives perceived as healthier is unknown. PMID:23974767

  20. The sugar-sweetened beverage wars: public health and the role of the beverage industry.

    PubMed

    Welsh, Jean A; Lundeen, Elizabeth A; Stein, Aryeh D

    2013-10-01

    To discuss the current data on sugar-sweetened beverage (SSB) consumption trends, evidence of the health impact, and the role of industry in efforts to reduce the consumption. Previously rising SSB consumption rates have declined recently, but continue to contribute added sugars beyond the limit advised by the American Heart Association. A recent meta-analysis concluded that SSBs likely increase body weight and recent long-term studies support the previous findings of increased risk of diabetes, dyslipidemia, and hypertension. Beverage companies have played an active role in some SSB reduction efforts by reducing the sale of SSBs in schools, limiting television advertising to children, and increasing the availability of smaller portion-size options. Industry has opposed efforts to restrict the availability of large portion sizes and implement an excise tax. Current industry efforts include the promotion of alternative beverages perceived to be healthier as well as SSBs through Internet and social media. Continuing high SSB consumption and associated health risks highlight the need for further public health action. The beverage industry has supported some efforts to reduce the consumption of full sugar beverages, but has actively opposed others. The impact of industry efforts to promote beverage alternatives perceived as healthier is unknown.

  1. Dynamic Modeling, Controls, and Testing for Electrified Aircraft

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph; Stalcup, Erik

    2017-01-01

Electrified aircraft have the potential to provide significant benefits for efficiency and emissions reductions. To assess these potential benefits, modeling tools are needed to provide rapid evaluation of diverse concepts and to ensure safe operability and peak performance over the mission. The modeling challenge for these vehicles is the ability to show significant benefits over the current highly refined aircraft systems. The STARC-ABL (single-aisle turbo-electric aircraft with an aft boundary-layer propulsor) is a new test proposal that builds upon previous N3-X team hybrid designs. This presentation describes the STARC-ABL concept, the NASA Electric Aircraft Testbed (NEAT), which will allow testing of the STARC-ABL powertrain, and the related modeling and simulation efforts to date. Modeling and simulation includes a turbofan simulation, the Numerical Propulsion System Simulation (NPSS), which has been integrated with NEAT, and a power systems and control model for predicting testbed performance and evaluating control schemes. Model predictions provide good comparisons with testbed data for an NPSS-integrated test of the single-string configuration of NEAT.

  2. Adaptive effort investment in cognitive and physical tasks: a neurocomputational model

    PubMed Central

    Verguts, Tom; Vassena, Eliana; Silvetti, Massimo

    2015-01-01

    Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex (ACC) and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species. PMID:25805978
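The core trade-off the abstract describes — learning when effort pays off given reward, cost, and task difficulty — can be illustrated with a minimal reinforcement-learning sketch. This is a hypothetical two-action toy task with made-up parameters, not the paper's ACC/basal-ganglia neurocomputational model:

```python
import random

def learn_effort_policy(reward=1.0, cost=0.4, difficulty=0.5,
                        episodes=5000, alpha=0.1, seed=0):
    """Learn action values for 'effort' vs 'rest' on a single task.

    Illustrative assumptions: exerting effort succeeds with probability
    (1 - difficulty/2) but incurs a cost; resting succeeds with
    probability (1 - difficulty) at no cost.  Values are learned with a
    simple incremental (running-average) RL update.
    """
    rng = random.Random(seed)
    q = {"effort": 0.0, "rest": 0.0}
    p_success = {"effort": 1.0 - difficulty / 2, "rest": 1.0 - difficulty}
    for _ in range(episodes):
        # epsilon-greedy action selection (10% exploration)
        a = rng.choice(list(q)) if rng.random() < 0.1 else max(q, key=q.get)
        r = reward if rng.random() < p_success[a] else 0.0
        r -= cost if a == "effort" else 0.0
        q[a] += alpha * (r - q[a])  # incremental value update
    return q
```

With a hard task and a small effort cost, the learned value of exerting effort exceeds that of resting; when effort is expensive relative to its benefit, the ordering flips, mirroring the adaptive effort-investment idea.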

  3. Model Checking JAVA Programs Using Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Pressburger, Thomas

    2000-01-01

This paper describes a translator called JAVA PATHFINDER from JAVA to PROMELA, the "programming language" of the SPIN model checker. The purpose is to establish a framework for verification and debugging of JAVA programs based on model checking. This work should be seen as part of a broader attempt to make formal methods applicable "in the loop" of programming within NASA's areas such as space, aviation, and robotics. Our main goal is to create automated formal methods such that programmers themselves can apply these in their daily work (in the loop) without the need for specialists to manually reformulate a program into a different notation in order to analyze the program. This work is a continuation of an effort to formally verify, using SPIN, a multi-threaded operating system programmed in Lisp for the Deep-Space 1 spacecraft, and of previous work in applying existing model checkers and theorem provers to real applications.

  4. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
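The surrogate idea — train a small network once, then evaluate it cheaply in place of an expensive solver — can be sketched as follows. This is a toy one-hidden-layer network fit to a stand-in 1-D function, not the paper's architecture, training setup, or viscoelastic physics:

```python
import numpy as np

def train_surrogate(f, n_train=200, hidden=32, steps=2000, lr=0.05, seed=0):
    """Fit a one-hidden-layer tanh network to a scalar function f on [0, 1].

    f stands in for an expensive simulation; once trained, the returned
    predictor is cheap to evaluate in its place.  Also returns the
    training-loss history.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(n_train, 1))
    y = f(x)
    W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)          # hidden activations
        pred = h @ W2 + b2                # network output
        err = pred - y
        losses.append(float(np.mean(err ** 2)))
        # backpropagation of the mean-squared-error loss
        g_pred = 2.0 * err / n_train
        gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
        g_h = (g_pred @ W2.T) * (1.0 - h ** 2)   # tanh derivative
        gW1 = x.T @ g_h; gb1 = g_h.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    predict = lambda xs: np.tanh(xs @ W1 + b1) @ W2 + b2
    return predict, losses
```

The training loss should fall substantially as the network absorbs the target function; real surrogate work then validates the trained network before it replaces the solver.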

  5. A model for assessing the risk of human trafficking on a local level

    NASA Astrophysics Data System (ADS)

    Colegrove, Amanda

Human trafficking is a human rights violation that is difficult to quantify. Models for estimating the number of victims of trafficking presented by previous researchers depend on inconsistent, poor-quality data. As an intermediate step to support current efforts by nonprofits to combat human trafficking, this project presents a model that is not dependent on quantitative data specific to human trafficking, but rather profiles the risk of human trafficking at the local level through causative factors. Businesses identified in the literature were weighted based on the presence of characteristics that increase the likelihood of trafficking in persons. The mean risk was calculated by census tract to reveal the multiplicity of risk levels in both rural and urban settings. Results indicate that labor trafficking may be a more diffuse problem in Missouri than sex trafficking. Additionally, spatial patterns of risk remained largely the same regardless of adjustments made to the model.
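The weighting-and-averaging step described can be sketched in a few lines. The indicator names and weights below are invented for illustration; the study's actual risk factors and weights differ:

```python
def tract_risk(businesses):
    """Mean trafficking-risk score per census tract.

    Each business's score is the sum of the weights of the risk
    characteristics it exhibits; tract risk is the mean score over
    businesses located in that tract.
    """
    # Hypothetical indicator weights, not the study's actual factors.
    weights = {"cash_only": 2, "late_hours": 1, "transient_labor": 3}
    totals, counts = {}, {}
    for b in businesses:
        score = sum(weights[c] for c in b["characteristics"] if c in weights)
        t = b["tract"]
        totals[t] = totals.get(t, 0) + score
        counts[t] = counts.get(t, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}
```

For example, a tract containing one high-risk and one zero-risk business averages to an intermediate score, which is what lets the map distinguish diffuse rural risk from concentrated urban risk.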

  6. A comprehensive method for preliminary design optimization of axial gas turbine stages. II - Code verification

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1983-01-01

    The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.

  7. Generational forecasting in academic medicine: a unique method of planning for success in the next two decades.

    PubMed

    Howell, Lydia Pleotis; Joad, Jesse P; Callahan, Edward; Servis, Gregg; Bonham, Ann C

    2009-08-01

    Multigenerational teams are essential to the missions of academic health centers (AHCs). Generational forecasting using Strauss and Howe's predictive model, "the generational diagonal," can be useful for anticipating and addressing issues so that each generation is effective. Forecasts are based on the observation that cyclical historical events are experienced by all generations, but the response of each generation differs according to its phase of life and previous defining experiences. This article relates Strauss and Howe's generational forecasts to AHCs. Predicted issues such as work-life balance, indebtedness, and succession planning have existed previously, but they now have different causes or consequences because of the unique experiences and life stages of current generations. Efforts to address these issues at the authors' AHC include a work-life balance workgroup, expanded leave, and intramural grants.

  8. Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.

    2015-01-01

Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise™ grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall-function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the far-field noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.

  9. Physical, Hydraulic, and Transport Properties of Sediments and Engineered Materials Associated with Hanford Immobilized Low-Activity Waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rockhold, Mark L.; Zhang, Z. F.; Meyer, Philip D.

    2015-02-28

Current plans for treatment and disposal of immobilized low-activity waste (ILAW) from Hanford’s underground waste storage tanks include vitrification and storage of the glass waste form in a near-surface disposal facility. This Integrated Disposal Facility (IDF) is located in the 200 East Area of the Hanford Central Plateau. Performance assessment (PA) of the IDF requires numerical modeling of subsurface flow and reactive transport processes over very long periods (thousands of years). The models used to predict facility performance require parameters describing various physical, hydraulic, and transport properties. This report provides updated estimates of physical, hydraulic, and transport properties and parameters for both near- and far-field materials, intended for use in future IDF PA modeling efforts. Previous work on physical and hydraulic property characterization for earlier IDF PA analyses is reviewed and summarized. For near-field materials, portions of this document and parameter estimates are taken from an earlier data package. For far-field materials, a critical review is provided of methodologies used in previous data packages. Alternative methods are described and associated parameters are provided.

  10. Continued Development and Improvement of Pneumatic Heavy Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert J. Englar

    2005-07-15

The objective of this applied research effort led by Georgia Tech Research Institute is the application of pneumatic aerodynamic technology previously developed and patented by us to the design of an appropriate Heavy Vehicle (HV) tractor-trailer configuration, and experimental confirmation of this pneumatic configuration's improved aerodynamic characteristics. In Phases I to IV of our previous DOE program (Reference 1), GTRI developed, patented, wind-tunnel tested, and road-tested blown aerodynamic devices for Pneumatic Heavy Vehicles (PHVs) and Pneumatic Sports Utility Vehicles (PSUVs). To further advance these pneumatic technologies towards HV and SUV applications, additional Phase V tasks were included in the first year of a continuing DOE program (Reference 2). Based on the results of the Phase IV full-scale test programs, these Phase V tasks extended the application of pneumatic aerodynamics to include: further economy and performance improvements; increased aerodynamic stability and control; and safety of operation of Pneumatic HVs. Continued development of a Pneumatic SUV was also conducted during the Phase V program. Phase V was completed in July 2003; its positive results towards development and confirmation of this pneumatic technology are reported in References 3 and 4. The current Phase VI of this program was incrementally funded by DOE in order to continue this technology development towards a second fuel economy test on the Pneumatic Heavy Vehicle. The objectives of this current Phase VI research and development effort (Ref. 5) fall into two categories: (1) develop improved pneumatic aerodynamic technology and configurations on smaller-scale models of the advanced Pneumatic Heavy Vehicle (PHV); and based on these findings, (2) redesign, modify, and re-test the modified full-scale PHV test vehicle.
This second objective includes conduct of an on-road preliminary road test of this configuration to prepare it for a second series of SAE Type-U fuel economy evaluations, as described in Ref. 5. Both objectives are based on the pneumatic technology already developed and confirmed for DOE OHVT/OAAT in Phases I-V. This new Phase VI effort was initiated by contract amendment to the Phase V effort using carryover FY02 funds. This work was conducted under a new and distinct project number, GTRI Project A-6935, separate from the Phase I-IV program. However, the two programs are closely integrated, and thus Phase VI continues the previous program and goals.

  11. Evaluation of model-based methods in estimating respiratory mechanics in the presence of variable patient effort.

    PubMed

    Redmond, Daniel P; Chiew, Yeong Shiong; Major, Vincent; Chase, J Geoffrey

    2016-09-23

Monitoring of respiratory mechanics is required for guiding patient-specific mechanical ventilation settings in critical care. Many models of respiratory mechanics perform poorly in the presence of variable patient effort. Typical modelling approaches either attempt to mitigate the effect of the patient effort on the airway pressure waveforms, or attempt to capture the size and shape of the patient effort. This work analyses a range of methods to identify respiratory mechanics in volume-controlled ventilation modes when there is patient effort. The models are compared using four datasets, each with a sample of 30 breaths before, and 2-3 minutes after, sedation has been administered. The sedation will reduce patient effort, but the underlying pulmonary mechanical properties are unlikely to change during this short time. Model-identified parameters from breathing cycles with patient effort are compared to breathing cycles that do not have patient effort. All models have advantages and disadvantages, so model selection may be specific to the respiratory mechanics application. However, in general, the combined method of iterative interpolative pressure reconstruction and stacking multiple consecutive breaths together has the best performance over the datasets. The variability of identified elastance when there is patient effort is the lowest with this method, and there is little systematic offset in identified mechanics when sedation is administered. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
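For context, the baseline single-compartment model that such identification methods build on can be fit by ordinary least squares when a breath is passive. This is a generic sketch of that baseline, not the paper's iterative pressure-reconstruction or breath-stacking methods:

```python
import numpy as np

def identify_mechanics(pressure, flow, volume):
    """Identify elastance E and resistance R from airway waveforms.

    Single-compartment model: P(t) = E*V(t) + R*Q(t) + P0, solved as a
    linear least-squares problem.  Valid only for passive breaths --
    unmodelled patient effort biases the identified E and R, which is
    the problem the paper's methods address.
    """
    A = np.column_stack([volume, flow, np.ones_like(volume)])
    (E, R, P0), *_ = np.linalg.lstsq(A, pressure, rcond=None)
    return E, R, P0
```

On synthetic passive-breath data the true elastance, resistance, and offset pressure are recovered essentially exactly, which is why passive (post-sedation) breaths serve as the reference in the comparison above.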

  12. Gemini Planet Imager Spectroscopy of the HR 8799 Planets c and d

    DOE PAGES

    Ingraham, Patrick; Marley, Mark S.; Saumon, Didier; ...

    2014-09-30

During the first-light run of the Gemini Planet Imager we obtained K-band spectra of exoplanets HR 8799 c and d. Analysis of the spectra indicates that planet d may be warmer than planet c. Comparisons to recent patchy-cloud models and previously obtained observations over multiple wavelengths confirm that thick clouds combined with horizontal variation in the cloud cover generally reproduce the planets’ spectral energy distributions. When combined with the 3-4 μm photometric data points, the observations provide strong constraints on the atmospheric methane content for both planets. Lastly, the data also provide further evidence that future modeling efforts must include cloud opacity, possibly including cloud holes, disequilibrium chemistry, and super-solar metallicity.

  13. Probing sunspots with two-skip time-distance helioseismology

    NASA Astrophysics Data System (ADS)

    Duvall, Thomas L., Jr.; Cally, Paul S.; Przybylski, Damien; Nagashima, Kaori; Gizon, Laurent

    2018-06-01

    Context. Previous helioseismology of sunspots has been sensitive to both the structural and magnetic aspects of sunspot structure. Aims: We aim to develop a technique that is insensitive to the magnetic component so the two aspects can be more readily separated. Methods: We study waves reflected almost vertically from the underside of a sunspot. Time-distance helioseismology was used to measure travel times for the waves. Ray theory and a detailed sunspot model were used to calculate travel times for comparison. Results: It is shown that these large distance waves are insensitive to the magnetic field in the sunspot. The largest travel time differences for any solar phenomena are observed. Conclusions: With sufficient modeling effort, these should lead to better understanding of sunspot structure.

  14. Improving the Calibration of the SN Ia Anchor Datasets with a Bayesian Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Rubin, David

    2018-01-01

    Inter-survey calibration remains one of the largest systematic uncertainties in SN Ia cosmology today. Ideally, each survey would measure their system throughputs and observe well characterized spectrophotometric standard stars, but many important surveys have not done so. For these surveys, we calibrate using tertiary survey stars tied to SDSS and Pan-STARRS. We improve on previous efforts by taking the spatially variable response of each telescope/camera into account, and using improved color transformations in the surveys’ natural instrumental photometric system. We use a global hierarchical model of the data, automatically providing a covariance matrix of magnitude offsets and bandpass shifts which reduces the systematic uncertainty in inter-survey calibration, thereby providing better cosmological constraints.

  15. The effort-reward imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women.

    PubMed

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-09-17

    We examined the influence of work-related effort-reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort-Reward Imbalance (ERI) model, on the hypothalamic-pituitary-adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort-reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort-reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort-reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported.
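The two study indices are straightforward to compute. A sketch follows; the item counts assume the common 6-effort/11-reward ERI questionnaire version (an assumption, not stated in the abstract), and the AUC is the ordinary trapezoid rule over the 9:00/12:00/15:00 samples:

```python
def effort_reward_ratio(effort, reward, n_effort_items=6, n_reward_items=11):
    """Siegrist effort-reward ratio with the usual item-count correction.

    Values > 1 indicate effort exceeding reward.  Item counts here are
    the common 6-effort/11-reward questionnaire version (an assumption).
    """
    c = n_effort_items / n_reward_items
    return effort / (reward * c)

def auc_ground(levels, hours):
    """Area under the curve with respect to ground (trapezoid rule) for
    repeated salivary hormone measurements taken at the given hours."""
    return sum((levels[i] + levels[i + 1]) / 2 * (hours[i + 1] - hours[i])
               for i in range(len(levels) - 1))
```

For example, an effort score of 12 against a reward score of 44 gives a corrected ratio of 0.5 (reward outweighing effort), and a declining cortisol profile across the three daytime samples yields a single AUC summary per participant.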

  16. Fertilizer Induced Nitrate Pollution in RCW: Calibration of the DNDC Model

    NASA Astrophysics Data System (ADS)

    El Hailouch, E.; Hornberger, G.; Crane, J. W.

    2012-12-01

Fertilizer is widely used among urban and suburban households due to the socially driven attention of homeowners to lawn appearance. With its high nitrogen content, fertilizer considerably impacts the environment through the emission of the highly potent greenhouse gas nitrous oxide and the leaching of nitrate. Nitrate leaching is particularly important because fertilizer-sourced nitrate that leaches into the soil pollutes groundwater. In an effort to model the effect of fertilizer application on the environment, the geochemical DeNitrification-DeComposition (DNDC) model was previously developed to quantitatively measure the effects of fertilizer use. The purpose of this study is to use this model more effectively on a large scale through a measurement-based calibration. For this reason, leaching was measured and studied at 12 sites in the Richland Creek Watershed (RCW). Information about the fertilization and irrigation regimes of these sites was collected, along with lysimeter readings that gave nitrate fluxes in the soil. A study of the amount and variation in nitrate leaching with respect to the varying geographical locations, time of the year, and fertilization and irrigation regimes has led to a better understanding of the driving forces behind nitrate leaching. Quantifying the influence of each of these parameters allows for a more accurate calibration of the model, thus permitting use that extends beyond the RCW. Measurement of nitrate leaching on a statewide or nationwide level will in turn help guide efforts in the reduction of groundwater pollution caused by fertilizer.

  17. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells

    PubMed Central

    Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W.; Kulkarni, Jayant; Litke, Alan M.; Chichilnisky, E. J.; Simoncelli, Eero; Paninski, Liam

    2013-01-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations. PMID:22203465
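The generative structure described — spike rate given by an exponentiated sum of filtered stimulus, filtered spike history, and a shared (common) noise source — can be sketched for a single model cell. The kernels and parameters below are illustrative placeholders, not the filters fitted to the parasol-cell recordings:

```python
import math
import random

def simulate_glm(stim, k, h, common_noise, w_noise, bias, dt=0.001, seed=0):
    """Simulate one cell of a generalized linear model (GLM) spike train.

    Conditional intensity: rate(t) = exp(stimulus filter + spike-history
    filter + weighted shared noise + bias); spikes are drawn as
    Bernoulli(rate * dt) in each time bin.  Shared noise injected into
    several cells produces synchrony without direct coupling.
    """
    rng = random.Random(seed)
    spikes = []
    for t in range(len(stim)):
        drive = bias + w_noise * common_noise[t]
        # causal stimulus filter k applied to past stimulus values
        drive += sum(k[i] * stim[t - i] for i in range(min(len(k), t + 1)))
        # spike-history filter h applied to the cell's own past spikes
        drive += sum(h[i] * (spikes[t - 1 - i] if t - 1 - i >= 0 else 0)
                     for i in range(len(h)))
        rate = math.exp(drive)          # conditional intensity (Hz)
        spikes.append(1 if rng.random() < rate * dt else 0)
    return spikes
```

Feeding the same `common_noise` trace to two such cells (with no coupling terms) is the mechanism the paper uses to reproduce rapid synchronized firing.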

  18. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers are considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
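The validation step — checking a cheap surrogate against the expensive simulation before trusting it in optimization — can be sketched by simple random sampling. This is a hypothetical sampling check, not the paper's Bayesian-validated framework, which additionally attaches probabilistic bounds to the result:

```python
import random

def validate_surrogate(surrogate, truth, tol, n_samples=200, seed=0):
    """Monte-Carlo validation of a surrogate on [0, 1].

    Estimates the fraction of randomly sampled inputs at which the
    surrogate agrees with the expensive 'truth' function to within tol.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.random()
        if abs(surrogate(x) - truth(x)) <= tol:
            hits += 1
    return hits / n_samples
```

A surrogate passing at the tolerance relevant to the design study can then stand in for the simulation during optimization; one that fails is refit before use.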

  19. Ras Dimer Formation as a New Signaling Mechanism and Potential Cancer Therapeutic Target

    PubMed Central

    Chen, Mo; Peters, Alec; Huang, Tao; Nan, Xiaolin

    2016-01-01

    The K-, N-, and HRas small GTPases are key regulators of cell physiology and are frequently mutated in human cancers. Despite intensive research, previous efforts to target hyperactive Ras based on known mechanisms of Ras signaling have met with little success. Several studies have provided compelling evidence for the existence and biological relevance of Ras dimers, establishing a new mechanism for regulating Ras activity in cells in addition to GTP loading and membrane localization. Existing data are also beginning to reveal how Ras proteins dimerize on the membrane. We propose a dimer model to describe Ras-mediated effector activation, which contrasts with existing models of Ras signaling as a monomer or as a 5-8 membered multimer. We also discuss potential implications of this model in both basic and translational Ras biology. PMID:26423697

  20. Development of an inter-atomic potential for the Pd-H binary system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Jonathan A.; Hoyt, Jeffrey John; Leonard, Francois

    2007-09-01

    Ongoing research at Sandia National Laboratories has been in the area of developing models and simulation methods that can be used to uncover and illuminate the material defects created during He bubble growth in aging bulk metal tritides. Previous efforts have used molecular dynamics calculations to examine the physical mechanisms by which growing He bubbles in a Pd metal lattice create material defects. However, these efforts focused only on the growth of He bubbles in pure Pd and not on bubble growth in the material of interest, palladium tritide (PdT), or its non-radioactive isotope palladium hydride (PdH). The reason for this is that existing inter-atomic potentials do not adequately describe the thermodynamics of the Pd-H system, which includes a miscibility gap that leads to phase separation of the dilute (alpha) and concentrated (beta) alloys of H in Pd at room temperature. This document will report the results of research to either find or develop inter-atomic potentials for the Pd-H and Pd-T systems, including our efforts to use experimental data and density functional theory calculations to create an inter-atomic potential for this unique metal alloy system.

  1. Monkeys reject unequal pay.

    PubMed

    Brosnan, Sarah F; De Waal, Frans B M

    2003-09-18

    During the evolution of cooperation it may have become critical for individuals to compare their own efforts and pay-offs with those of others. Negative reactions may occur when expectations are violated. One theory proposes that aversion to inequity can explain human cooperation within the bounds of the rational choice model, and may in fact be more inclusive than previous explanations. Although there exists substantial cultural variation in its particulars, this 'sense of fairness' is probably a human universal that has been shown to prevail in a wide variety of circumstances. However, we are not the only cooperative animals, hence inequity aversion may not be uniquely human. Many highly cooperative nonhuman species seem guided by a set of expectations about the outcome of cooperation and the division of resources. Here we demonstrate that a nonhuman primate, the brown capuchin monkey (Cebus apella), responds negatively to unequal reward distribution in exchanges with a human experimenter. Monkeys refused to participate if they witnessed a conspecific obtain a more attractive reward for equal effort, an effect amplified if the partner received such a reward without any effort at all. These reactions support an early evolutionary origin of inequity aversion.

  2. An Account of Women's Progress in Engineering: a Social Cognitive Perspective

    NASA Astrophysics Data System (ADS)

    Vogt, Christina

    Traditionally, women were not welcome in higher education, especially in male-dominated fields. Undoubtedly, women have dramatically increased their enrollments in many once male-only fields, such as law, medicine, and several of the sciences; nevertheless, engineering remains a field where women continue to be underrepresented. This has often been attributed to social barriers in engineering classrooms. However, a new turn of events has been reported: Young women entering engineering may receive higher grades and have a greater tendency to remain than men. To examine what has recently changed, the author applied Bandura's triadic model of reciprocity between environment, self, and behavior. The measured variables included academic integration or discrimination, self-measures of academic self-confidence, engineering self-efficacy, and behaviors taken to self-regulate learning: critical thinking, effort, peer learning, and help seeking. The data revealed that women apply slightly more effort and have slightly less self-efficacy than men. Their academic confidence is nearly equal in almost all areas. Most significantly, many previous gender biases appear diminished, and those that do exist are slight. However, it is recommended that continued efforts be undertaken to attract and retain women in engineering programs.

  3. Oklahoma experiences largest earthquake during ongoing regional wastewater injection hazard mitigation efforts

    USGS Publications Warehouse

    Yeck, William; Hayes, Gavin; McNamara, Daniel E.; Rubinstein, Justin L.; Barnhart, William; Earle, Paul; Benz, Harley M.

    2017-01-01

    The 3 September 2016, Mw 5.8 Pawnee earthquake was the largest recorded earthquake in the state of Oklahoma. Seismic and geodetic observations of the Pawnee sequence, including precise hypocenter locations and moment tensor modeling, show that the Pawnee earthquake occurred on a previously unknown left-lateral strike-slip basement fault that intersects the mapped right-lateral Labette fault zone. The Pawnee earthquake is part of an unprecedented increase in the earthquake rate in Oklahoma that is largely considered the result of the deep injection of waste fluids from oil and gas production. If this is indeed the case for the M5.8 Pawnee earthquake, then this would be the largest event yet induced by fluid injection. Since 2015, Oklahoma has undergone wide-scale mitigation efforts primarily aimed at reducing injection volumes. Thus far in 2016, the rate of M3 and greater earthquakes has decreased as compared to 2015, while the cumulative moment—or energy released from earthquakes—has increased. This highlights the difficulty in earthquake hazard mitigation efforts given the poorly understood long-term diffusive effects of wastewater injection and their connection to seismicity.

  4. Moisture sensitivity of hot mix asphalt (HMA) mixtures in Nebraska : phase II.

    DOT National Transportation Integrated Search

    2009-12-01

    As a continuation of the previous NDOR research project (P564) on moisture damage, this report presents outcomes from this project integrated with those of the previous project. Performance changes and fundamental material characteristics associ...

  5. Modeling and control for closed environment plant production systems

    NASA Technical Reports Server (NTRS)

    Fleisher, David H.; Ting, K. C.; Janes, H. W. (Principal Investigator)

    2002-01-01

    A computer program was developed to study multiple crop production and control in controlled environment plant production systems. The program simulates crop growth and development under nominal and off-nominal environments. Time-series crop models for wheat (Triticum aestivum), soybean (Glycine max), and white potato (Solanum tuberosum) are integrated with a model-based predictive controller. The controller evaluates and compensates for effects of environmental disturbances on crop production scheduling. The crop models consist of a set of nonlinear polynomial equations, six for each crop, developed using multivariate polynomial regression (MPR). Simulated data from DSSAT crop models, previously modified for crop production in controlled environments with hydroponics under elevated atmospheric carbon dioxide concentration, were used for the MPR fitting. The model-based predictive controller adjusts light intensity, air temperature, and carbon dioxide concentration set points in response to environmental perturbations. Control signals are determined from minimization of a cost function, which is based on the weighted control effort and squared-error between the system response and desired reference signal.
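    The controller's cost function (weighted control effort plus squared tracking error) admits a closed-form one-step minimizer for a linear plant. The sketch below uses an invented scalar growth model, not the paper's DSSAT-derived crop equations; all parameter values are hypothetical.

    ```python
    import numpy as np

    # Toy scalar plant: next state = a*x + b*u, with u a set-point adjustment
    # (e.g. light intensity). Parameters are invented for illustration.
    a, b = 0.95, 0.4
    r_weight = 0.1                 # weight on control effort in the cost

    def mpc_step(x, reference):
        """Minimize J(u) = (a*x + b*u - reference)**2 + r_weight*u**2.
        Setting dJ/du = 0 gives u = b*(reference - a*x) / (b**2 + r_weight)."""
        return b * (reference - a * x) / (b * b + r_weight)

    x, reference = 0.0, 1.0
    trajectory = []
    for t in range(50):
        u = mpc_step(x, reference)
        disturbance = 0.01 * np.sin(t)     # small environmental perturbation
        x = a * x + b * u + disturbance    # plant response under control
        trajectory.append(x)
    ```

    Because control effort is penalized, the closed loop settles slightly below the reference; raising `r_weight` trades tracking accuracy for gentler actuation, which is the compromise the cost function encodes.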

  6. Predicted deep-sea coral habitat suitability for the U.S. West coast.

    PubMed

    Guinotte, John M; Davies, Andrew J

    2014-01-01

    Regional scale habitat suitability models provide finer scale resolution and more focused predictions of where organisms may occur. Previous modelling approaches have focused primarily on local and/or global scales, while regional scale models have been relatively few. In this study, regional scale predictive habitat models are presented for deep-sea corals for the U.S. West Coast (California, Oregon and Washington). Model results are intended to aid in future research or mapping efforts and to assess potential coral habitat suitability both within and outside existing bottom trawl closures (i.e. Essential Fish Habitat (EFH)) and identify suitable habitat within U.S. National Marine Sanctuaries (NMS). Deep-sea coral habitat suitability was modelled at 500 m×500 m spatial resolution using a range of physical, chemical and environmental variables known or thought to influence the distribution of deep-sea corals. Using a spatial partitioning cross-validation approach, maximum entropy models identified slope, temperature, salinity and depth as important predictors for most deep-sea coral taxa. Large areas of highly suitable deep-sea coral habitat were predicted both within and outside of existing bottom trawl closures and NMS boundaries. Predicted habitat suitability at regional scales is not currently able to identify coral areas with pinpoint accuracy and probably overpredicts actual coral distribution due to model limitations and unincorporated variables (i.e. data on distribution of hard substrate) that are known to limit their distribution. Predicted habitat results should be used in conjunction with multibeam bathymetry, geological mapping and other tools to guide future research efforts to areas with the highest probability of harboring deep-sea corals. Field validation of predicted habitat is needed to quantify model accuracy, particularly in areas that have not been sampled.
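    The spatial-partitioning cross-validation mentioned in the abstract can be sketched generically: folds are formed from contiguous geographic blocks rather than random draws, so spatially autocorrelated points do not leak between training and test sets. The coordinates and fold count below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 400
    lon = rng.uniform(-126, -120, size=n)    # hypothetical sample longitudes
    lat = rng.uniform(32, 48, size=n)        # hypothetical sample latitudes

    # Spatially partitioned folds: split by latitude bands so each test fold
    # is a contiguous geographic block, not a random scatter.
    n_folds = 4
    edges = np.quantile(lat, np.linspace(0, 1, n_folds + 1))
    fold = np.clip(np.searchsorted(edges, lat, side="right") - 1, 0, n_folds - 1)

    for f in range(n_folds):
        train_idx = np.where(fold != f)[0]
        test_idx = np.where(fold == f)[0]
        # fit the habitat model on train_idx and evaluate on test_idx here
    ```

    Random k-fold splits would place near-neighbor points in both sets and flatter the model's apparent skill; blocked folds give a more honest estimate of transferability to unsampled areas.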

  7. Predicted Deep-Sea Coral Habitat Suitability for the U.S. West Coast

    PubMed Central

    Guinotte, John M.; Davies, Andrew J.

    2014-01-01

    Regional scale habitat suitability models provide finer scale resolution and more focused predictions of where organisms may occur. Previous modelling approaches have focused primarily on local and/or global scales, while regional scale models have been relatively few. In this study, regional scale predictive habitat models are presented for deep-sea corals for the U.S. West Coast (California, Oregon and Washington). Model results are intended to aid in future research or mapping efforts and to assess potential coral habitat suitability both within and outside existing bottom trawl closures (i.e. Essential Fish Habitat (EFH)) and identify suitable habitat within U.S. National Marine Sanctuaries (NMS). Deep-sea coral habitat suitability was modelled at 500 m×500 m spatial resolution using a range of physical, chemical and environmental variables known or thought to influence the distribution of deep-sea corals. Using a spatial partitioning cross-validation approach, maximum entropy models identified slope, temperature, salinity and depth as important predictors for most deep-sea coral taxa. Large areas of highly suitable deep-sea coral habitat were predicted both within and outside of existing bottom trawl closures and NMS boundaries. Predicted habitat suitability at regional scales is not currently able to identify coral areas with pinpoint accuracy and probably overpredicts actual coral distribution due to model limitations and unincorporated variables (i.e. data on distribution of hard substrate) that are known to limit their distribution. Predicted habitat results should be used in conjunction with multibeam bathymetry, geological mapping and other tools to guide future research efforts to areas with the highest probability of harboring deep-sea corals. Field validation of predicted habitat is needed to quantify model accuracy, particularly in areas that have not been sampled. PMID:24759613

  8. The Impact of Attention on Judgments of Frequency and Duration

    PubMed Central

    Winkler, Isabell; Glauer, Madlen; Betsch, Tilmann; Sedlmeier, Peter

    2015-01-01

    Previous studies that examined human judgments of frequency and duration found an asymmetrical relationship: While frequency judgments were quite accurate and independent of stimulus duration, duration judgments were highly dependent upon stimulus frequency. A potential explanation for these findings is that the asymmetry is moderated by the amount of attention directed to the stimuli. In the current experiment, participants' attention was manipulated in two ways: (a) intrinsically, by varying the type and arousal potential of the stimuli (names, low-arousal and high-arousal pictures), and (b) extrinsically, by varying the physical effort participants expended during the stimulus presentation (by lifting a dumbbell vs. relaxing the arm). Participants processed stimuli with varying presentation frequencies and durations and were subsequently asked to estimate the frequency and duration of each stimulus. Sensitivity to duration increased for pictures in general, especially when processed under physical effort. A large effect of stimulus frequency on duration judgments was obtained for all experimental conditions, but a similar large effect of presentation duration on frequency judgments emerged only in the conditions that could be expected to draw high amounts of attention to the stimuli: when pictures were judged under high physical effort. Almost no difference in the mutual impact of frequency and duration was obtained for low-arousal or high-arousal pictures. The mechanisms underlying the simultaneous processing of frequency and duration are discussed with respect to existing models derived from animal research. Options for the extension of such models to human processing of frequency and duration are suggested. PMID:26000712

  9. The impact of attention on judgments of frequency and duration.

    PubMed

    Winkler, Isabell; Glauer, Madlen; Betsch, Tilmann; Sedlmeier, Peter

    2015-01-01

    Previous studies that examined human judgments of frequency and duration found an asymmetrical relationship: While frequency judgments were quite accurate and independent of stimulus duration, duration judgments were highly dependent upon stimulus frequency. A potential explanation for these findings is that the asymmetry is moderated by the amount of attention directed to the stimuli. In the current experiment, participants' attention was manipulated in two ways: (a) intrinsically, by varying the type and arousal potential of the stimuli (names, low-arousal and high-arousal pictures), and (b) extrinsically, by varying the physical effort participants expended during the stimulus presentation (by lifting a dumbbell vs. relaxing the arm). Participants processed stimuli with varying presentation frequencies and durations and were subsequently asked to estimate the frequency and duration of each stimulus. Sensitivity to duration increased for pictures in general, especially when processed under physical effort. A large effect of stimulus frequency on duration judgments was obtained for all experimental conditions, but a similar large effect of presentation duration on frequency judgments emerged only in the conditions that could be expected to draw high amounts of attention to the stimuli: when pictures were judged under high physical effort. Almost no difference in the mutual impact of frequency and duration was obtained for low-arousal or high-arousal pictures. The mechanisms underlying the simultaneous processing of frequency and duration are discussed with respect to existing models derived from animal research. Options for the extension of such models to human processing of frequency and duration are suggested.

  10. On Impact: Students with Head Injuries

    ERIC Educational Resources Information Center

    Canto, Angela I.; Chesire, David J.; Buckley, Valerie A.

    2011-01-01

    Students with head injuries may not be as "low incidence" as previously believed. Recent efforts from the American Academy of Pediatrics (2010), the National Football League, and other agencies are attempting to raise awareness of traumatic brain injury (TBI) among students. Along with awareness, there has been an increased publicity effort via…

  11. The development of a non-cryogenic nitrogen/oxygen supply system. [using hydrazine/water electrolysis

    NASA Technical Reports Server (NTRS)

    Greenough, B. M.; Mahan, R. E.

    1974-01-01

    A hydrazine/water electrolysis process system module was designed, fabricated, and tested to demonstrate component and module performance. This module is capable of providing both the metabolic oxygen for crew needs and the oxygen and nitrogen for spacecraft leak makeup. The component designs evolved through previous R&D efforts; the components were fabricated and tested individually and then assembled into a complete module, which was successfully tested for 1000 hours to demonstrate integration of the individual components. A survey was made of hydrazine sensor technology and a cell math model was derived.

  12. Further efforts in optimizing nonlinear optical molecules

    NASA Astrophysics Data System (ADS)

    Dirk, Carl W.; Caballero, Noel; Tan, Alarice; Kuzyk, Mark G.; Cheng, Lap-Tak A.; Katz, Howard E.; Shilling, Marcia; King, Lori A.

    1993-02-01

    We summarize some of our past work on optimizing molecules for second-order and third-order nonlinear optical applications. We also present some previously unpublished results suggesting a particular optimization of the popular cyano- and nitrovinyl acceptor groups. In addition, we provide some new quadratic electro-optic results which serve to further verify our choice of a restricted three-level model suitable for optimizing third-order nonlinearities in molecules. Finally, we present a new squarylium dye with a large third-order optical nonlinearity (γ = -9.5 × 10⁻³⁴ cm⁷/esu²; EFISH at 1906 nm).

  13. The initial impact of a workplace lead-poisoning prevention project.

    PubMed Central

    Bellows, J; Rudolph, L

    1993-01-01

    The California Department of Health Services began an occupational lead poisoning prevention project in cooperation with 275 radiator service companies. The agency developed and marketed resources to facilitate companies' own efforts, tracked the progress of each company, and urged the companies to conduct blood lead testing. Testing by participating employers increased from 9% to 95%, and 10 times as many companies with likely overexposures were identified as had been reported to the state's lead registry in the previous year. The success of this project indicates that the model should be applied more extensively. PMID:8438981

  14. Models of compacted fine-grained soils used as mineral liner for solid waste

    NASA Astrophysics Data System (ADS)

    Sivrikaya, Osman

    2008-02-01

    To prevent the leakage of pollutant liquids into groundwater and sublayers, compacted fine-grained soils are commonly utilized as mineral liners or sealing systems constructed under municipal solid waste and other contained hazardous materials. This study presents correlation equations for the compaction parameters required for construction of a mineral liner system. The determination of the characteristic compaction parameters, maximum dry unit weight (γdmax) and optimum water content (wopt), requires considerable time and great effort. In this study, empirical models are described and examined to find which of the index properties correlate well with the compaction characteristics for estimating γdmax and wopt of fine-grained soils at the standard compactive effort. The compaction data are correlated with different combinations of gravel content (G), sand content (S), fine-grained content (FC = clay + silt), plasticity index (Ip), liquid limit (wL) and plastic limit (wP) by performing multilinear regression (MLR) analyses. The obtained correlations with statistical parameters are presented and compared with the previous studies. It is found that the maximum dry unit weight and optimum water content correlate considerably better with plastic limit than with liquid limit and plasticity index.
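    The MLR step described above amounts to an ordinary least-squares fit of a compaction parameter on index properties. The sketch below generates synthetic index data with a known trend (the coefficients and value ranges are invented, not the study's) and recovers it with `numpy.linalg.lstsq`:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 60
    w_L = rng.uniform(25, 70, size=n)     # liquid limit (%), synthetic
    w_P = rng.uniform(12, 30, size=n)     # plastic limit (%), synthetic
    FC = rng.uniform(40, 95, size=n)      # fines content (%), synthetic

    # Synthetic "measured" optimum water content: a known linear trend plus noise
    w_opt = 2.0 + 0.65 * w_P + 0.05 * FC + rng.normal(scale=0.5, size=n)

    # Multilinear regression: solve the least-squares problem X @ beta = y
    X = np.column_stack([np.ones(n), w_L, w_P, FC])
    beta, *_ = np.linalg.lstsq(X, w_opt, rcond=None)

    pred = X @ beta
    r2 = 1 - np.sum((w_opt - pred) ** 2) / np.sum((w_opt - np.mean(w_opt)) ** 2)
    ```

    Comparing the fitted `beta` entries (and the R² of fits with different covariate subsets) is how one would judge, as the study does, which index property carries the most predictive weight.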

  15. Stability of the body-centred-cubic phase of iron in the Earth's inner core.

    PubMed

    Belonoshko, Anatoly B; Ahuja, Rajeev; Johansson, Börje

    2003-08-28

    Iron is thought to be the main constituent of the Earth's core, and considerable efforts have therefore been made to understand its properties at high pressure and temperature. While these efforts have expanded our knowledge of the iron phase diagram, there remain some significant inconsistencies, the most notable being the difference between the 'low' and 'high' melting curves. Here we report the results of molecular dynamics simulations of iron based on embedded atom models fitted to the results of two implementations of density functional theory. We tested two model approximations and found that both point to the stability of the body-centred-cubic (b.c.c.) iron phase at high temperature and pressure. Our calculated melting curve is in agreement with the 'high' melting curve, but our calculated phase boundary between the hexagonal close packed (h.c.p.) and b.c.c. iron phases is in good agreement with the 'low' melting curve. We suggest that the h.c.p.-b.c.c. transition was previously misinterpreted as a melting transition, similar to the case of xenon, and that the b.c.c. phase of iron is the stable phase in the Earth's inner core.

  16. Privatization and the allure of franchising: a Zambian feasibility study.

    PubMed

    Fiedler, John L; Wight, Jonathan B

    2003-01-01

    Efforts to privatize portions of the health sector have proven more difficult to implement than previously anticipated. One common bottleneck encountered has been the traditional organizational structure of the private sector, with its plethora of independent, single-physician practices. The atomistic nature of the sector has rendered many privatization efforts difficult, slow and costly, in terms of both organizational development and administration. In many parts of Africa, in particular, the shortages of human and social capital, and the fragile nature of legal institutions, undermine the appeal of privatization. The private sector is left with inefficiencies, high prices and costs, and a reduced effective demand. The result is the simultaneous existence of excess capacity and unmet need. One potential method to improve the efficiency of the private sector, and thereby enhance the likelihood of successful privatization, is to transfer managerial technology, via franchising, from models that have proven successful elsewhere. This paper presents a feasibility analysis of franchising the successful Bolivian PROSALUD system's management package to Zambia. The assessment, based on PROSALUD's financial model, demonstrates that technology transfer requires careful adaptation to local conditions and, in this instance, would still require significant external assistance.

  17. Molecular Imaging of Experimental Abdominal Aortic Aneurysms

    PubMed Central

    Ramaswamy, Aneesh K.; Hamilton, Mark; Joshi, Rucha V.; Kline, Benjamin P.; Li, Rui; Wang, Pu; Goergen, Craig J.

    2013-01-01

    Current laboratory research in the field of abdominal aortic aneurysm (AAA) disease often utilizes small animal experimental models induced by genetic manipulation or chemical application. This has led to the use and development of multiple high-resolution molecular imaging modalities capable of tracking disease progression, quantifying the role of inflammation, and evaluating the effects of potential therapeutics. In vivo imaging reduces the number of research animals used, provides molecular and cellular information, and allows for longitudinal studies, a necessity when tracking vessel expansion in a single animal. This review outlines developments of both established and emerging molecular imaging techniques used to study AAA disease. Beyond the typical modalities used for anatomical imaging, which include ultrasound (US) and computed tomography (CT), previous molecular imaging efforts have used magnetic resonance (MR), near-infrared fluorescence (NIRF), bioluminescence, single-photon emission computed tomography (SPECT), and positron emission tomography (PET). Mouse and rat AAA models will hopefully provide insight into potential disease mechanisms, and the development of advanced molecular imaging techniques, if clinically useful, may have translational potential. These efforts could help improve the management of aneurysms and better evaluate the therapeutic potential of new treatments for human AAA disease. PMID:23737735

  18. The Effect of the Demand Control and Effort Reward Imbalance Models on the Academic Burnout of Korean Adolescents

    ERIC Educational Resources Information Center

    Lee, Jayoung; Puig, Ana; Lee, Sang Min

    2012-01-01

    The purpose of this study was to examine the effects of the Demand Control Model (DCM) and the Effort Reward Imbalance Model (ERIM) on academic burnout for Korean students. Specifically, this study identified the effects of the predictor variables based on DCM and ERIM (i.e., demand, control, effort, reward, Demand Control Ratio, Effort Reward…

  19. Assessing patterns of human-wildlife conflicts and compensation around a Central Indian protected area.

    PubMed

    Karanth, Krithi K; Gopalaswamy, Arjun M; DeFries, Ruth; Ballal, Natasha

    2012-01-01

    Mitigating crop and livestock loss to wildlife and improving compensation distribution are important for conservation efforts in landscapes where people and wildlife co-occur outside protected areas. The lack of rigorously collected spatial data poses a challenge to management efforts to minimize loss and mitigate conflicts. We surveyed 735 households from 347 villages in a 5154 km(2) area surrounding Kanha Tiger Reserve in India. We modeled self-reported household crop and livestock loss as a function of agricultural, demographic and environmental factors, and mitigation measures. We also modeled self-reported compensation received by households as a function of demographic factors, conflict type, reporting to authorities, and wildlife species involved. Seventy-three percent of households reported crop loss and 33% livestock loss in the previous year, but less than 8% reported human injury or death. Crop loss was associated with greater number of cropping months per year and proximity to the park. Livestock loss was associated with grazing animals inside the park and proximity to the park. Among mitigation measures, only the use of protective physical structures was associated with reduced livestock loss. Compensation distribution was more likely for tiger-related incidents and for households reporting loss and located in the buffer. Average estimated probability of crop loss was 0.93 and livestock loss was 0.60 for surveyed households. Estimated crop and livestock loss and compensation distribution were higher for households located inside the buffer. Our approach models conflict data to aid managers in identifying potential conflict hotspots and influential factors, and spatially maps the risk probability of crop and livestock loss. This approach could help focus the allocation of conservation efforts and funds directed at conflict prevention and mitigation where high densities of people and wildlife co-occur.
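    A model of this kind reduces, in its simplest form, to a logistic link between household covariates and loss probability. The coefficients and covariates below are purely illustrative, not the study's estimates:

    ```python
    import math

    def crop_loss_probability(months_cropped, km_to_park,
                              b0=-1.0, b_months=0.35, b_dist=-0.25):
        """Hypothetical logistic model: loss risk rises with cropping months
        per year and falls with distance from the park boundary. All
        coefficients are invented for illustration."""
        logit = b0 + b_months * months_cropped + b_dist * km_to_park
        return 1.0 / (1.0 + math.exp(-logit))

    near = crop_loss_probability(months_cropped=10, km_to_park=1)
    far = crop_loss_probability(months_cropped=10, km_to_park=15)
    ```

    Evaluating such a fitted model over a grid of village locations is what produces the spatial risk maps the abstract describes.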

  20. Providing competency-based family medicine residency training in substance abuse in the new millennium: a model curriculum.

    PubMed

    Seale, J Paul; Shellenberger, Sylvia; Clark, Denice Crowe

    2010-05-11

    This article, developed for the Betty Ford Institute Consensus Conference on Graduate Medical Education (December, 2008), presents a model curriculum for Family Medicine residency training in substance abuse. The authors reviewed reports of past Family Medicine curriculum development efforts, previously-identified barriers to education in high risk substance use, approaches to overcoming these barriers, and current training guidelines of the Accreditation Council for Graduate Medical Education (ACGME) and their Family Medicine Residency Review Committee. A proposed eight-module curriculum was developed, based on substance abuse competencies defined by Project MAINSTREAM and linked to core competencies defined by the ACGME. The curriculum provides basic training in high risk substance use to all residents, while also addressing current training challenges presented by U.S. work hour regulations, increasing international diversity of Family Medicine resident trainees, and emerging new primary care practice models. This paper offers a core curriculum, focused on screening, brief intervention and referral to treatment, which can be adapted by residency programs to meet their individual needs. The curriculum encourages direct observation of residents to ensure that core skills are learned and trains residents with several "new skills" that will expand the basket of substance abuse services they will be equipped to provide as they enter practice. Broad-based implementation of a comprehensive Family Medicine residency curriculum should increase the ability of family physicians to provide basic substance abuse services in a primary care context. Such efforts should be coupled with faculty development initiatives which ensure that sufficient trained faculty are available to teach these concepts and with efforts by major Family Medicine organizations to implement and enforce residency requirements for substance abuse training.

  1. Assessing Patterns of Human-Wildlife Conflicts and Compensation around a Central Indian Protected Area

    PubMed Central

    Karanth, Krithi K.; Gopalaswamy, Arjun M.; DeFries, Ruth; Ballal, Natasha

    2012-01-01

    Mitigating crop and livestock loss to wildlife and improving compensation distribution are important for conservation efforts in landscapes where people and wildlife co-occur outside protected areas. The lack of rigorously collected spatial data poses a challenge to management efforts to minimize loss and mitigate conflicts. We surveyed 735 households from 347 villages in a 5154 km2 area surrounding Kanha Tiger Reserve in India. We modeled self-reported household crop and livestock loss as a function of agricultural, demographic and environmental factors, and mitigation measures. We also modeled self-reported compensation received by households as a function of demographic factors, conflict type, reporting to authorities, and wildlife species involved. Seventy-three percent of households reported crop loss and 33% livestock loss in the previous year, but less than 8% reported human injury or death. Crop loss was associated with greater number of cropping months per year and proximity to the park. Livestock loss was associated with grazing animals inside the park and proximity to the park. Among mitigation measures, only the use of protective physical structures was associated with reduced livestock loss. Compensation distribution was more likely for tiger-related incidents and for households reporting loss and located in the buffer. Average estimated probability of crop loss was 0.93 and livestock loss was 0.60 for surveyed households. Estimated crop and livestock loss and compensation distribution were higher for households located inside the buffer. Our approach models conflict data to aid managers in identifying potential conflict hotspots and influential factors, and spatially maps the risk probability of crop and livestock loss. This approach could help focus the allocation of conservation efforts and funds directed at conflict prevention and mitigation where high densities of people and wildlife co-occur. PMID:23227173

  2. The Neural Correlates of Emotion Regulation by Implementation Intentions

    PubMed Central

    Hallam, Glyn P.; Webb, Thomas L.; Sheeran, Paschal; Miles, Eleanor; Wilkinson, Iain D.; Hunter, Michael D.; Barker, Anthony T.; Woodruff, Peter W. R.; Totterdell, Peter; Lindquist, Kristen A.; Farrow, Tom F. D.

    2015-01-01

    Several studies have investigated the neural basis of effortful emotion regulation (ER) but the neural basis of automatic ER has been less comprehensively explored. The present study investigated the neural basis of automatic ER supported by ‘implementation intentions’. 40 healthy participants underwent fMRI while viewing emotion-eliciting images and used either a previously-taught effortful ER strategy, in the form of a goal intention (e.g., try to take a detached perspective), or a more automatic ER strategy, in the form of an implementation intention (e.g., “If I see something disgusting, then I will think these are just pixels on the screen!”), to regulate their emotional response. Whereas goal intention ER strategies were associated with activation of brain areas previously reported to be involved in effortful ER (including dorsolateral prefrontal cortex), ER strategies based on an implementation intention strategy were associated with activation of right inferior frontal gyrus and ventro-parietal cortex, which may reflect the attentional control processes automatically captured by the cue for action contained within the implementation intention. Goal intentions were also associated with less effective modulation of left amygdala, supporting the increased efficacy of ER under implementation intention instructions, which showed coupling of orbitofrontal cortex and amygdala. The findings support previous behavioural studies in suggesting that forming an implementation intention enables people to enact goal-directed responses with less effort and more efficiency. PMID:25798822

  3. Fusion Simulation Project Workshop Report

    NASA Astrophysics Data System (ADS)

    Kritz, Arnold; Keyes, David

    2009-03-01

    The mission of the Fusion Simulation Project is to develop a predictive capability for the integrated modeling of magnetically confined plasmas. This FSP report adds to the previous activities that defined an approach to integrated modeling in magnetic fusion. These previous activities included a Fusion Energy Sciences Advisory Committee panel that was charged to study integrated simulation in 2002. The report of that panel [Journal of Fusion Energy 20, 135 (2001)] recommended the prompt initiation of a Fusion Simulation Project. In 2003, the Office of Fusion Energy Sciences formed a steering committee that developed a project vision, roadmap, and governance concepts [Journal of Fusion Energy 23, 1 (2004)]. The current FSP planning effort involved 46 physicists, applied mathematicians and computer scientists, from 21 institutions, formed into four panels and a coordinating committee. These panels were constituted to consider: Status of Physics Components, Required Computational and Applied Mathematics Tools, Integration and Management of Code Components, and Project Structure and Management. The ideas, reported here, are the products of these panels, working together over several months and culminating in a 3-day workshop in May 2007.

  4. A Model-Based Approach for Identifying Signatures of Ancient Balancing Selection in Genetic Data

    PubMed Central

    DeGiorgio, Michael; Lohmueller, Kirk E.; Nielsen, Rasmus

    2014-01-01

    While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates. PMID:25144706

  5. A model-based approach for identifying signatures of ancient balancing selection in genetic data.

    PubMed

    DeGiorgio, Michael; Lohmueller, Kirk E; Nielsen, Rasmus

    2014-08-01

    While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates.
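The composite likelihood ratio tests described above compare, at each candidate site, a model of balancing selection against a neutral model by multiplying per-site likelihoods across a window of nearby sites. The sketch below shows only the generic shape of such a statistic; the per-site likelihoods are invented numbers for illustration, not the authors' actual population-genetic model:

```python
import math

# Generic composite likelihood ratio (CLR) sketch. Per-site likelihoods
# are multiplied (i.e., their logs summed), ignoring linkage between
# sites -- that is what makes the likelihood "composite". All numbers
# below are hypothetical.

def composite_log_likelihood(site_likelihoods):
    """Sum of per-site log-likelihoods over a window."""
    return sum(math.log(p) for p in site_likelihoods)

# Hypothetical per-site likelihoods in a window around a candidate site.
neutral   = [0.10, 0.08, 0.12, 0.09]   # under the neutral background model
balancing = [0.15, 0.14, 0.16, 0.13]   # under the balancing-selection model

clr = 2.0 * (composite_log_likelihood(balancing)
             - composite_log_likelihood(neutral))
print(clr > 0)  # a larger CLR means stronger evidence for balancing selection
```

In practice the statistic is scanned across the genome and the largest values flag candidate regions, as with the HLA and FANK1 signals reported above.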

  6. Theoretical Understanding the Relations of Melting-point Determination Methods from Gibbs Thermodynamic Surface and Applications on Melting Curves of Lower Mantle Minerals

    NASA Astrophysics Data System (ADS)

    Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.

    2016-12-01

    The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculation of the melting point by atomic simulation faces a substantial hysteresis problem. To overcome the hysteresis encountered in atomic simulations, several independently founded melting-point determination methods are available, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations among these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining with experimental data and/or a previous melting-point determination method, we apply this model to derive the high-pressure melting curves for several lower mantle minerals with less computational effort than previous methods alone. In this way, some polyatomic minerals at extreme pressures that were previously almost intractable can now be calculated fully from first principles.

  7. Emissions from ships in the northwestern United States.

    PubMed

    Corbett, James J

    2002-03-15

    Recent inventory efforts have focused on developing nonroad inventories for emissions modeling and policy insights. Characterizing these inventories geographically and explicitly treating the uncertainties that result from limited emissions testing, incomplete activity and usage data, and other important input parameters currently pose the largest methodological challenges. This paper presents a commercial marine vessel (CMV) emissions inventory for Washington and Oregon using detailed statistics regarding fuel consumption, vessel movements, and cargo volumes for the Columbia and Snake River systems. The inventory estimates emissions for oxides of nitrogen (NOx), particulate matter (PM), and oxides of sulfur (SOx). This analysis estimates that annual NOx emissions from marine transportation in the Columbia and Snake River systems in Washington and Oregon equal 6900 t of NOx (as NO2) per year, 2.6 times greater than previous NOx inventories for this region. Statewide CMV NOx emissions are estimated to be 9,800 t of NOx per year. By relying on a "bottom-up" fuel consumption model that includes vessel characteristics and transit information, the river system inventory may be more accurate than previous estimates. This inventory provides modelers with bounded parametric inputs for sensitivity analysis in pollution modeling. The ability to parametrically model the uncertainty in commercial marine vessel inventories also will help policy-makers determine whether better policy decisions can be enabled through further vessel testing and improved inventory resolution.
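A "bottom-up" inventory of this kind builds emissions up from vessel activity: installed power times load factor times operating hours times an emission factor, summed over vessel classes. The sketch below illustrates the arithmetic only; every vessel parameter and emission factor in it is hypothetical, not taken from the study:

```python
# Bottom-up marine emissions sketch: E = power * load * hours * EF,
# summed over vessel classes. All fleet data and emission factors below
# are hypothetical illustrations, not values from the inventory.

def vessel_emissions_tonnes(power_kw, load_factor, hours, ef_g_per_kwh):
    """Annual emissions for one vessel class, in metric tonnes."""
    grams = power_kw * load_factor * hours * ef_g_per_kwh
    return grams / 1e6  # grams -> tonnes

fleet = [
    # (installed power kW, load factor, annual hours, NOx EF g/kWh)
    (3000, 0.70, 4000, 13.0),   # e.g., a line-haul towboat (hypothetical)
    (1500, 0.40, 2500, 12.0),   # e.g., a harbor tug (hypothetical)
]

total_nox_t = sum(vessel_emissions_tonnes(*v) for v in fleet)
print(round(total_nox_t, 1))  # 127.2
```

Scaling such per-class estimates by documented vessel movements and fuel consumption is what distinguishes a bottom-up inventory from top-down fuel-sales allocation.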

  8. TRANSPORT BY MERIDIONAL CIRCULATIONS IN SOLAR-TYPE STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, T. S.; Brummell, N. H., E-mail: tsw25@soe.ucsc.edu

    2012-08-20

    Transport by meridional flows has significant consequences for stellar evolution, but is difficult to capture in global-scale numerical simulations because of the wide range of timescales involved. Stellar evolution models therefore usually adopt parameterizations for such transport based on idealized laminar or mean-field models. Unfortunately, recent attempts to model this transport in global simulations have produced results that are not consistent with any of these idealized models. In an effort to explain the discrepancies between global simulations and idealized models, here we use three-dimensional local Cartesian simulations of compressible convection to study the efficiency of transport by meridional flows below a convection zone in several parameter regimes of relevance to the Sun and solar-type stars. In these local simulations we are able to establish the correct ordering of dynamical timescales, although the separation of the timescales remains unrealistic. We find that, even though the generation of internal waves by convective overshoot produces a high degree of time dependence in the meridional flow field, the mean flow has the qualitative behavior predicted by laminar, 'balanced' models. In particular, we observe a progressive deepening, or 'burrowing', of the mean circulation if the local Eddington-Sweet timescale is shorter than the viscous diffusion timescale. Such burrowing is a robust prediction of laminar models in this parameter regime, but has never been observed in any previous numerical simulation. We argue that previous simulations therefore underestimate the transport by meridional flows.

  9. A Parameterized Model of Amylopectin Synthesis Provides Key Insights into the Synthesis of Granular Starch

    PubMed Central

    Wu, Alex Chi; Morell, Matthew K.; Gilbert, Robert G.

    2013-01-01

    A core set of genes involved in starch synthesis has been defined by genetic studies, but the complexity of starch biosynthesis has frustrated attempts to elucidate the precise functional roles of the enzymes encoded. The chain-length distribution (CLD) of amylopectin in cereal endosperm is modeled here on the basis that the CLD is produced by concerted actions of three enzyme types: starch synthases, branching and debranching enzymes, including their respective isoforms. The model, together with fitting to experiment, provides four key insights. (1) To generate crystalline starch, defined restrictions on particular ratios of enzymatic activities apply. (2) An independent confirmation of the conclusion, previously reached solely from genetic studies, of the absolute requirement for debranching enzyme in crystalline amylopectin synthesis. (3) The model provides a mechanistic basis for understanding how successive arrays of crystalline lamellae are formed, based on the identification of two independent types of long amylopectin chains, one type remaining in the amorphous lamella, while the other propagates into, and is integral to the formation of, an adjacent crystalline lamella. (4) The model provides a means by which a small number of key parameters defining the core enzymatic activities can be derived from the amylopectin CLD, providing the basis for focusing studies on the enzymatic requirements for generating starches of a particular structure. The modeling approach provides both a new tool to accelerate efforts to understand granular starch biosynthesis and a basis for focusing efforts to manipulate starch structure and functionality using a series of testable predictions based on a robust mechanistic framework. PMID:23762422

  10. Slab1.0: A three-dimensional model of global subduction zone geometries

    NASA Astrophysics Data System (ADS)

    Hayes, Gavin P.; Wald, David J.; Johnson, Rebecca L.

    2012-01-01

    We describe and present a new model of global subduction zone geometries, called Slab1.0. An extension of previous efforts to constrain the two-dimensional non-planar geometry of subduction zones around the focus of large earthquakes, Slab1.0 describes the detailed, non-planar, three-dimensional geometry of approximately 85% of subduction zones worldwide. While the model focuses on the detailed form of each slab from their trenches through the seismogenic zone, where it combines data sets from active source and passive seismology, it also continues to the limits of their seismic extent in the upper-mid mantle, providing a uniform approach to the definition of the entire seismically active slab geometry. Examples are shown for two well-constrained global locations; models for many other regions are available and can be freely downloaded in several formats from our new Slab1.0 website, http://on.doi.gov/d9ARbS. We describe improvements in our two-dimensional geometry constraint inversion, including the use of 'average' active source seismic data profiles in the shallow trench regions where data are otherwise lacking, derived from the interpolation between other active source seismic data along-strike in the same subduction zone. We include several analyses of the uncertainty and robustness of our three-dimensional interpolation methods. In addition, we use the filtered, subduction-related earthquake data sets compiled to build Slab1.0 in a reassessment of previous analyses of the deep limit of the thrust interface seismogenic zone for all subduction zones included in our global model thus far, concluding that the width of these seismogenic zones is on average 30% larger than previous studies have suggested.

  11. Modeling Change in Effort across a Low-Stakes Testing Session: A Latent Growth Curve Modeling Approach

    ERIC Educational Resources Information Center

    Barry, Carol L.; Finney, Sara J.

    2016-01-01

    We examined change in test-taking effort over the course of a three-hour, five-test, low-stakes testing session. Latent growth modeling results indicated that change in test-taking effort was well-represented by a piecewise growth form, wherein effort increased from test 1 to test 4 and then decreased from test 4 to test 5. There was significant…
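A piecewise latent growth curve with a knot at test 4 is typically specified through two sets of slope loadings, one active before the knot and one after. The coding below is one common scheme, shown only as an illustration; the study's exact loadings and parameter values are not reproduced here:

```python
# Piecewise growth time codes for five tests with a knot at test 4:
# slope 1 accumulates from test 1 to test 4, slope 2 only from test 4
# to test 5. This coding and all parameter values are illustrative.

tests = [1, 2, 3, 4, 5]
slope1 = [min(t, 4) - 1 for t in tests]   # 0, 1, 2, 3, 3
slope2 = [max(t - 4, 0) for t in tests]   # 0, 0, 0, 0, 1

def predicted_effort(intercept, b1, b2):
    """Model-implied effort at each test for given growth parameters."""
    return [round(intercept + b1 * s1 + b2 * s2, 2)
            for s1, s2 in zip(slope1, slope2)]

# Hypothetical parameters: effort rises before the knot, drops after it.
traj = predicted_effort(3.0, 0.2, -0.5)
print(traj)  # [3.0, 3.2, 3.4, 3.6, 3.1]
```

A positive first slope and negative second slope reproduce the rise-then-fall pattern the abstract describes.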

  12. Implications of different digital elevation models and preprocessing techniques to delineate debris flow inundation hazard zones in El Salvador

    NASA Astrophysics Data System (ADS)

    Anderson, E. R.; Griffin, R.; Irwin, D.

    2013-12-01

    Heavy rains and steep, volcanic slopes in El Salvador cause numerous landslides every year, posing a persistent threat to the population, economy and environment. Although potential debris inundation hazard zones have been delineated using digital elevation models (DEMs), some disparities exist between the simulated zones and actual affected areas. Moreover, these hazard zones have only been identified for volcanic lahars and not the shallow landslides that occur nearly every year. This is despite the availability of tools to delineate a variety of landslide types (e.g., the USGS-developed LAHARZ software). Limitations in DEM spatial resolution, age of the data, and hydrological preprocessing techniques can contribute to inaccurate hazard zone definitions. This study investigates the impacts of using different elevation models and pit filling techniques in the final debris hazard zone delineations, in an effort to determine which combination of methods most closely agrees with observed landslide events. In particular, a national DEM digitized from topographic sheets from the 1970s and 1980s provides an elevation product at a 10 meter resolution. Both natural and anthropogenic modifications of the terrain limit the accuracy of current landslide hazard assessments derived from this source. Global products from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global DEM (ASTER GDEM) offer more recent data but at the cost of spatial resolution. New data derived from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) in 2013 provides the opportunity to update hazard zones at a higher spatial resolution (approximately 6 meters). Hydrological filling of sinks or pits for current hazard zone simulation has previously been achieved through ArcInfo Spatial Analyst. Such hydrological processing typically only fills pits and can lead to drastic modifications of original elevation values. 
Optimized pit filling techniques use both cut and fill operations to minimize modifications of the original DEM. Satellite image interpretation and field surveying provide the baseline upon which to test the accuracy of each model simulation. By outlining areas that could potentially be inundated by debris flows, these efforts can be used to more accurately identify the places and assets immediately exposed to landslide hazards. We contextualize the results of the previous and ongoing efforts into how they may be incorporated into decision support systems. We also discuss if and how these analyses would have provided additional knowledge in the past, and identify specific recommendations as to how they could contribute to a more robust decision support system in the future.
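Fill-only sink removal of the kind contrasted above is commonly implemented as a priority-flood sweep inward from the grid boundary, raising each interior cell to its lowest spill elevation. The minimal sketch below illustrates that idea on a toy grid; it is an assumption-laden illustration, not the ArcInfo implementation or the optimized cut-and-fill method:

```python
import heapq

def fill_pits(dem):
    """Priority-flood pit filling: raise each interior cell to the lowest
    spill elevation reachable from the grid edge. Fill-only, no cutting."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    # Seed the priority queue with all boundary cells.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                seen[r][c] = True
                heapq.heappush(heap, (filled[r][c], r, c))
    # Flood inward, always expanding from the lowest frontier cell.
    while heap:
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                filled[nr][nc] = max(filled[nr][nc], z)  # raise pit cells
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

# Toy DEM: the center cell is a pit; its lowest neighbor (4) is its spill.
dem = [[5, 5, 5],
       [5, 1, 5],
       [5, 4, 5]]
print(fill_pits(dem)[1][1])  # 4
```

Note how the pit is raised exactly to its spill level while boundary cells are untouched; optimized cut-and-fill methods instead balance raising pits against lowering barriers to stay closer to the original surface.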

  13. San Joaquin River Up-Stream DO TMDL Project Task 4: MonitoringStudy Interim Task Report #3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stringfellow, William; Borglin, Sharon; Dahlgren, Randy

    2007-03-30

    The purpose of the Dissolved Oxygen Total Maximum Daily Load Project (DO TMDL Project) is to provide a comprehensive understanding of the sources and fate of oxygen-consuming materials in the San Joaquin River (SJR) watershed between Channel Point and Lander Avenue (upstream SJR). When completed, this study will provide the stakeholders an understanding of the baseline conditions of the basin, provide input for an allocation decision, and provide the stakeholders with a tool for measuring the impact of any water quality management program that may be implemented as part of the DO TMDL process. Previous studies have identified algal biomass as the most significant oxygen-demanding substance in the DO TMDL Project study-area between Channel Point and Lander Ave on the SJR. Other oxygen-demanding substances found in the upstream SJR include ammonia and organic carbon from sources other than algae. The DO TMDL Project study-area contains municipalities, dairies, wetlands, cattle ranching, irrigated agriculture, and industries that could potentially contribute biochemical oxygen demand (BOD) to the SJR. This study is designed to discriminate between algal BOD and other sources of BOD throughout the entire upstream SJR watershed. Algal biomass is not a conserved substance, but grows and decays in the SJR; hence, characterization of oxygen-demanding substances in the SJR is inherently complicated and requires an integrated effort of extensive monitoring, scientific study, and modeling. In order to achieve project objectives, project activities were divided into a number of Tasks with specific goals and objectives. 
    In this report, we present the results of monitoring and research conducted under Task 4 of the DO TMDL Project. The major objective of Task 4 is to collect sufficient hydrologic (flow) and water quality (WQ) data to characterize the loading of algae, other oxygen-demanding materials, and nutrients from individual tributaries and sub-watersheds of the upstream SJR between Mossdale and Lander Avenue. This data is specifically being collected to provide data for the Task 6 Modeling effort. Task 4 provides input and calibration data for flow and WQ modeling associated with the low DO problems in the SJR watershed, including modeling on the linkage among nutrients, algae, and low DO. Task 4 is providing a higher volume of high-quality and coherent data to the modeling team than was available in the past for the upstream SJR. The monitoring and research activities under Task 4 are integrated with the Modeling effort (Task 6) and are not designed to be a stand-alone program. Although the majority of analysis of the Task 4 data is occurring as part of the Task 6 Modeling program, analysis of Task 4 data independently of the modeling effort is also an important component of the DO TMDL Project effort. The major purposes of this report are to 1) document activities undertaken as part of the DO TMDL Project; 2) organize electronic data for delivery to State agencies, stakeholders and principal investigators (cooperators) on the DO TMDL Project; 3) provide a summary analysis of the data for reference and to assist stakeholders in planning watershed activities in response to the DO TMDL requirements; and 4) provide a preliminary scientific interpretation independently of the Task 6 Modeling effort. 
    Due to the extensive scope of the Task 4 portion of the DO TMDL Project, the Task 4 March 2007 Interim Report is divided into a number of chapters and associated appendixes designed to be able to stand independently of each other. The purpose of this chapter is to provide an overview of Task 4 data collection and to explain the structure of the overall report.

  14. 77 FR 67657 - Request for Public Comment: 30-Day Proposed Information Collection: Indian Health Service (IHS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-13

    ... Practice, and Local Effort (BPPPLE) Form.'' Need and Use of Information Collection: The IHS goal is to.../Disease Prevention, Nursing, and Dental) have developed a centralized program database of best practices, promising Practices and local efforts and resources. This database was previously referred as OSCAR, but the...

  15. Unconscious Reward Cues Increase Invested Effort, but Do Not Change Speed-Accuracy Tradeoffs

    ERIC Educational Resources Information Center

    Bijleveld, Erik; Custers, Ruud; Aarts, Henk

    2010-01-01

    While both conscious and unconscious reward cues enhance effort to work on a task, previous research also suggests that conscious rewards may additionally affect speed-accuracy tradeoffs. Based on this idea, two experiments explored whether reward cues that are presented above (supraliminal) or below (subliminal) the threshold of conscious…

  16. Identifying Predictors of Achievement in the Newly Defined Information Literacy: A Neural Network Analysis

    ERIC Educational Resources Information Center

    Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.

    2009-01-01

    Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…

  17. THE McGill PLANAR HYDROGEN ATMOSPHERE CODE (McPHAC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haakonsen, Christian Bernt; Turner, Monica L.; Tacik, Nick A.

    2012-04-10

    The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation, at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.

  18. The McGill Planar Hydrogen Atmosphere Code (McPHAC)

    NASA Astrophysics Data System (ADS)

    Haakonsen, Christian Bernt; Turner, Monica L.; Tacik, Nick A.; Rutledge, Robert E.

    2012-04-01

    The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation, at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.

  19. McPHAC: McGill Planar Hydrogen Atmosphere Code

    NASA Astrophysics Data System (ADS)

    Haakonsen, Christian Bernt; Turner, Monica L.; Tacik, Nick A.; Rutledge, Robert E.

    2012-10-01

    The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.

  20. Measurement of the bystander intervention model for bullying and sexual harassment.

    PubMed

    Nickerson, Amanda B; Aloe, Ariel M; Livingston, Jennifer A; Feeley, Thomas Hugh

    2014-06-01

    Although peer bystanders can exacerbate or prevent bullying and sexual harassment, research has been hindered by the absence of a validated assessment tool to measure the process and sequential steps of the bystander intervention model. A measure was developed based on the five steps of Latané and Darley's (1970) bystander intervention model applied to bullying and sexual harassment. Confirmatory factor analysis with a sample of 562 secondary school students confirmed the five-factor structure of the measure. Structural equation modeling revealed that all the steps were influenced by the previous step in the model, as the theory proposed. In addition, the bystander intervention measure was positively correlated with empathy, attitudes toward bullying and sexual harassment, and awareness of bullying and sexual harassment facts. This measure can be used for future research and to inform intervention efforts related to the process of bystander intervention for bullying and sexual harassment. Copyright © 2014 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  1. Effects of model structural uncertainty on carbon cycle projections: biological nitrogen fixation as a case study

    NASA Astrophysics Data System (ADS)

    Wieder, William R.; Cleveland, Cory C.; Lawrence, David M.; Bonan, Gordon B.

    2015-04-01

    Uncertainties in terrestrial carbon (C) cycle projections increase uncertainty of potential climate feedbacks. Efforts to improve model performance often include increased representation of biogeochemical processes, such as coupled carbon-nitrogen (N) cycles. In doing so, models are becoming more complex, generating structural uncertainties in model form that reflect incomplete knowledge of how to represent underlying processes. Here, we explore structural uncertainties associated with biological nitrogen fixation (BNF) and quantify their effects on C cycle projections. We find that alternative plausible structures to represent BNF result in nearly equivalent terrestrial C fluxes and pools through the twentieth century, but the strength of the terrestrial C sink varies by nearly a third (50 Pg C) by the end of the twenty-first century under a business-as-usual climate change scenario (Representative Concentration Pathway 8.5). These results indicate that actual uncertainty in future C cycle projections may be larger than previously estimated, and this uncertainty will limit C cycle projections until model structures can be evaluated and refined.

  2. An improved model for the Earth's gravity field

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Shum, C. K.; Yuan, D. N.; Ries, J. C.; Schutz, B. E.

    1989-01-01

    An improved model for the Earth's gravity field, TEG-1, was determined using data sets from fourteen satellites, spanning the inclination ranges from 15 to 115 deg, and global surface gravity anomaly data. The satellite measurements include laser ranging data, Doppler range-rate data, and satellite-to-ocean radar altimeter measurements, which include the direct height measurement and the differenced measurements at ground track crossings (crossover measurements). Also determined was another gravity field model, TEG-1S, which included all the data sets in TEG-1 with the exception of direct altimeter data. The effort has included an intense scrutiny of the gravity field solution methodology. The estimated parameters included geopotential coefficients complete to degree and order 50 with selected higher order coefficients, ocean and solid Earth tide parameters, Doppler tracking station coordinates and the quasi-stationary sea surface topography. Extensive error analysis and calibration of the formal covariance matrix indicate that the gravity field model is a significant improvement over previous models and can be used for general applications in geodesy.

  3. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    NASA Astrophysics Data System (ADS)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.

  4. Safety behavior: Job demands, job resources, and perceived management commitment to safety.

    PubMed

    Hansez, Isabelle; Chmiel, Nik

    2010-07-01

    The job demands-resources model posits that job demands and resources influence outcomes through job strain and work engagement processes. We test whether the model can be extended to effort-related "routine" safety violations and "situational" safety violations provoked by the organization. In addition, we test the involvement of job strain more directly than previous studies, which have used burnout measures. Structural equation modeling provided, for the first time, evidence of predicted relationships between job strain and "routine" violations and work engagement with "routine" and "situational" violations, thereby supporting the extension of the job demands-resources model to safety behaviors. In addition, our results showed that a key safety-specific construct, "perceived management commitment to safety," added to the explanatory power of the job demands-resources model. A predicted path from job resources to perceived management commitment to safety was highly significant, supporting the view that job resources can influence safety behavior through both general motivational involvement in work (work engagement) and through safety-specific processes.

  5. Predicting Readmission at Early Hospitalization Using Electronic Clinical Data: An Early Readmission Risk Score.

    PubMed

    Tabak, Ying P; Sun, Xiaowu; Nunez, Carlos M; Gupta, Vikas; Johannes, Richard S

    2017-03-01

    Identifying patients at high risk for readmission early during hospitalization may aid efforts in reducing readmissions. We sought to develop an early readmission risk predictive model using automated clinical data available at hospital admission. We developed an early readmission risk model using a derivation cohort and validated the model with a validation cohort. We used a published Acute Laboratory Risk of Mortality Score as an aggregated measure of clinical severity at admission and the number of hospital discharges in the previous 90 days as a measure of disease progression. We then evaluated the administrative data-enhanced model by adding principal and secondary diagnoses and other variables. We examined the c-statistic change when additional variables were added to the model. There were 1,195,640 adult discharges from 70 hospitals, with 39.8% male and a median age of 63 years (first and third quartiles: 43 and 78). The 30-day readmission rate was 11.9% (n=142,211). The early readmission model yielded a graded relationship of readmission and the Acute Laboratory Risk of Mortality Score and the number of previous discharges within 90 days. The model c-statistic was 0.697 with good calibration. When administrative variables were added to the model, the c-statistic increased to 0.722. Automated clinical data can generate a readmission risk score early at hospitalization with fair discrimination. It may have practical value in aiding early care transitions. Adding administrative data increases predictive accuracy. The administrative data-enhanced model may be used for hospital comparison and outcome research.
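    The c-statistic reported in this record is the concordance probability: the chance that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted patient. A minimal sketch of that computation follows; the scores below are illustrative values, not data from the study.

```python
def c_statistic(scores_pos, scores_neg):
    """Concordance (c-statistic): probability that a randomly chosen
    positive-outcome patient scores higher than a randomly chosen
    negative-outcome patient. Ties count as 0.5."""
    concordant = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores (not study data): readmitted vs. not readmitted
readmitted = [0.8, 0.6, 0.55]
not_readmitted = [0.3, 0.5, 0.2, 0.6]
print(round(c_statistic(readmitted, not_readmitted), 3))  # prints 0.875
```

A c-statistic of 0.5 indicates no discrimination and 1.0 indicates perfect discrimination, so the study's 0.697 and 0.722 correspond to fair discrimination.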

  6. Subunit architecture and functional modular rearrangements of the transcriptional Mediator complex

    PubMed Central

    Tsai, Kuang-Lei; Tomomori-Sato, Chieri; Sato, Shigeo; Conaway, Ronald C.; Conaway, Joan W.; Asturias, Francisco J.

    2014-01-01

    The multisubunit Mediator, comprising ~30 distinct proteins, plays an essential role in gene expression regulation by acting as a bridge between DNA-binding transcription factors and the RNA polymerase II (RNAPII) transcription machinery. Efforts to uncover the Mediator mechanism have been hindered by a poor understanding of its structure, subunit organization, and conformational rearrangements. By overcoming biochemical and image analysis hurdles, we obtained accurate EM structures of yeast and human Mediators. Subunit localization experiments, docking of partial X-ray structures, and biochemical analyses resulted in comprehensive mapping of yeast Mediator subunits and a complete reinterpretation of our previous Mediator organization model. Large-scale Mediator rearrangements depend on changes at the interfaces between previously described Mediator modules, which appear to be facilitated by factors conducive to transcription initiation. Conservation across eukaryotes of Mediator structure, subunit organization, and RNA polymerase II interaction suggests conservation of fundamental aspects of the Mediator mechanism. PMID:24882805

  7. LANDSAT data for state planning. [of transportation for Georgia

    NASA Technical Reports Server (NTRS)

    Faust, N. L.; Spann, G. W.

    1975-01-01

    The results of an effort to generate and apply automated classification of LANDSAT digital data to state of Georgia problems are presented. This phase centers on an analysis of the usefulness of LANDSAT digital data to provide land-use data for transportation planning. Hall County, Georgia was chosen as a test site because it is part of a seventeen-county area for which the Georgia Department of Transportation is currently designing a Transportation Planning Land-Use Simulation Model. The land-cover information derived from this study was compared to several other existing sources of land-use data for Hall County and used as input to this simulation model. The results indicate that there is difficulty in comparing LANDSAT-derived land-cover information with previous land-use information, since the LANDSAT data are acquired on an acre-by-acre grid basis while all previous land-use surveys for Hall County used land-use data on a parcel basis.

  8. Arteriviruses, Pegiviruses, and Lentiviruses Are Common among Wild African Monkeys.

    PubMed

    Bailey, Adam L; Lauck, Michael; Ghai, Ria R; Nelson, Chase W; Heimbruch, Katelyn; Hughes, Austin L; Goldberg, Tony L; Kuhn, Jens H; Jasinska, Anna J; Freimer, Nelson B; Apetrei, Cristian; O'Connor, David H

    2016-08-01

    Nonhuman primates (NHPs) are a historically important source of zoonotic viruses and are a gold-standard model for research on many human pathogens. However, with the exception of simian immunodeficiency virus (SIV) (family Retroviridae), the blood-borne viruses harbored by these animals in the wild remain incompletely characterized. Here, we report the discovery and characterization of two novel simian pegiviruses (family Flaviviridae) and two novel simian arteriviruses (family Arteriviridae) in wild African green monkeys from Zambia (malbroucks [Chlorocebus cynosuros]) and South Africa (vervet monkeys [Chlorocebus pygerythrus]). We examine several aspects of infection, including viral load, genetic diversity, evolution, and geographic distribution, as well as host factors such as age, sex, and plasma cytokines. In combination with previous efforts to characterize blood-borne RNA viruses in wild primates across sub-Saharan Africa, these discoveries demonstrate that in addition to SIV, simian pegiviruses and simian arteriviruses are widespread and prevalent among many African cercopithecoid (i.e., Old World) monkeys. Primates are an important source of viruses that infect humans and serve as an important laboratory model of human virus infection. Here, we discover two new viruses in African green monkeys from Zambia and South Africa. In combination with previous virus discovery efforts, this finding suggests that these virus types are widespread among African monkeys. Our analysis suggests that one of these virus types, the simian arteriviruses, may have the potential to jump between different primate species and cause disease. In contrast, the other virus type, the pegiviruses, are thought to reduce the disease caused by human immunodeficiency virus (HIV) in humans. However, we did not observe a similar protective effect in SIV-infected African monkeys coinfected with pegiviruses, possibly because SIV causes little to no disease in these hosts. 
Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  9. Arteriviruses, Pegiviruses, and Lentiviruses Are Common among Wild African Monkeys

    PubMed Central

    Bailey, Adam L.; Lauck, Michael; Ghai, Ria R.; Nelson, Chase W.; Heimbruch, Katelyn; Hughes, Austin L.; Goldberg, Tony L.; Jasinska, Anna J.; Freimer, Nelson B.; Apetrei, Cristian

    2016-01-01

    Nonhuman primates (NHPs) are a historically important source of zoonotic viruses and are a gold-standard model for research on many human pathogens. However, with the exception of simian immunodeficiency virus (SIV) (family Retroviridae), the blood-borne viruses harbored by these animals in the wild remain incompletely characterized. Here, we report the discovery and characterization of two novel simian pegiviruses (family Flaviviridae) and two novel simian arteriviruses (family Arteriviridae) in wild African green monkeys from Zambia (malbroucks [Chlorocebus cynosuros]) and South Africa (vervet monkeys [Chlorocebus pygerythrus]). We examine several aspects of infection, including viral load, genetic diversity, evolution, and geographic distribution, as well as host factors such as age, sex, and plasma cytokines. In combination with previous efforts to characterize blood-borne RNA viruses in wild primates across sub-Saharan Africa, these discoveries demonstrate that in addition to SIV, simian pegiviruses and simian arteriviruses are widespread and prevalent among many African cercopithecoid (i.e., Old World) monkeys. Primates are an important source of viruses that infect humans and serve as an important laboratory model of human virus infection. Here, we discover two new viruses in African green monkeys from Zambia and South Africa. In combination with previous virus discovery efforts, this finding suggests that these virus types are widespread among African monkeys. Our analysis suggests that one of these virus types, the simian arteriviruses, may have the potential to jump between different primate species and cause disease. In contrast, the other virus type, the pegiviruses, are thought to reduce the disease caused by human immunodeficiency virus (HIV) in humans. 
However, we did not observe a similar protective effect in SIV-infected African monkeys coinfected with pegiviruses, possibly because SIV causes little to no disease in these hosts. PMID:27170760

  10. The cost of model reference adaptive control - Analysis, experiments, and optimization

    NASA Technical Reports Server (NTRS)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1993-01-01

    In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single degree of freedom system and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because one of the first attempts to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. Optimization of weighting matrices is shown to help reduce the increase in required control effort. However, it was found that eventually the optimization resulted in a design that required an extremely high sampling rate for successful realization.
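    The control-effort behavior discussed in this record can be illustrated with a minimal MRAC sketch using the classic MIT adaptation rule on a scalar first-order plant. The plant, reference model, adaptation gain, and effort measure below are illustrative assumptions, not the systems or methods of the paper.

```python
# Minimal MRAC sketch (MIT rule) for a scalar first-order plant:
#   plant:     dy/dt  = -a*y  + b*u
#   reference: dym/dt = -am*ym + bm*r
# Control u = theta * r, with theta adapted to drive e = y - ym to zero.
# All parameter values are hypothetical, chosen only for illustration.
def simulate_mrac(a=1.0, b=0.5, am=2.0, bm=2.0, gamma=5.0,
                  r=1.0, dt=0.001, steps=20000):
    y = ym = theta = 0.0
    effort = 0.0                      # integral of |u|: a crude control-effort measure
    for _ in range(steps):
        u = theta * r
        e = y - ym
        theta += -gamma * e * r * dt  # MIT rule: dtheta/dt = -gamma * e * r
        y += (-a * y + b * u) * dt    # forward-Euler integration of the plant
        ym += (-am * ym + bm * r) * dt
        effort += abs(u) * dt
    return y, ym, effort

y, ym, effort = simulate_mrac()
print(abs(y - ym) < 0.05)  # tracking error is small after adaptation
```

Rerunning this sketch with a plant gain `b` further from the value the reference model implicitly assumes shows the accumulated `effort` grow, mirroring the paper's observation that plant/reference-model mismatch drives up control effort.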

  11. Updates to watershed modeling in the Potholes Reservoir basin, Washington-a supplement to Scientific Investigation Report 2009-5081

    USGS Publications Warehouse

    Mastin, Mark

    2012-01-01

    A previous collaborative effort between the U.S. Geological Survey and the Bureau of Reclamation resulted in a watershed model for four watersheds that discharge into Potholes Reservoir, Washington. Since the model was constructed, two new meteorological sites have been established that provide more reliable real-time information. The Bureau of Reclamation was interested in incorporating this new information into the existing watershed model developed in 2009, and adding measured snowpack information to update simulated results and to improve forecasts of runoff. This report includes descriptions of procedures to aid a user in making model runs, including a description of the Object User Interface for the watershed model with details on specific keystrokes to generate model runs for the contributing basins. A new real-time, data-gathering computer program automates the creation of the model input files and includes the new meteorological sites. The 2009 watershed model was updated with the new sites and validated by comparing simulated results to measured data. As in the previous study, the updated model (2012 model) does a poor job of simulating individual storms, but a reasonably good job of simulating seasonal runoff volumes. At three streamflow-gaging stations, the January 1 to June 30 retrospective forecasts of runoff volume for years 2010 and 2011 were within 40 percent of the measured runoff volume for five of the six comparisons, ranging from -39.4 to 60.3 percent difference. A procedure for collecting measured snowpack data and using the data in the watershed model for forecast model runs, based on the Ensemble Streamflow Prediction method, is described, with an example that uses 2004 snow-survey data.
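    The forecast evaluation in this record compares simulated and measured seasonal runoff volumes as a percent difference. A minimal sketch of that comparison follows; the gage names and volumes are hypothetical, not the study's data.

```python
def percent_difference(simulated, measured):
    """Percent difference of a simulated runoff volume relative to
    the measured volume; negative means the forecast was low."""
    return 100.0 * (simulated - measured) / measured

# Hypothetical Jan 1 - Jun 30 runoff volumes (simulated, measured), in acre-feet
forecasts = {"gage_A": (6500.0, 5900.0), "gage_B": (4100.0, 6800.0)}
for gage, (sim, obs) in forecasts.items():
    print(gage, round(percent_difference(sim, obs), 1))
```

By this metric a value of -39.4 percent, as in the report's worst in-range comparison, means the forecast undershot the measured volume by roughly two fifths.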

  12. LRO-LAMP Observations of Illumination Conditions in the Lunar South Pole

    NASA Astrophysics Data System (ADS)

    Mandt, K.; Greathouse, T. K.; Retherford, K. D.; Mazarico, E.; Gladstone, R.; Liu, Y.; Hendrix, A.; Hurley, D.; Lemelin, M.; Patterson, G. W.; Bowman-Cisneros, E.

    2016-12-01

    The south pole of the Moon is an area of great interest for space exploration and scientific research, because many low-lying regions are permanently shaded while adjacent topographic highs experience near constant sunlight. The lack of direct sunlight in permanently shaded regions (PSRs) provides cold enough conditions for them to potentially trap and retain large quantities of volatiles in their soils, while the locations that receive extended periods of sunlight could provide a reliable source of solar energy and relatively stable temperature conditions. Illumination conditions at the lunar south pole vary diurnally and seasonally, but on different timescales than days and seasons on the Earth. The most important advancements in understanding illumination conditions at the poles are provided by topographic mapping and illumination modeling. These efforts have provided estimates of the extent of PSRs and the percent of time that sunlit peaks are illuminated. They also help to constrain the thermal balance of the PSRs based on other sources of illumination. However, comparing model results with spacecraft observations can help to validate the models and provides ground truth for planning future exploration efforts. We have developed a new method for observing illumination conditions at the south pole using data taken by the LRO Lyman Alpha Mapping Project (LAMP), a far ultraviolet (FUV) imaging spectrograph. LAMP produces maps of the albedo of the upper 25-100 nm of lunar regolith using measurements of the brightness of reflected light relative to known light sources in daytime and nighttime conditions. Nighttime observations have been used previously to determine the abundance of surface frost within the PSRs and the surface porosity of regolith within the PSRs. The maps that have been used for these studies excluded scattered sunlight by restricting observations to nighttime conditions when the solar zenith angle is greater than 91°. 
However, by producing maps of the PSRs using data that were excluded from these previous studies, we are able to observe scattering of far-UV sunlight at night within the PSRs.

  13. The Effort Paradox: Effort Is Both Costly and Valued.

    PubMed

    Inzlicht, Michael; Shenhav, Amitai; Olivola, Christopher Y

    2018-04-01

    According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Moving Beyond Misconceptions: A New Model for Learning Challenges in Cognition

    NASA Astrophysics Data System (ADS)

    Slater, T. F.; Slater, S. J.

    2011-12-01

    For over 40 years, the science education community has given its attention to cataloging the substantial body of "misconceptions" in individuals' thinking about science, and to addressing the consequences of those misconceptions in the science classroom. Despite the tremendous amount of effort given to researching and disseminating information related to misconceptions, and the development of a theory of conceptual change to mitigate misconceptions, progress continues to be less than satisfying. An analysis of the literature and our own research has persuaded the CAPER Center for Astronomy and Physics Education Research to put forth a model that allows us to address students' learning difficulties in a more fruitful manner. Previously, much of the field's work binned erroneous student thinking into a single construct, and from that basis, curriculum developers and instructors addressed student misconceptions with a single instructional strategy. In contrast, this model suggests that "misconceptions" are a mixture of at least four learning barriers: incorrect factual information, inappropriately applied mental algorithms (phenomenological primitives), insufficient cognitive structures (e.g. spatial reasoning), and affective/emotional difficulties. Each of these types of barriers should be addressed with an appropriately designed instructional strategy. Initial applications of this model to learning problems in the Earth & Space Sciences have been fruitful, suggesting that an effort towards categorizing persistent learning difficulties in the geosciences beyond the level of "misconceptions" may allow our community to craft tailored and more effective learning experiences for our students and the general public.

  15. Reconciling fisheries catch and ocean productivity

    PubMed Central

    Stock, Charles A.; Asch, Rebecca G.; Cheung, William W. L.; Dunne, John P.; Friedland, Kevin D.; Lam, Vicky W. Y.; Sarmiento, Jorge L.; Watson, Reg A.

    2017-01-01

    Photosynthesis fuels marine food webs, yet differences in fish catch across globally distributed marine ecosystems far exceed differences in net primary production (NPP). We consider the hypothesis that ecosystem-level variations in pelagic and benthic energy flows from phytoplankton to fish, trophic transfer efficiencies, and fishing effort can quantitatively reconcile this contrast in an energetically consistent manner. To test this hypothesis, we enlist global fish catch data that include previously neglected contributions from small-scale fisheries, a synthesis of global fishing effort, and plankton food web energy flux estimates from a prototype high-resolution global earth system model (ESM). After removing a small number of lightly fished ecosystems, stark interregional differences in fish catch per unit area can be explained (r = 0.79) with an energy-based model that (i) considers dynamic interregional differences in benthic and pelagic energy pathways connecting phytoplankton and fish, (ii) depresses trophic transfer efficiencies in the tropics and, less critically, (iii) associates elevated trophic transfer efficiencies with benthic-predominant systems. Model catch estimates are generally within a factor of 2 of values spanning two orders of magnitude. Climate change projections show that the same macroecological patterns explaining dramatic regional catch differences in the contemporary ocean amplify catch trends, producing changes that may exceed 50% in some regions by the end of the 21st century under high-emissions scenarios. Models failing to resolve these trophodynamic patterns may significantly underestimate regional fisheries catch trends and hinder adaptation to climate change. PMID:28115722

  16. Internet of People: Opportunities and challenges for engaging stakeholders in watershed planning via the Web

    NASA Astrophysics Data System (ADS)

    Babbar-Sebens, M.

    2016-12-01

    Social computing technologies are transforming the way our society interacts and generates content on the Web via collective intelligence. Previously unimagined possibilities have arisen for using these technologies to engage stakeholders and involve them in policy making and planning efforts. While the Internet has been used in the past to support education and communication endeavors, we have developed a novel, web-based, interactive planning tool that engages the community in using science-based methods for the design of potential conservation practices on their landscape, and thereby, reducing undesirable impacts of extreme hydroclimatic events. The tool, Watershed REstoration using Spatio-Temporal Optimization of Resources (WRESTORE), uses a democratic voting process coupled with visualization interfaces, computational simulation and optimization models, and user modeling techniques to support a human-centered design approach. This human-centered design approach, which is reinforced by use of Web 2.0 technologies, has the potential to enable policy makers to connect to a larger community of stakeholders and directly engage them in environmental stewardship efforts. Additionally, the design framework can be used by watershed groups to plug-in their own hydrologic models, climate observations and forecasts, and various other simulation models unique to their watersheds. In this presentation, we will demonstrate the effectiveness of WRESTORE for designing alternatives of conservation practices in a HUC-11 Midwestern watershed, results of various experiments with a diverse set of test users and stakeholders, and discuss potential for future developments.

  17. Reconciling fisheries catch and ocean productivity.

    PubMed

    Stock, Charles A; John, Jasmin G; Rykaczewski, Ryan R; Asch, Rebecca G; Cheung, William W L; Dunne, John P; Friedland, Kevin D; Lam, Vicky W Y; Sarmiento, Jorge L; Watson, Reg A

    2017-02-21

    Photosynthesis fuels marine food webs, yet differences in fish catch across globally distributed marine ecosystems far exceed differences in net primary production (NPP). We consider the hypothesis that ecosystem-level variations in pelagic and benthic energy flows from phytoplankton to fish, trophic transfer efficiencies, and fishing effort can quantitatively reconcile this contrast in an energetically consistent manner. To test this hypothesis, we enlist global fish catch data that include previously neglected contributions from small-scale fisheries, a synthesis of global fishing effort, and plankton food web energy flux estimates from a prototype high-resolution global earth system model (ESM). After removing a small number of lightly fished ecosystems, stark interregional differences in fish catch per unit area can be explained (r = 0.79) with an energy-based model that (i) considers dynamic interregional differences in benthic and pelagic energy pathways connecting phytoplankton and fish, (ii) depresses trophic transfer efficiencies in the tropics and, less critically, (iii) associates elevated trophic transfer efficiencies with benthic-predominant systems. Model catch estimates are generally within a factor of 2 of values spanning two orders of magnitude. Climate change projections show that the same macroecological patterns explaining dramatic regional catch differences in the contemporary ocean amplify catch trends, producing changes that may exceed 50% in some regions by the end of the 21st century under high-emissions scenarios. Models failing to resolve these trophodynamic patterns may significantly underestimate regional fisheries catch trends and hinder adaptation to climate change.
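    The energy-based reasoning in this record (primary production attenuated by a trophic transfer efficiency at each step from phytoplankton to fish) can be sketched as a simple ceiling calculation. The NPP value, efficiencies, and number of trophic steps below are illustrative assumptions, not values from the ESM or the study.

```python
def potential_fish_production(npp, transfer_efficiency, trophic_steps):
    """Energy-based ceiling on fish production (same units as npp):
    primary production attenuated by the trophic transfer efficiency
    at each step from phytoplankton to fish."""
    return npp * transfer_efficiency ** trophic_steps

# Hypothetical values: NPP in g C m^-2 yr^-1; efficiencies contrast a
# depressed tropical pathway with an elevated benthic-predominant one
npp = 200.0
low_te, high_te = 0.08, 0.14
steps = 3
print(potential_fish_production(npp, low_te, steps))
print(potential_fish_production(npp, high_te, steps))
```

Because the efficiency is raised to the number of trophic steps, even a modest difference in per-step efficiency (0.08 vs. 0.14 here) produces a severalfold difference in potential fish production, which is how small trophodynamic contrasts can reconcile large interregional catch differences.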

  18. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.; Miller, Dwight Peter

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-system (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  19. Failed reciprocity in close social relationships and health: findings from the Whitehall II study.

    PubMed

    Chandola, Tarani; Marmot, Michael; Siegrist, Johannes

    2007-10-01

    To extend the model of effort-reward imbalance at work to close and more general social relationships and test the associations with different measures of health. Lack of reciprocity at work is associated with poorer health in a number of studies. However, few studies have analysed the effect of nonreciprocity in other kinds of social relationships on health. The Whitehall II Study is an ongoing prospective study of British civil servants (n=10308 at baseline in 1985-88). Cross-sectional data from the latest phase (7, n=6944 in 2002-04) were used in the analyses. The main exposure was a questionnaire measuring nonreciprocal social relations in partnership, parent-children, and general trusting relationships. Health measures included the SF-36 mental and physical component scores, General Health Questionnaire-30 depression subscale, Jenkins' Sleep disturbance questionnaire, and the Rose Angina questionnaire. Logistic and linear regression models were analysed, adjusted for potential confounders, and mediators of the association. Lack of reciprocity is associated with all measures of poorer health. This association attenuates after adjustment for previous health and additional confounders and mediators but remains significant in a majority of models. Negative social support from a close person is independently associated with reduced health, but adjusting for this effect does not eliminate the association of nonreciprocity with poor health. The effort-reward imbalance at work model has been extended to close and more general social relationships. Lack of reciprocity in partnership, parent-children and general trusting relationships is associated with poorer health.

  20. Recent advances in Ni-H2 technology at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Gonzalezsanabria, O. D.; Britton, D. L.; Smithrick, J. J.; Reid, M. A.

    1986-01-01

    The NASA Lewis Research Center has concentrated its efforts on advancing the Ni-H2 system technology for low Earth orbit applications. Component technology as well as the design principles were studied in an effort to understand the system behavior and failure mechanisms in order to increase performance and extend cycle life. The design principles were previously addressed. Component development is discussed, in particular the separator and nickel electrode, and how these efforts will advance Ni-H2 system technology.

  1. Machine learning and docking models for Mycobacterium tuberculosis topoisomerase I.

    PubMed

    Ekins, Sean; Godbole, Adwait Anand; Kéri, György; Orfi, Lászlo; Pato, János; Bhat, Rajeshwari Subray; Verma, Rinkee; Bradley, Erin K; Nagaraja, Valakunja

    2017-03-01

    There is a shortage of compounds directed towards new targets apart from those targeted by the FDA-approved drugs used against Mycobacterium tuberculosis. Topoisomerase I (Mttopo I) is an essential mycobacterial enzyme and a promising target in this regard. However, it suffers from a shortage of known inhibitors. We have previously used computational approaches such as homology modeling and docking to propose 38 FDA-approved drugs for testing and identified several active molecules. To follow on from this, we now describe the in vitro testing of a library of 639 compounds. These data were used to create machine learning models for Mttopo I, which were further validated. The combined Mttopo I Bayesian model had a 5-fold cross-validation receiver operating characteristic of 0.74 and sensitivity, specificity and concordance values above 0.76, and was used to select commercially available compounds for testing in vitro. The recently described crystal structure of Mttopo I was also compared with the previously described homology model and then used to dock the Mttopo I actives norclomipramine and imipramine. In summary, we describe our efforts to identify small molecule inhibitors of Mttopo I using a combination of machine learning modeling and docking studies in conjunction with screening of the selected molecules for enzyme inhibition. We demonstrate the experimental inhibition of Mttopo I by small molecule inhibitors and show that the enzyme can be readily targeted for lead molecule development. Copyright © 2017 Elsevier Ltd. All rights reserved.
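    The modeling workflow described (a Bayesian classifier over compound features, evaluated by 5-fold cross-validated ROC) can be illustrated with a minimal stand-in. The sketch below trains a Bernoulli naive Bayes model on synthetic binary fingerprints; all data and parameter choices are assumptions for illustration, not the paper's actual model or dataset.

    ```python
    import numpy as np

    # Stand-in for a Bayesian activity model on binary fingerprints: a Bernoulli
    # naive Bayes classifier evaluated by 5-fold cross-validated ROC AUC.
    # The data are synthetic; nothing here reproduces the paper's model.
    def fit_nb(X, y, alpha=1.0):
        """Laplace-smoothed class priors and per-feature Bernoulli parameters."""
        priors = np.array([(y == c).mean() for c in (0, 1)])
        theta = np.array([(X[y == c].sum(0) + alpha) / ((y == c).sum() + 2 * alpha)
                          for c in (0, 1)])
        return np.log(priors), np.log(theta), np.log(1.0 - theta)

    def nb_scores(model, X):
        """Log-odds that each row belongs to class 1 (the 'active' class)."""
        logp, lpos, lneg = model
        loglik = X @ lpos.T + (1 - X) @ lneg.T + logp
        return loglik[:, 1] - loglik[:, 0]

    def roc_auc(scores, y):
        """AUC as the probability a positive outscores a negative (ties = 0.5)."""
        pos, neg = scores[y == 1], scores[y == 0]
        return (np.mean(pos[:, None] > neg[None, :])
                + 0.5 * np.mean(pos[:, None] == neg[None, :]))

    rng = np.random.default_rng(1)
    n, d = 200, 64
    y = rng.integers(0, 2, n)
    # actives set fingerprint bits with probability 0.6, inactives 0.4
    X = (rng.random((n, d)) < np.where(y[:, None] == 1, 0.6, 0.4)).astype(float)

    folds = np.arange(n) % 5
    aucs = [roc_auc(nb_scores(fit_nb(X[folds != k], y[folds != k]),
                              X[folds == k]), y[folds == k]) for k in range(5)]
    mean_auc = float(np.mean(aucs))  # well above the 0.5 chance level
    ```

    The same cross-validated AUC bookkeeping applies whatever Bayesian variant is used; only the fit and scoring functions change.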

  2. Narrative event boundaries, reading times, and expectation.

    PubMed

    Pettijohn, Kyle A; Radvansky, Gabriel A

    2016-10-01

    During text comprehension, readers create mental representations of the described events, called situation models. When new information is encountered, these models must be updated or new ones created. Consistent with the event indexing model, previous studies have shown that when readers encounter an event shift, reading times often increase. However, such increases are not consistently observed. This paper addresses this inconsistency by examining the extent to which reading-time differences observed at event shifts reflect an unexpectedness in the narrative rather than processes involved in model updating. In two reassessments of prior work, event shifts known to increase reading time were rated as less expected, and expectedness ratings significantly predicted reading time. In three new experiments, participants read stories in which an event shift was or was not foreshadowed, thereby influencing expectedness of the shift. Experiment 1 revealed that readers do not expect event shifts, but foreshadowing eliminates this. Experiment 2 showed that foreshadowing does not affect identification of event shifts. Finally, Experiment 3 found that, although reading times increased when an event shift was not foreshadowed, they were not different from controls when it was. Moreover, responses to memory probes were slower following an event shift regardless of foreshadowing, suggesting that situation model updating had taken place. Overall, the results support the idea that previously observed reading time increases at event shifts reflect, at least in part, a reader's unexpected encounter with a shift rather than an increase in processing effort required to update a situation model.

  3. Physical and cognitive effort discounting across different reward magnitudes: Tests of discounting models

    PubMed Central

    Ostaszewski, Paweł

    2017-01-01

    The effort required to obtain a rewarding outcome is an important factor in decision-making. Describing reward devaluation by increasing effort intensity is essential to understanding human preferences, because every action and choice that we make is in itself effortful. To investigate how reward valuation is affected by physical and cognitive effort, we compared mathematical discounting functions derived from research on discounting. Seven discounting models were tested across three different reward magnitudes. To test the models, data were collected from a total of 114 participants recruited from the general population. For one-parameter models (hyperbolic, exponential, and parabolic), the data were explained best by the exponential model, as indicated by the percentage of explained variance. However, after introducing an additional parameter, data obtained in the cognitive and physical effort conditions were best described by the power function model. Further analysis, using the second-order Akaike and the Bayesian information criteria, which account for model complexity, allowed us to identify the best model among all tested. We found that the power function best described the data, which corresponds to the conventional analyses based on the R² measure. This supports the conclusion that the function best describing reward devaluation by physical and cognitive effort is concave and differs from those that describe delay or probability discounting. In addition, consistent magnitude effects were observed that correspond to those in delay discounting research. PMID:28759631
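    The model-comparison procedure described (fitting one- and two-parameter discounting functions, then ranking them while penalizing complexity) can be sketched as follows. The functional forms, parameter names, the synthetic data, and the use of plain AIC instead of the paper's second-order criteria are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Candidate effort-discounting functions fit by nonlinear least squares and
    # compared via AIC. Forms and data are illustrative, not the study's.
    def hyperbolic(e, k):            # V = 1 / (1 + k*E)
        return 1.0 / (1.0 + k * e)

    def exponential(e, k):           # V = exp(-k*E)
        return np.exp(-k * e)

    def power_fn(e, k, s):           # V = 1 - k*E**s, concave when s < 1
        return 1.0 - k * e**s

    def aic(n_obs, rss, n_par):
        """Akaike information criterion from a least-squares fit."""
        return n_obs * np.log(rss / n_obs) + 2 * n_par

    rng = np.random.default_rng(0)
    effort = np.linspace(0.1, 10.0, 30)
    # synthetic subjective values generated from a concave power function
    value = power_fn(effort, 0.25, 0.6) + rng.normal(0, 0.01, effort.size)

    fits = {}
    for name, f, p0 in [("hyperbolic", hyperbolic, [0.1]),
                        ("exponential", exponential, [0.1]),
                        ("power", power_fn, [0.1, 0.5])]:
        popt, _ = curve_fit(f, effort, value, p0=p0, maxfev=10000)
        rss = float(np.sum((value - f(effort, *popt)) ** 2))
        fits[name] = aic(effort.size, rss, len(popt))

    best = min(fits, key=fits.get)   # lower AIC = better penalized fit
    ```

    Because the extra shape parameter is penalized, the two-parameter power form only wins when its improvement in residual error outweighs the complexity cost, which is the logic the study applies with AICc and BIC.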

  4. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts to deploy the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  5. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
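    The core of HMM-based intent prediction is filtering: maintaining a belief over hidden task states as observations arrive. A minimal sketch of the normalized forward pass is below; the two states, three observation symbols, and all probabilities are toy assumptions, not values trained on Robonaut tele-operation data.

    ```python
    import numpy as np

    # Normalized forward pass for a discrete HMM: maintain a belief over hidden
    # task states ("reach", "grasp") as discretized motion symbols arrive.
    A = np.array([[0.9, 0.1],              # state transition probabilities
                  [0.2, 0.8]])
    B = np.array([[0.7, 0.2, 0.1],         # emission probs over 3 motion symbols
                  [0.1, 0.3, 0.6]])
    pi = np.array([0.5, 0.5])              # initial state distribution

    def filter_states(obs):
        """Return P(state_t | obs_1..t) for each step t."""
        alpha = pi * B[:, obs[0]]
        alpha = alpha / alpha.sum()
        beliefs = [alpha]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]  # predict, then weight by the evidence
            alpha = alpha / alpha.sum()
            beliefs.append(alpha)
        return np.array(beliefs)

    beliefs = filter_states([0, 0, 2, 2, 2])
    # repeated symbol-2 observations shift the belief toward the second state
    ```

    A tele-operation interface can act on `beliefs[-1]` at every step, e.g. engaging an autonomous grasp behavior once the belief in that state crosses a threshold.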

  6. Miss-distance indicator for tank main guns

    NASA Astrophysics Data System (ADS)

    Bornstein, Jonathan A.; Hillis, David B.

    1996-06-01

    Tank main gun systems must possess extremely high levels of accuracy to perform successfully in battle. Under some circumstances, the first round fired in an engagement may miss the intended target, and it becomes necessary to rapidly correct fire. A breadboard automatic miss-distance indicator system was previously developed to assist in this process. The system, which would be mounted on a 'wingman' tank, consists of a charged-coupled device (CCD) camera and computer-based image-processing system, coupled with a separate infrared sensor to detect muzzle flash. For the system to be successfully employed with current generation tanks, it must be reliable, be relatively low cost, and respond rapidly maintaining current firing rates. Recently, the original indicator system was developed further in an effort to assist in achieving these goals. Efforts have focused primarily upon enhanced image-processing algorithms, both to improve system reliability and to reduce processing requirements. Intelligent application of newly refined trajectory models has permitted examination of reduced areas of interest and enhanced rejection of false alarms, significantly improving system performance.

  7. Research and demonstration to improve air quality for the U.S. animal feeding operations in the 21st century - a critical review.

    PubMed

    Ni, Ji-Qin

    2015-05-01

    There was increasing interest in reducing the production and emission of air pollutants to improve air quality for animal feeding operations (AFOs) in the U.S. in the 21st century. Research was focused on the identification, quantification, characterization, and modeling of air pollution; the effects of emissions; and methodologies and technologies for scientific research and pollution control. Mitigation efforts targeted the pre-excretion, pre-release, pre-emission, and post-emission stages. More emphasis was placed on reducing pollutant emissions than on improving indoor air quality. Research and demonstrations were generally continuations and improvements of previous efforts. Most demonstrated technologies were still at a limited scale of application. Future efforts are needed in many fundamental and applied research areas. Advancement in instrumentation, computer technology, and biological sciences and genetic engineering is critical to bring major changes in this area. Development in research and demonstration will depend on the actual political, economic, and environmental situations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. A nonlinear viscoelastic approach to durability predictions for polymer based composite structures

    NASA Technical Reports Server (NTRS)

    Brinson, Hal F.

    1991-01-01

    Current industry approaches for the durability assessment of metallic structures are briefly reviewed. For polymer based composite structures, it is suggested that new approaches must be adopted to include memory or viscoelastic effects which could lead to delayed failures that might not be predicted using current techniques. A durability or accelerated life assessment plan for fiber reinforced plastics (FRP) developed and documented over the last decade or so is reviewed and discussed. Limitations to the plan are outlined and suggestions to remove the limitations are given. These include the development of a finite element code to replace the previously used lamination theory code and the development of new specimen geometries to evaluate delamination failures. The new DCB model is reviewed and results are presented. Finally, it is pointed out that new procedures are needed to determine interfacial properties and current efforts underway to determine such properties are reviewed. Suggestions for additional efforts to develop a consistent and accurate durability predictive approach for FRP structures are outlined.

  9. A nonlinear viscoelastic approach to durability predictions for polymer based composite structures

    NASA Technical Reports Server (NTRS)

    Brinson, Hal F.; Hiel, C. C.

    1990-01-01

    Current industry approaches for the durability assessment of metallic structures are briefly reviewed. For polymer based composite structures, it is suggested that new approaches must be adopted to include memory or viscoelastic effects which could lead to delayed failures that might not be predicted using current techniques. A durability or accelerated life assessment plan for fiber reinforced plastics (FRP) developed and documented over the last decade or so is reviewed and discussed. Limitations to the plan are outlined and suggestions to remove the limitations are given. These include the development of a finite element code to replace the previously used lamination theory code and the development of new specimen geometries to evaluate delamination failures. The new DCB model is reviewed and results are presented. Finally, it is pointed out that new procedures are needed to determine interfacial properties and current efforts underway to determine such properties are reviewed. Suggestions for additional efforts to develop a consistent and accurate durability predictive approach for FRP structures are outlined.

  10. Drive-train dynamics technology - State-of-the-art and design of a test facility for advanced development

    NASA Technical Reports Server (NTRS)

    Badgley, R. H.; Fleming, D. P.; Smalley, A. J.

    1975-01-01

    A program for the development and verification of drive-train dynamic technology is described along with its basis and the results expected from it. A central feature of this program is a drive-train test facility designed for the testing and development of advanced drive-train components, including shaft systems, dampers, and couplings. Previous efforts in designing flexible dynamic drive-train systems are reviewed, and the present state of the art is briefly summarized. The design of the test facility is discussed with major attention given to the formulation of the test-rig concept, dynamic scaling of model shafts, and the specification of design parameters. Specific efforts envisioned for the test facility are briefly noted, including evaluations of supercritical test shafts, stability thresholds for various sources and types of instabilities that can exist in shaft systems, effects of structural flexibility on the dynamic performance of dampers, and methods for vibration control in two-level and three-level flexible shaft systems.

  11. A numerical model for thermal energy storage systems utilising encapsulated phase change materials

    NASA Astrophysics Data System (ADS)

    Jacob, Rhys; Saman, Wasim; Bruno, Frank

    2016-05-01

    In an effort to reduce the cost of thermal energy storage for concentrated solar power plants, a thermocline storage concept was investigated. Two systems were investigated: a sensible-only system and an encapsulated phase change system. Both have the potential to reduce the storage tank volume and/or the cost of the filler material, thereby reducing the cost of the system when compared to current two-tank molten salt systems. The objective of the current paper is to create a numerical model capable of designing and simulating the aforementioned thermocline storage concepts in the open-source programming language Python. The results of the current study are compared to previous numerical results and are found to be in good agreement.
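    A packed-bed thermocline of the kind described is commonly modeled with a two-equation (Schumann-type) energy balance: the fluid advects heat while exchanging it volumetrically with the filler. The Python sketch below is a minimal explicit finite-difference version of that idea; all material properties and operating values are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    # Minimal 1-D two-equation (Schumann-type) model of a packed-bed thermocline:
    # the fluid advects heat and exchanges it with the solid filler through a
    # volumetric coefficient h_v. All values are illustrative assumptions.
    n, L = 50, 10.0                  # grid cells, tank height [m]
    dz = L / n
    u = 1e-3                         # interstitial fluid velocity [m/s]
    rho_f, c_f = 1800.0, 1500.0      # fluid density [kg/m^3], specific heat [J/kg/K]
    rho_s, c_s = 2600.0, 900.0       # filler density and specific heat
    eps = 0.4                        # bed porosity
    h_v = 500.0                      # volumetric heat-transfer coeff. [W/m^3/K]

    Tf = np.full(n, 300.0)           # fluid temperature [C], initially cold
    Ts = np.full(n, 300.0)           # solid (filler) temperature [C]
    T_in = 550.0                     # hot inlet temperature while charging

    dt = 0.5                         # time step [s] (explicit, CFL-safe here)
    for _ in range(20000):           # charge for 10,000 s
        Tf_up = np.concatenate(([T_in], Tf[:-1]))    # upwind neighbor values
        dTf = -u * (Tf - Tf_up) / dz + h_v * (Ts - Tf) / (eps * rho_f * c_f)
        dTs = h_v * (Tf - Ts) / ((1 - eps) * rho_s * c_s)
        Tf, Ts = Tf + dt * dTf, Ts + dt * dTs

    # After charging, a thermocline separates hot fluid near the inlet (cell 0)
    # from cold fluid near the outlet (cell n-1).
    ```

    With these assumed properties the thermal front travels at roughly u·ε·ρf·cf / (ε·ρf·cf + (1−ε)·ρs·cs) ≈ 0.44·u, so after 10,000 s it sits near mid-tank, which is the stratification a thermocline design exploits.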

  12. Will the "Fixes" Fall Flat? Prospects for Quality Measures and Payment Incentives to Control Healthcare Spending.

    PubMed

    Hauswald, Erik; Sklar, David

    2017-04-01

    Payment systems in the US healthcare system have rewarded physicians for services and attempted to control healthcare spending, with rewards and penalties based upon projected goals for future spending. The incorporation of quality goals and alternatives to fee-for-service was introduced to replace the previous system of rewards and penalties. We describe the history of the US healthcare payment system, focusing on Medicare and the efforts to control spending through the Sustainable Growth Rate. We describe the latest evolution of the payment system, which emphasizes quality measurement and alternative payment models. We conclude with suggestions for how to influence physician behavior through education and payment reform so that their behavior aligns with alternative care models to control spending in the future.

  13. Point defect stability in a semicoherent metallic interface

    NASA Astrophysics Data System (ADS)

    González, C.; Iglesias, R.; Demkowicz, M. J.

    2015-02-01

    We present a comprehensive density functional theory (DFT) -based study of different aspects of one vacancy and He impurity atom behavior at semicoherent interfaces between the low-solubility transition metals Cu and Nb. Such interfaces have not been previously modeled using DFT. A thorough analysis of the stability and mobility of the two types of defects at the interfaces and neighboring internal layers has been performed and the results have been compared to the equivalent cases in the pure metallic matrices. The different behavior of fcc and bcc metals on both sides of the interface has been specifically assessed. The modeling effort undertaken is the first attempt to study the stability and defect energetics of noncoherent Cu/Nb interfaces from first principles, in order to assess their potential use in radiation-resistant materials.

  14. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s² plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.

  15. CFD Evaluation of a 3rd Generation LDI Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Mongia, Hukam; Lee, Phil

    2017-01-01

    An effort was undertaken to perform CFD analysis of fluid flow in Lean-Direct Injection (LDI) combustors with axial swirl-venturi elements for next-generation LDI-3 combustor design. The National Combustion Code (NCC) was used to perform non-reacting and two-phase reacting flow computations for a nineteen-element injector array arranged in a three-module, 7-5-7 element configuration. All computations were performed with a consistent approach of mesh-optimization, spray-modeling, ignition and kinetics-modeling with the NCC. Computational predictions of the aerodynamics of the injector were used to arrive at an optimal injector design that meets effective area and fuel-air mixing criteria. LDI-3 emissions (EINOx, EICO and UHC) were compared with the previous generation LDI-2 combustor experimental data at representative engine cycle conditions.

  16. Propulsion Investigation for Zero and Near-Zero Emissions Aircraft

    NASA Technical Reports Server (NTRS)

    Snyder, Christopher A.; Berton, Jeffrey J.; Brown, Gerald V.; Dolce, James L.; Dravid, Marayan V.; Eichenberg, Dennis J.; Freeh, Joshua E.; Gallo, Christopher A.; Jones, Scott M.; Kundu, Krishna P.

    2009-01-01

    As world emissions are further scrutinized to identify areas for improvement, aviation's contribution to the problem can no longer be ignored. Previous studies of zero or near-zero emissions aircraft suggested aircraft and propulsion system sizes; the present effort performs propulsion system and subsystem layout and propellant tankage analyses to verify the weight-scaling relationships. These efforts could be used to identify and guide subsequent work on systems and subsystems to achieve viable aircraft system emissions goals. Previous work quickly focused these efforts on propulsion systems for 70- and 100-passenger aircraft. Propulsion systems modeled included hydrogen-fueled gas turbines and fuel cells; some preliminary estimates combined these two systems. Hydrogen gas-turbine engines with advanced combustor technology could realize significant reductions in nitrogen oxide emissions. Hydrogen fuel cell propulsion systems were further laid out, and more detailed analysis identified the systems needed and the weight goals for a viable overall system weight. Results show significant, necessary reductions in overall weight, predominantly in the fuel cell stack and the power management and distribution subsystems, to achieve reasonable overall aircraft sizes and weights. Preliminary conceptual analyses for a combination of gas-turbine and fuel cell systems were also performed, and further studies were recommended. Using gas-turbine engines combined with fuel cell systems can reduce the fuel cell propulsion system weight, but at higher fuel usage than using the fuel cell alone.

  17. A new remote hazard and risk assessment framework for glacial lakes in the Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Rounce, David R.; McKinney, Daene C.; Lala, Jonathan M.; Byers, Alton C.; Watson, C. Scott

    2016-08-01

    Glacial lake outburst floods (GLOFs) pose a significant threat to downstream communities and infrastructure due to their potential to rapidly unleash stored lake water. The most common triggers of these GLOFs are mass movement entering the lake and/or the self-destruction of the terminal moraine due to hydrostatic pressures or a buried ice core. This study initially uses previous qualitative and quantitative assessments to understand the hazards associated with eight glacial lakes in the Nepal Himalaya that are widely considered to be highly dangerous. The previous assessments yield conflicting classifications with respect to each glacial lake, which spurred the development of a new holistic, reproducible, and objective approach based solely on remotely sensed data. This remote hazard assessment analyzes mass movement entering the lake, the stability of the moraine, and lake growth in conjunction with a geometric GLOF to determine the downstream impacts such that the present and future risk associated with each glacial lake may be quantified. The new approach is developed within a hazard, risk, and management action framework with the aim that this remote assessment may guide future field campaigns, modeling efforts, and ultimately risk-mitigation strategies. The remote assessment was found to provide valuable information regarding the hazards faced by each glacial lake and results were discussed within the context of the current state of knowledge to help guide future efforts.

  18. Assessing the feasibility, cost, and utility of developing models of human performance in aviation

    NASA Technical Reports Server (NTRS)

    Stillwell, William

    1990-01-01

    The purpose of the effort outlined in this briefing was to determine whether models exist or can be developed that can be used to address aviation automation issues. A multidisciplinary team has been assembled to undertake this effort, including experts in human performance, team/crew, and aviation system modeling, and aviation data used as input to such models. The project consists of two phases, a requirements assessment phase that is designed to determine the feasibility and utility of alternative modeling efforts, and a model development and evaluation phase that will seek to implement the plan (if a feasible cost effective development effort is found) that results from the first phase. Viewgraphs are given.

  19. A Comparative Analysis of Speed Profile Models for Ankle Pointing Movements: Evidence that Lower and Upper Extremity Discrete Movements are Controlled by a Single Invariant Strategy

    PubMed Central

    Michmizos, Konstantinos P.; Vaisman, Lev; Krebs, Hermano Igo

    2014-01-01

    Little is known about whether, and to what extent, our knowledge of how the central nervous system controls the upper extremities (UE) can generalize to the lower limbs. Our continuous efforts to design the ideal adaptive robotic therapy for the lower limbs of stroke patients and children with cerebral palsy highlighted the importance of analyzing and modeling the kinematics of the lower limbs, in general, and those of the ankle joints, in particular. We recruited 15 young healthy adults that performed in total 1,386 visually evoked, visually guided, and target-directed discrete pointing movements with their ankle in dorsal–plantar and inversion–eversion directions. Using a non-linear, least-squares error-minimization procedure, we estimated the parameters for 19 models, which were initially designed to capture the dynamics of upper limb movements of various complexity. We validated our models based on their ability to reconstruct the experimental data. Our results suggest a remarkable similarity between the top-performing models that described the speed profiles of ankle pointing movements and the ones previously found for the UE both during arm reaching and wrist pointing movements. Among the top performers were the support-bounded lognormal and the beta models that have a neurophysiological basis and have been successfully used in upper extremity studies with normal subjects and patients. Our findings suggest that the same model can be applied to different “human” hardware, perhaps revealing a key invariant in human motor control. These findings have a great potential to enhance our rehabilitation efforts in any population with lower extremity deficits by, for example, assessing the level of motor impairment and improvement as well as informing the design of control algorithms for therapeutic ankle robots. PMID:25505881
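    The model-fitting step described (nonlinear least-squares estimation of speed-profile parameters) can be illustrated with one classic candidate, the minimum-jerk speed profile. The data below are synthetic, and this profile is just one member of the family of models such studies compare.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # One classic speed-profile candidate, the minimum-jerk profile, fit to a
    # synthetic speed trace by nonlinear least squares. Amplitude D and
    # duration T are the free parameters; all numbers are illustrative.
    def min_jerk_speed(t, D, T):
        """Speed of a minimum-jerk point-to-point movement (zero outside [0, T])."""
        tau = np.clip(t / T, 0.0, 1.0)
        return (30.0 * D / T) * (tau**2 - 2.0 * tau**3 + tau**4)

    t = np.linspace(0.0, 0.5, 100)
    rng = np.random.default_rng(2)
    speed = min_jerk_speed(t, 0.2, 0.5) + rng.normal(0, 0.005, t.size)

    popt, _ = curve_fit(min_jerk_speed, t, speed, p0=[0.1, 0.4])
    D_hat, T_hat = popt              # should recover D ~ 0.2, T ~ 0.5
    ```

    Repeating this fit for each candidate profile and scoring reconstruction error is the comparison loop the study runs over its 19 models.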

  20. Mathematical simulation of forced expiration.

    PubMed

    Elad, D; Kamm, R D; Shapiro, A H

    1988-07-01

    Flow limitation during forced expiration is simulated by a mathematical model. This model draws on the pressure-area law obtained in the accompanying paper, and the methods of analysis for one-dimensional flow in collapsible tubes developed by Shapiro (Trans. ASME J. Biomech. Eng. 99: 126-147, 1977). These methods represent an improvement over previous models in that 1) the effects of changing lung volume and of parenchymal-bronchial interdependence are simulated; 2) a more realistic representation of collapsed airways is employed; 3) a solution is obtained mouthward of the flow-limiting site by allowing for a smooth transition from sub- to supercritical flow speeds, then matching mouth pressure by imposing an elastic jump (an abrupt transition from super- to subcritical flow speeds) at the appropriate location; and 4) the effects of levels of effort (or vacuum pressure) in excess of those required to produce incipient flow limitation are examined, including the effects of potential physiological limitation.
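    The flow-limitation mechanism underlying this model is that flow in a collapsible tube cannot exceed A·c, where c = sqrt((A/ρ)·dP/dA) is the local wave speed set by the tube's pressure-area law. The sketch below computes this choked-flow bound for a toy tube law; the law and all values are illustrative assumptions, not the paper's pressure-area relation.

    ```python
    import numpy as np

    # Choked ("wave-speed-limited") flow in a collapsible tube: volume flow
    # cannot exceed A*c, where c = sqrt((A/rho) * dP/dA) is the local wave
    # speed from the tube's pressure-area law. The tanh law is a toy choice.
    rho = 1.2                              # gas density [kg/m^3]
    A0 = 1e-4                              # reference cross-sectional area [m^2]

    def area(p):
        """Toy pressure-area law: area grows from ~0 (collapsed) toward 2*A0."""
        return A0 * (1.0 + np.tanh(p / 1000.0))

    p = np.linspace(-2000.0, 2000.0, 400)  # transmural pressure [Pa]
    A = area(p)
    dAdp = np.gradient(A, p)               # tube compliance dA/dP
    c = np.sqrt(A / (rho * dAdp))          # local wave speed [m/s]
    q_max = A * c                          # choked volume flow [m^3/s]
    # Raising expiratory effort beyond the point where fluid speed reaches c
    # cannot raise the flow above q_max at the flow-limiting site.
    ```

    This is why, in the model, effort in excess of that required for incipient flow limitation changes conditions downstream of the choke point (e.g. via an elastic jump) without increasing the expired flow.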

  1. Aurally-adequate time-frequency analysis for scattered sound in auditoria

    NASA Astrophysics Data System (ADS)

    Norris, Molly K.; Xiang, Ning; Kleiner, Mendel

    2005-04-01

    The goal of this work was to apply an aurally-adequate time-frequency analysis technique to the analysis of sound scattering effects in auditoria. Time-frequency representations were developed in an effort that takes binaural hearing into account, with a specific implementation of the interaural cross-correlation process. A model of the human auditory system was implemented on the MATLAB platform based on two previous models [A. Härmä and K. Palomäki, HUTear, Espoo, Finland; and M. A. Akeroyd, A Binaural Cross-correlogram Toolbox for MATLAB (2001), University of Sussex, Brighton]. The model's stages include proper frequency selectivity, the conversion of the mechanical motion of the basilar membrane to neural impulses, and binaural hearing effects. The model was then used in the analysis of room impulse responses with varying scattering characteristics. This paper discusses the analysis results using simulated and measured room impulse responses. [Work supported by the Frank H. and Eva B. Buck Foundation.]

  2. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    PubMed

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
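    The deterministic take-the-best (TTB) strategy referenced here has a simple algorithmic core: inspect cues in descending order of validity and decide on the first one that discriminates. A minimal sketch follows, with illustrative cue vectors and validities; the paper's probabilistic extension additionally assigns each cue an error probability.

    ```python
    # Deterministic take-the-best (TTB): inspect cues in descending order of
    # validity and choose the option favored by the first discriminating cue.
    # Cue vectors and validities below are illustrative.
    def take_the_best(cues_a, cues_b, validities):
        """Return 'A', 'B', or 'guess' for a paired comparison on binary cues."""
        order = sorted(range(len(validities)),
                       key=lambda i: validities[i], reverse=True)
        for i in order:
            if cues_a[i] != cues_b[i]:                 # first discriminating cue
                return "A" if cues_a[i] > cues_b[i] else "B"
        return "guess"                                  # no cue discriminates

    choice = take_the_best([1, 0, 1], [1, 1, 0], [0.9, 0.8, 0.7])
    # the most valid cue ties, so the second cue decides in favor of B
    ```

    Unlike weighted-additive integration, TTB ignores every cue after the first discriminating one, which is exactly what makes the two strategies distinguishable in choice data.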

  3. Development of high resolution simulations of the atmospheric environment using the MASS model

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Zack, John W.; Karyampudi, V. Mohan

    1989-01-01

    Numerical simulations were performed with a very high resolution (7.25 km) version of the MASS model (Version 4.0) in an effort to diagnose the vertical wind shear and static stability structure during the Shuttle Challenger disaster which occurred on 28 January 1986. These meso-beta scale simulations reveal that the strongest vertical wind shears were concentrated in the 200 to 150 mb layer at 1630 GMT, i.e., at about the time of the disaster. These simulated vertical shears were the result of two primary dynamical processes. The juxtaposition of both of these processes produced a shallow (30 mb deep) region of strong vertical wind shear, and hence, low Richardson number values during the launch time period. Comparisons with the Cape Canaveral (XMR) rawinsonde indicates that the high resolution MASS 4.0 simulation more closely emulated nature than did previous simulations of the same event with the GMASS model.

  4. Surprise braking trials, time-to-collision judgments, and "first look" maneuvers under realistic rear-end crash scenarios

    DOT National Transportation Integrated Search

    2005-08-01

    This project continues to build upon the foundation provided by the human factors experimentation conducted in the previous Crash Avoidance Metrics Partnership (CAMP) Forward Collision Warning (FCW) system efforts. As in the previous CAMP FCW researc...

  5. Generalized topology for resonators having N commensurate harmonics

    NASA Astrophysics Data System (ADS)

    Danzi, Francesco; Gibert, James M.; Frulla, Giacomo; Cestino, Enrico

    2018-04-01

    Despite the ubiquity of both linear and nonlinear multimember resonators in MEMS and kinetic energy harvesting devices, very few research efforts examine the effect of member orientation on a resonator's dynamic behavior. Previous efforts to design this type of resonator constrain the members to have relative orientations of 0° or 90° to each other, i.e., the elements are connected inline with adjoining members or are perpendicular to adjoining members. This work expands upon the existing body of research by considering the effect of the relative orientation between members on the dynamic behavior of the system. In this manuscript, we derive a generalized reduced-order model for the design of a multi-member planar resonator that has integer-multiple modal frequencies. The model is based on a Rayleigh-Ritz approximation where the number of degrees of freedom equals the number of structural members in the resonator. The analysis allows the generation of design curves representing all the possible solutions for modal frequencies that are commensurate. The generalized model, valid for an N-DOF structure, is then restricted to 2- and 3-DOF resonators, where the linear dynamic behavior is investigated in depth. Furthermore, this analysis demonstrates a rule of thumb: relaxing restrictions on the relative orientation of members in a planar structure allows the structure to exhibit exactly N commensurable frequencies if it contains N members.

  6. Effects of simulated rare earth recycling wastewaters on biological nitrification

    DOE PAGES

    Fujita, Yoshiko; Barnes, Joni; Eslamimanesh, Ali; ...

    2015-07-16

    Current efforts to increase domestic availability of rare-earth element (REE) supplies by recycling and expanded ore processing will result in increased generation of associated wastewaters. In some cases disposal to a sewage treatment plant may be favored, but plant performance must be maintained. To assess the potential effects of such wastewaters on biological wastewater treatment, the model nitrifying organisms Nitrosomonas europaea and Nitrobacter winogradskyi were exposed to simulated wastewaters containing varying levels of yttrium or europium (10, 50 and 100 ppm) and the REE extractant tributyl phosphate (TBP, at 0.1 g/L). Y and Eu additions above 10 ppm inhibited N. europaea activity, even when virtually all of the REE was initially insoluble. The provision of TBP together with Eu increased inhibition of nitrite production by N. europaea, although TBP alone did not substantially alter nitrifying activity. N. winogradskyi was more sensitive to the simulated wastewaters, with even 10 ppm Eu or Y inducing significant inhibition, and a complete shutdown of nitrifying activity occurred in the presence of TBP. To analyze the availability of REEs in aqueous solutions, REE solubility was calculated using the previously developed MSE (Mixed-Solvent Electrolyte) thermodynamic model. The model calculations reveal a strong pH dependence of solubility, which is typically controlled by the precipitation of REE hydroxides but may also be influenced by the formation of a phosphate phase.

  7. Antisocial Traits, Distress Tolerance, and Alcohol Problems as Predictors of Intimate Partner Violence in Men Arrested for Domestic Violence.

    PubMed

    Brem, Meagan J; Florimbio, Autumn Rae; Elmquist, JoAnna; Shorey, Ryan C; Stuart, Gregory L

    2018-01-01

    Men with antisocial personality disorder (ASPD) traits are at an increased risk for consuming alcohol and perpetrating intimate partner violence (IPV). However, previous research has neglected malleable mechanisms potentially responsible for the link between ASPD traits, alcohol problems, and IPV perpetration. Efforts to improve the efficacy of batterer intervention programs (BIPs) would benefit from exploration of such malleable mechanisms. The present study is the first to examine distress tolerance as one such mechanism linking men's ASPD traits to their alcohol problems and IPV perpetration. Using a cross-sectional sample of 331 men arrested for domestic violence and court-referred to BIPs, the present study used structural equation modeling to examine pathways from men's ASPD traits to IPV perpetration directly and indirectly through distress tolerance and alcohol problems. Results supported a two-chain partial mediational model. ASPD traits were related to psychological aggression perpetration directly and indirectly via distress tolerance and alcohol problems. A second pathway emerged by which ASPD traits related to higher levels of alcohol problems, which related to psychological aggression perpetration. Controlling for psychological aggression perpetration, neither distress tolerance nor alcohol problems explained the relation between ASPD traits and physical assault perpetration. These results support and extend existing conceptual models of IPV perpetration. Findings suggest intervention efforts for IPV should target both distress tolerance and alcohol problems.

  8. Effects of simulated rare earth recycling wastewaters on biological nitrification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, Yoshiko; Barnes, Joni; Eslamimanesh, Ali

    Current efforts to increase domestic availability of rare-earth element (REE) supplies by recycling and expanded ore processing will result in increased generation of associated wastewaters. In some cases disposal to a sewage treatment plant may be favored, but plant performance must be maintained. To assess the potential effects of such wastewaters on biological wastewater treatment, the model nitrifying organisms Nitrosomonas europaea and Nitrobacter winogradskyi were exposed to simulated wastewaters containing varying levels of yttrium or europium (10, 50 and 100 ppm) and the REE extractant tributyl phosphate (TBP, at 0.1 g/L). Y and Eu additions above 10 ppm inhibited N. europaea activity, even when virtually all of the REE was initially insoluble. The provision of TBP together with Eu increased inhibition of nitrite production by N. europaea, although TBP alone did not substantially alter nitrifying activity. N. winogradskyi was more sensitive to the simulated wastewaters, with even 10 ppm Eu or Y inducing significant inhibition, and a complete shutdown of nitrifying activity occurred in the presence of TBP. To analyze the availability of REEs in aqueous solutions, REE solubility was calculated using the previously developed MSE (Mixed-Solvent Electrolyte) thermodynamic model. The model calculations reveal a strong pH dependence of solubility, which is typically controlled by the precipitation of REE hydroxides but may also be influenced by the formation of a phosphate phase.

  9. Flexible Packaging Concept for a Space Suit Portable Life Support Subsystem

    NASA Technical Reports Server (NTRS)

    Thomas, Gretchen; Dillon, Paul; Oliver, Joe; Zapata, Felipe

    2009-01-01

    Neither the Shuttle Extravehicular Mobility Unit (EMU), the space suit currently used for space shuttle and International Space Station (ISS) missions, nor the Apollo EMU, the space suit successfully used on previous lunar missions, will satisfy the requirements for the next generation Constellation Program (CxP) lunar suit. The CxP system or Constellation Space Suit Element (CSSE) must be able to tolerate more severe environmental and use conditions than any previous system. These conditions include missions to the severely cold lunar poles and up to 100 Extravehicular Activity (EVA) excursions without ground maintenance. Much effort is focused on decreasing the mass and volume of the Portable Life Support Subsystem (PLSS) over previous suit designs in order to accommodate the required increase in functionality. This paper documents the progress of a conceptual packaging effort of a flexible backpack for the CSSE PLSS. The flexible backpack concept relies on a foam protection system to absorb, distribute, and dissipate the energy from falls on the lunar surface. Testing and analysis of the foam protection system concept that was conducted during this effort indicates that this method of system packaging is a viable solution.

  10. Stability of Shapes Held by Surface Tension and Subjected to Flow

    NASA Technical Reports Server (NTRS)

    Chen, Yi-Ju; Robinson, Nathaniel D.; Steen, Paul H.

    1999-01-01

    Results of three problems are summarized in this contribution. Each involves the fundamental capillary instability of an interfacial bridge and is an extension of previous work. The first two problems concern equilibrium shapes of liquid bridges near the stability boundary corresponding to maximum length (Plateau-Rayleigh limit). For the first problem, a previously formulated nonlinear theory to account for imposed gravity and interfacial shear disturbances in an isothermal environment is quantitatively tested in experiment. For the second problem, the liquid bridge is subjected to a shear that models the effect of a thermocapillary flow generated by a ring heater in a liquid encapsulated float-zone configuration. In the absence of gravity, this symmetric perturbation can stabilize the bridge to lengths on the order of 30 percent beyond the Plateau-Rayleigh limit, which is on the order of heretofore unexplained Shuttle observations. The third problem considers the dynamics of collapse and pinchoff of a film bridge (no gravity), which happens in the absence of stabilization. Here, we summarize experimental efforts to measure the self-similar cone-and-crater structure predicted by a previous theory.

  11. Community Markets for Conservation (COMACO) links biodiversity conservation with sustainable improvements in livelihoods and food production

    PubMed Central

    Lewis, Dale; Bell, Samuel D.; Fay, John; Bothi, Kim L.; Gatere, Lydiah; Kabila, Makando; Mukamba, Mwangala; Matokwani, Edwin; Mushimbalume, Matthews; Moraru, Carmen I.; Lehmann, Johannes; Lassoie, James; Wolfe, David; Lee, David R.; Buck, Louise; Travis, Alexander J.

    2011-01-01

    In the Luangwa Valley, Zambia, persistent poverty and hunger present linked challenges to rural development and biodiversity conservation. Both household coping strategies and larger-scale economic development efforts have caused severe natural resource degradation that limits future economic opportunities and endangers ecosystem services. A model based on a business infrastructure has been developed to promote and maintain sustainable agricultural and natural resource management practices, leading to direct and indirect conservation outcomes. The Community Markets for Conservation (COMACO) model operates primarily with communities surrounding national parks, strengthening conservation benefits produced by these protected areas. COMACO first identifies the least food-secure households and trains them in sustainable agricultural practices that minimize threats to natural resources while meeting household needs. In addition, COMACO identifies people responsible for severe natural resource depletion and trains them to generate alternative income sources. In an effort to maintain compliance with these practices, COMACO provides extension support and access to high-value markets that would otherwise be inaccessible to participants. Because the model is continually evolving via adaptive management, success or failure of the model as a whole is difficult to quantify at this early stage. We therefore test specific hypotheses and present data documenting the stabilization of previously declining wildlife populations; the meeting of thresholds of productivity that give COMACO access to stable, high-value markets and progress toward economic self-sufficiency; and the adoption of sustainable agricultural practices by participants and other community members. Together, these findings describe a unique, business-oriented model for poverty alleviation, food production, and biodiversity conservation. PMID:21873184

  12. Community Markets for Conservation (COMACO) links biodiversity conservation with sustainable improvements in livelihoods and food production.

    PubMed

    Lewis, Dale; Bell, Samuel D; Fay, John; Bothi, Kim L; Gatere, Lydiah; Kabila, Makando; Mukamba, Mwangala; Matokwani, Edwin; Mushimbalume, Matthews; Moraru, Carmen I; Lehmann, Johannes; Lassoie, James; Wolfe, David; Lee, David R; Buck, Louise; Travis, Alexander J

    2011-08-23

    In the Luangwa Valley, Zambia, persistent poverty and hunger present linked challenges to rural development and biodiversity conservation. Both household coping strategies and larger-scale economic development efforts have caused severe natural resource degradation that limits future economic opportunities and endangers ecosystem services. A model based on a business infrastructure has been developed to promote and maintain sustainable agricultural and natural resource management practices, leading to direct and indirect conservation outcomes. The Community Markets for Conservation (COMACO) model operates primarily with communities surrounding national parks, strengthening conservation benefits produced by these protected areas. COMACO first identifies the least food-secure households and trains them in sustainable agricultural practices that minimize threats to natural resources while meeting household needs. In addition, COMACO identifies people responsible for severe natural resource depletion and trains them to generate alternative income sources. In an effort to maintain compliance with these practices, COMACO provides extension support and access to high-value markets that would otherwise be inaccessible to participants. Because the model is continually evolving via adaptive management, success or failure of the model as a whole is difficult to quantify at this early stage. We therefore test specific hypotheses and present data documenting the stabilization of previously declining wildlife populations; the meeting of thresholds of productivity that give COMACO access to stable, high-value markets and progress toward economic self-sufficiency; and the adoption of sustainable agricultural practices by participants and other community members. Together, these findings describe a unique, business-oriented model for poverty alleviation, food production, and biodiversity conservation.

  13. Analyzing Data and Asking Questions at Shell School, Sea County Florida

    ERIC Educational Resources Information Center

    Vanover, Charles

    2015-01-01

    This case discusses early work to implement the Common Core State Standards at a fictitious school in Florida. The case is designed to support students' efforts to use school accountability data for inquiry and to conceptualize change in schools where previous leaders' efforts were not successful. Shell Elementary is an exurban school that serves…

  14. Gains to Language Learners from Viewing Target Language Closed-Captioned Films

    ERIC Educational Resources Information Center

    Stewart, Melissa A.; Pertusa, Inmaculada

    2004-01-01

    In an effort to facilitate students' understanding of films in the target language, many instructors turn to films with English subtitles. Viewing films subtitled in English does not encourage learners to use their previously acquired listening skills, but rather allows them to rely on reading English instead of making the extra effort required to…

  15. Commercial Insurance vs Community-Based Health Plans: Time for a Policy Option With Clinical Emphasis to Address the Cost Spiral

    ERIC Educational Resources Information Center

    Amundson, Bruce

    2005-01-01

    The nation continues its ceaseless struggle with the spiraling cost of health care. Previous efforts (regulation, competition, voluntary action) have included almost every strategy except clinical. Insurers have largely failed in their cost-containment efforts. There is a strong emerging body of literature that demonstrates the relationship…

  16. Adaptive Reward Pursuit: How Effort Requirements Affect Unconscious Reward Responses and Conscious Reward Decisions

    ERIC Educational Resources Information Center

    Bijleveld, Erik; Custers, Ruud; Aarts, Henk

    2012-01-01

    When in pursuit of rewards, humans weigh the value of potential rewards against the amount of effort that is required to attain them. Although previous research has generally conceptualized this process as a deliberate calculation, recent work suggests that rudimentary mechanisms--operating without conscious intervention--play an important role as…

  17. Foraminifera Models to Interrogate Ostensible Proxy-Model Discrepancies During Late Pliocene

    NASA Astrophysics Data System (ADS)

    Jacobs, P.; Dowsett, H. J.; de Mutsert, K.

    2017-12-01

    Planktic foraminifera faunal assemblages have been used in the reconstruction of past oceanic states (e.g. the Last Glacial Maximum, the mid-Piacenzian Warm Period). However, these reconstruction efforts have typically relied on inverse modeling using transfer functions or the modern analog technique, which by design seek to translate foraminifera into one or two target oceanic variables, primarily sea surface temperature (SST). These reconstructed SST data have then been used to test the performance of climate models, and discrepancies have been attributed to shortcomings in climate model processes and/or boundary conditions. More recently, forward proxy models or proxy system models have been used to leverage the multivariate nature of proxy relationships to their environment, and to "bring models into proxy space". Here we construct ecological models of key planktic foraminifera taxa, calibrated and validated with World Ocean Atlas (WOA13) oceanographic data. Multiple modeling methods (e.g. multilayer perceptron neural networks, Mahalanobis distance, logistic regression, and maximum entropy) are investigated to ensure robust results. The resulting models are then driven by a Late Pliocene climate model simulation with biogeochemical as well as temperature variables. Similarities and differences with previous model-proxy comparisons (e.g. PlioMIP) are discussed.
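The forward-modeling idea can be sketched with a toy logistic occurrence model. The coefficients, taxon, and SST values below are invented for illustration and are not the paper's calibrated models:

```python
import math

# Toy forward proxy model (invented coefficients, not the paper's
# calibration): occurrence probability of a hypothetical warm-water
# foraminifer as a logistic function of sea surface temperature,
# then driven by an illustrative SST change.
b0, b1 = -6.0, 0.4                     # assumed calibration parameters

def p_occurrence(sst_c):
    """Probability of presence at a given SST (degrees C)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * sst_c)))

modern_sst, pliocene_sst = 18.0, 21.0  # degrees C, illustrative values
delta = p_occurrence(pliocene_sst) - p_occurrence(modern_sst)
```

Comparing the predicted assemblage change (`delta`) against the fossil assemblage, rather than inverting the assemblage to an SST estimate, is the "bring models into proxy space" step.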

  18. Overcommitment as a predictor of effort-reward imbalance: evidence from an 8-year follow-up study.

    PubMed

    Feldt, Taru; Hyvönen, Katriina; Mäkikangas, Anne; Rantanen, Johanna; Huhtala, Mari; Kinnunen, Ulla

    2016-07-01

    The effort-reward imbalance (ERI) model includes the personal characteristic of overcommitment (OC) and the job-related characteristics of effort, reward, and ERI, all of which are assumed to play a role in an employee's health and well-being at work. The aim of the present longitudinal study was to shed more light on the dynamics of the ERI model by investigating the basic hypotheses related to the role of OC in the model, ie, to establish whether an employee's OC could be a risk factor for an increased experience of high effort, low reward, and high ERI at work. The study was based on 5-wave, 8-year follow-up data collected among Finnish professionals in 2006 (T1, N=747), 2008 (T2, N=422), 2010 (T3, N=368), 2012 (T4, N=325), and 2014 (T5, N=273). The participants were mostly male (85% at T1) and the majority of them worked in technical fields. OC, effort, reward, and ERI were measured at each time point with the 23-item ERI scale. Three cross-lagged structural equation models (SEM) were estimated and compared by using full information maximum likelihood method: (i) OC predicted later experiences of effort, reward, and ERI (normal causation model), (ii) effort, reward, and ERI predicted later OC (reversed causation model), and (iii) associations in normal causal and reversed causal models were simultaneously valid (reciprocal causation model). The results supported the normal causation model: strong OC predicted later experiences of high effort, low reward and high ERI. High OC is a risk factor for an increased experience of job strain factors; that is, high effort, low reward, and high ERI. Thus, OC is a risk factor not only for an employee's well-being and health but also for an increasing risk for perceiving adverse job strain factors in the working environment.

  19. Choice and explanation in medical management: a multiattribute model of artificial intelligence approaches.

    PubMed

    Rennels, G D; Shortliffe, E H; Miller, P L

    1987-01-01

    This paper explores a model of choice and explanation in medical management and makes clear its advantages and limitations. The model is based on multiattribute decision making (MADM) and consists of four distinct strategies for choice and explanation, plus combinations of these four. Each strategy is a restricted form of the general MADM approach, and each makes restrictive assumptions about the nature of the domain. The advantage of tailoring a restricted form of a general technique to a particular domain is that such efforts may better capture the character of the domain and allow choice and explanation to be more naturally modelled. The uses of the strategies for both choice and explanation are illustrated with analyses of several existing medical management artificial intelligence (AI) systems, and also with examples from the management of primary breast cancer. Using the model it is possible to identify common underlying features of these AI systems, since each employs portions of this model in different ways. Thus the model enables better understanding and characterization of the seemingly ad hoc decision making of previous systems.
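One restricted MADM strategy, simple additive weighting, can be sketched as follows. The options, attribute values, and weights are invented for illustration and are not from the paper:

```python
# Hypothetical sketch of one restricted MADM strategy (simple additive
# weighting).  The treatment options, attribute values, and weights
# below are invented, not the paper's data.
options = {
    "lumpectomy+radiation": {"survival": 0.84, "morbidity": 0.30},
    "mastectomy":           {"survival": 0.85, "morbidity": 0.55},
}
weights = {"survival": 0.8, "morbidity": 0.2}  # assumed patient preferences
signs   = {"survival": +1, "morbidity": -1}    # benefit vs. cost attribute

def utility(attrs):
    # Weighted sum of attribute values, with cost attributes negated.
    return sum(weights[a] * signs[a] * v for a, v in attrs.items())

best = max(options, key=lambda name: utility(options[name]))
```

An explanation module can then justify `best` by citing the dominant contributions: the small survival difference is outweighed, under these assumed weights, by the large morbidity difference.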

  20. Trimming a hazard logic tree with a new model-order-reduction technique

    USGS Publications Warehouse

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
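The tornado-diagram screening step can be sketched with a toy logic tree. The parameters, branch values, and loss function are invented, not the UCERF3-TD model:

```python
# Toy tornado-diagram screening in the spirit of the first technique
# (invented parameters and loss function, not UCERF3-TD): swing each
# logic-tree parameter across its branches with the others held at
# baseline, rank parameters by output swing; low-swing parameters are
# candidates for fixing in a reduced-order model.
branches = {
    "slip_rate":   [0.8, 1.0, 1.2],
    "mag_scaling": [0.95, 1.0, 1.05],
    "gmpe":        [0.7, 1.0, 1.4],
}
baseline = {p: vals[len(vals) // 2] for p, vals in branches.items()}

def loss(params):
    # Stand-in for an expensive portfolio-loss evaluation.
    return params["slip_rate"] * params["mag_scaling"] * params["gmpe"]

swing = {}
for p, vals in branches.items():
    outs = [loss({**baseline, p: v}) for v in vals]
    swing[p] = max(outs) - min(outs)

ranked = sorted(swing, key=swing.get, reverse=True)
```

Only the top-ranked parameters would be left free to vary; the rest are pinned at their baseline branch, shrinking the leaf count.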

  1. General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models

    USGS Publications Warehouse

    Miller, David A.W.

    2012-01-01

    Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
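For the simplest case, a single patch with constant colonization and extinction probabilities, the equilibrium sensitivity has a closed form that can be checked numerically. The rates below are assumed values for illustration:

```python
# Minimal sketch with assumed rates: for a single-patch
# colonization-extinction Markov chain, equilibrium occupancy is
# psi* = gamma / (gamma + eps); its sensitivity to the colonization
# probability is eps / (gamma + eps)**2, verified here against a
# central finite difference.
gamma, eps = 0.3, 0.1   # colonization and extinction probabilities

def psi_star(g, e):
    return g / (g + e)  # stationary occupancy probability

analytic = eps / (gamma + eps) ** 2        # d(psi*)/d(gamma)
h = 1e-6
numeric = (psi_star(gamma + h, eps) - psi_star(gamma - h, eps)) / (2 * h)
```

The paper's contribution is the multistate, environmentally varying generalization of exactly this kind of derivative.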

  2. Impact of alcohol on lane placement and glance patterns when passing a parked active law enforcement vehicle.

    DOT National Transportation Integrated Search

    2014-10-01

    For this project, researchers used an existing dataset from a previous research effort to investigate the moth effect theory, where it is believed that drivers drift toward bright lights. While the previous research study primarily focused on sig...

  3. COS-7-based model: methodological approach to study John Cunningham virus replication cycle.

    PubMed

    Prezioso, C; Scribano, D; Rodio, D M; Ambrosi, C; Trancassini, M; Palamara, A T; Pietropaolo, V

    2018-02-05

    John Cunningham virus (JCV) is a human neurotropic polyomavirus whose replication in the Central Nervous System (CNS) induces the fatal demyelinating disease progressive multifocal leukoencephalopathy (PML). Investigation of JCV propagation and PML has been severely hampered by the lack of an animal model, and cell culture systems to propagate JCV have been very limited in availability and robustness. We previously confirmed that the JCV CY strain replicates efficiently in COS-7 cells, as demonstrated by the progressive increase of viral load by quantitative PCR (Q-PCR) over the course of transfection and by maintenance of the archetypal regulatory structure, although two characteristic point mutations were detected during the viral cycle. This short report is an important extension of our previous efforts in defining a reliable model culture system able to support a productive JCV infection. Supernatants collected from transfected cells were used to infect freshly seeded COS-7 cells. An infectious viral progeny was obtained, as confirmed by Western blot and immunofluorescence assay. During infection, the archetype regulatory region was conserved. Importantly, in this study we developed an improved culture system to obtain large-scale production of JC virus in order to study the genetic features, biology, and pathogenic mechanisms by which JC virus induces PML.

  4. Occupational stress and strain in the Royal Navy 2007.

    PubMed

    Bridger, R S; Brasher, K; Dew, A; Kilminster, S

    2008-12-01

    Previous surveys of psychological strain in the Naval Service (NS) have shown higher than expected levels of strain compared to the general population. The aims were to repeat the survey last carried out in 2004 and to obtain further information on the nature of the occupational stressors associated with strain. General Health Questionnaire-12 strain rates and job/life stressors were measured using a Work and Well-Being Questionnaire. Models of strain were developed for male and female personnel in the Royal Navy (RN) and males in the Royal Marines (RM). The response rate was 57%. The psychological strain rate was 31.5% overall. Personnel suffering from strain tended to be 'overcommitted' to work, had low levels of commitment to the NS and had suffered stressful life events (SLEs) in the previous 12 months. Strain rates declined with age and rank in males, but not in females. Strain was significantly positively correlated with levels of overcommitment, effort-reward imbalance (ERI), role conflict, work-family conflict, organizational commitment and exposure to SLEs. Models of strain in males and females in the RN and in the RM accounted for between 37% and 44% of the variance in strain. The survey provides evidence for both the demand-control and ERI models; components of these models contribute independently to strain. High levels of commitment to the organization were associated with lower strain, and exposure to SLEs with higher strain.

  5. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    PubMed

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
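The predictive reframing can be sketched with a deliberately simplified effort-discounting example. This is our illustration, not the PRO or HER models themselves, and the cost weight and option values are invented:

```python
# Simplified effort-discounting sketch (our illustration, not the
# PRO/HER models): subjective value trades predicted reward against a
# quadratic effort cost, and a prediction-error term compares the
# obtained outcome with the expectation.
k = 0.5                                    # assumed effort-cost weight

def subjective_value(reward, effort):
    return reward - k * effort ** 2

options = [(2.0, 1.0), (5.0, 3.0)]         # (predicted reward, required effort)
values = [subjective_value(r, e) for r, e in options]
choice = values.index(max(values))         # low-effort option wins here

expected = values[choice]
obtained = 1.0                             # outcome worse than expected
prediction_error = obtained - expected     # negative: would drive updating
```

In a PRO-style account, activity would track the expectation (`expected`) and the discrepancy (`prediction_error`) rather than effort per se.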

  6. Capturing and Understanding Experiment Provenance using NiNaC

    NASA Astrophysics Data System (ADS)

    Rosati, C.

    2017-12-01

    A problem the model development team faces at the GFDL is determining climate model experiment provenance. Each experiment is configured with at least one configuration file which may reference other files. The experiment then passes through three phases before completion. Configuration files or other input files may be modified between phases. Finding the modifications later is tedious due to the expanse of the experiment input and duplication across phases. Determining provenance may be impossible if any file has been changed or deleted. To reduce these efforts and address these problems, we propose a new toolset, NiNaC, for archiving experiment provenance from the beginning of the experiment to the end and every phase in-between. Each of the three phases, check-out, build, and run, of the experiment depends on the previous phase. We use a graph to model the phase dependencies. Let each phase be represented by a node. Let each edge correspond to a dependency between phases, where the node incident with the tail depends on the node incident with the head. It follows that the dependency graph is a tree. We reduce the problem to finding the lowest common ancestor and diffing the successor nodes. All files related to input for a phase are assigned a checksum. A new file is created to aggregate the checksums. Then each phase is assigned the checksum of the aforementioned file as an identifier. Any change to part of a phase configuration will create unique checksums in all subsequent phases. Finding differences between experiments with this toolset is as simple as diffing two files containing checksums found by traversing the tree. One new benefit is that this toolset now allows differences in source code to be found after experiments are run, which was previously impossible for executables that cannot be linked to a known version-controlled source code. Knowing that these changes exist allows us to give priority to help desk tickets concerning unmodified supported experiment releases, and minimize effort spent on unsupported experiments. It is also possible that a change is made, either by mistake or by system error. NiNaC would find the exact file in the precise phase with the change. In this way, NiNaC makes provenance tracking less tedious and solves problems where tracking provenance may previously have been impossible to do.
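The checksum aggregation described above can be sketched as follows. This is our reconstruction of the idea, not NiNaC's code, and the file names and contents are invented:

```python
import hashlib

# Sketch of the checksum scheme (our reconstruction, not NiNaC's code):
# hash each input file of a phase, aggregate the per-file digests into
# a manifest, and use the manifest's digest as the phase identifier, so
# any upstream change propagates to every later phase.  Diffing two
# phases then reduces to diffing manifests.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def phase_id(files: dict) -> str:
    # files maps filename -> file contents (bytes)
    manifest = "\n".join(f"{digest(c)}  {n}" for n, c in sorted(files.items()))
    return digest(manifest.encode())

def changed_files(a: dict, b: dict) -> list:
    # Filenames whose digests differ (or that exist in only one phase).
    names = set(a) | set(b)
    return sorted(n for n in names
                  if digest(a.get(n, b"")) != digest(b.get(n, b"")))

run1 = {"namelist": b"dt=1800", "diag_table": b"daily"}
run2 = {"namelist": b"dt=900",  "diag_table": b"daily"}
```

Because the manifest is sorted by filename, identical inputs always yield the same phase identifier regardless of file ordering.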

  7. Accounting for escape mortality in fisheries: implications for stock productivity and optimal management.

    PubMed

    Baker, Matthew R; Schindler, Daniel E; Essington, Timothy E; Hilborn, Ray

    2014-01-01

    Few studies have considered the management implications of mortality to target fish stocks caused by non-retention in commercial harvest gear (escape mortality). We demonstrate the magnitude of this previously unquantified source of mortality and its implications for the population dynamics of exploited stocks, biological metrics, stock productivity, and optimal management. Non-retention in commercial gillnet fisheries for Pacific salmon (Oncorhynchus spp.) is common and often leads to delayed mortality in spawning populations. This represents losses, not only to fishery harvest, but also in future recruitment to exploited stocks. We estimated incidence of non-retention in Alaskan gillnet fisheries for sockeye salmon (O. nerka) and found disentanglement injuries to be extensive and highly variable between years. Injuries related to non-retention were noted in all spawning populations, and incidence of injury ranged from 6% to 44% of escaped salmon across nine river systems over five years. We also demonstrate that non-retention rates strongly correlate with fishing effort. We applied maximum likelihood and Bayesian approaches to stock-recruitment analyses, discounting estimates of spawning salmon to account for fishery-related mortality in escaped fish. Discounting spawning stock estimates as a function of annual fishing effort improved model fits to historical stock-recruitment data in most modeled systems. This suggests the productivity of exploited stocks has been systematically underestimated. It also suggests that indices of fishing effort may be used to predict escape mortality and correct for losses. Our results illustrate how explicitly accounting for collateral effects of fishery extraction may improve estimates of productivity and better inform management metrics derived from estimates of stock-recruitment analyses.
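The escapement correction implied by the abstract can be sketched with a Ricker stock-recruit curve. The parameters below are invented, not the paper's fitted values:

```python
import math

# Illustrative sketch with invented parameters (not the paper's fitted
# values): discount the counted escapement for effort-dependent escape
# mortality before applying a Ricker stock-recruit curve.
a, b = 4.0, 1e-4          # assumed Ricker productivity and density terms
m = 0.05                  # assumed escape-mortality rate per unit effort

def effective_spawners(counted, effort):
    # Survivors of non-retention injuries, as a function of fishing effort.
    return counted * (1.0 - m * effort)

def ricker(spawners):
    return a * spawners * math.exp(-b * spawners)

counted, effort = 10_000.0, 4.0
s_eff = effective_spawners(counted, effort)   # fewer true spawners
recruits = ricker(s_eff)
```

Fitting recruitment against `s_eff` instead of `counted` attributes the same observed recruits to fewer spawners, which is why ignoring escape mortality systematically underestimates productivity.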

  8. Temperament and fracture in preschool-aged children.

    PubMed

    Ryckman, Kandace; Richmond, Sarah A; Anderson, Laura N; Birken, Catherine S; Parkin, Patricia C; Macarthur, Colin; Maguire, Jonathon L; Howard, Andrew W

    2017-07-01

    Approximately one-half of all children will sustain a fracture before adulthood. Understanding the factors that place a child at increased risk of fracture is necessary to inform effective injury prevention strategies. The purpose of this study was to examine the association between temperament and fracture risk in preschool-aged children. Children aged 3 to 6 years who were diagnosed with a fracture were recruited from the Hospital for Sick Children Fracture Clinic. Using a retrospective case-control study design, the 148 cases were frequency-matched by age and sex to 426 controls from the TARGet Kids primary care paediatric cohort. The Childhood Behaviour Questionnaire, a 36-item caregiver-response questionnaire, was used to assess the following three temperament factors: surgency (e.g., high activity level), negative affect (e.g., anger, fear, discomfort) and effortful control (e.g., attentional focusing). Unadjusted logistic models demonstrated no association between previous fracture and higher scores of surgency (unadjusted odds ratio [OR]=1.06, 95% confidence interval [CI]: 0.84, 1.34), negative affect (unadjusted OR=1.15, 95% CI: 0.93, 1.42) or effortful control (unadjusted OR=0.80, 95% CI: 0.63, 1.03). Further, models adjusted for covariates also demonstrated no significant association with surgency (OR=1.00, 95% CI: 0.78, 1.29), negative affect (OR=1.09, 95% CI: 0.86, 1.37) or effortful control (OR=0.80, 95% CI: 0.61, 1.05). None of the three main temperament types identified by the Childhood Behaviour Questionnaire were associated with an increase in fracture risk.

  9. Quantifying risks with exact analytical solutions of derivative pricing distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Liu, Jing; Wang, Erkang; Wang, Jin

    2017-04-01

    Derivative (i.e. option) pricing is essential for modern financial instrumentation. Despite previous efforts, the exact analytical forms of derivative pricing distributions remain challenging to obtain. In this study, we established a quantitative framework using path integrals to obtain exact analytical solutions of the statistical distribution for bond and bond option pricing under the Vasicek model. We discuss the importance of statistical fluctuations away from the expected option pricing, characterized by the distribution tail, and their associations to value at risk (VaR). The framework established here is general and can be applied to other financial derivatives for quantifying the underlying statistical distributions.
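
    For context, the Vasicek model the paper works with admits a well-known closed-form zero-coupon bond price. The sketch below implements that textbook formula, not the authors' path-integral framework; the parameter values are illustrative assumptions, not calibrated estimates.

```python
import numpy as np

def vasicek_bond_price(r0, tau, a, b, sigma):
    """Price at time 0 of a zero-coupon bond maturing in tau years under
    the Vasicek short-rate model dr = a*(b - r) dt + sigma dW."""
    B = (1.0 - np.exp(-a * tau)) / a
    A = np.exp((b - sigma**2 / (2 * a**2)) * (B - tau)
               - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r0)

# Illustrative (uncalibrated) parameters
price = vasicek_bond_price(r0=0.03, tau=5.0, a=0.5, b=0.04, sigma=0.01)
print(f"5y zero-coupon price: {price:.4f}")
```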

  10. General Aviation Interior Noise. Part 2; In-Flight Source/Verification

    NASA Technical Reports Server (NTRS)

    Unruh, James F.; Till, Paul D.; Palumbo, Daniel L. (Technical Monitor)

    2002-01-01

    The technical approach made use of the Cessna Model 182E aircraft used in the previous effort as a test bed for noise control application. The present phase of the project reports on flight test results during application of various passive noise treatments in an attempt to verify the noise sources and paths for the aircraft. The data presented establishes the level of interior noise control that can be expected for various passive noise control applications within the aircraft cabin. Subsequent testing will address specific testing to demonstrate the technology available to meet a specified level of noise control by application of passive and/or active noise control technology.

  11. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    PubMed

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
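
    A stripped-down version of the underlying idea is quantile regression of log expenditure on log price, where the slope at a given quantile is the price elasticity at that point of the distribution. The toy below fits the check (pinball) loss directly on simulated data; the full CQIV estimator additionally handles censoring at zero and instruments for price, both of which this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, q):
    """Check-function (pinball) loss for quantile regression."""
    resid = y - X @ beta
    return np.mean(np.where(resid >= 0, q * resid, (q - 1) * resid))

def quantile_elasticity(log_price, log_spend, q):
    """Slope of log-spend on log-price at quantile q, i.e. the elasticity."""
    X = np.column_stack([np.ones_like(log_price), log_price])
    res = minimize(pinball_loss, x0=np.zeros(2), args=(X, log_spend, q),
                   method="Nelder-Mead")
    return res.x[1]

# Simulated data with a true elasticity of -1.2
rng = np.random.default_rng(0)
log_price = rng.uniform(0.0, 1.0, 500)
log_spend = 2.0 - 1.2 * log_price + rng.normal(0.0, 0.1, 500)
elasticity = quantile_elasticity(log_price, log_spend, 0.5)
print(f"median-regression elasticity: {elasticity:.2f}")
```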

  12. Effect of fishing effort on catch rate and catchability of largemouth bass in small impoundments

    USGS Publications Warehouse

    Wegener, M. G.; Schramm, Harold; Neal, J. W.; Gerard, P.D.

    2018-01-01

    Largemouth bass Micropterus salmoides (Lacepède) catch rates decline with sustained fishing effort, even without harvest. It is unclear why declines in catch rate occur, and little research has been directed at how to improve catch rate. Learning has been proposed as a reason for declining catch rate, but has never been tested on largemouth bass. If catch rate declines because fish learn to avoid lures, periods of no fishing could be a management tool for increasing catch rate. In this study, six small impoundments with established fish populations were fished for two May to October fishing seasons to evaluate the effect of fishing effort on catch rate. Closed seasons were implemented to test whether a 2‐month period of no fishing improved catch rates and to determine whether conditioning from factors other than being captured reduced catch rate. Mixed‐model analysis indicated catch rate and catchability declined throughout the fishing season. Catch rate and catchability increased after a 2‐month closure but soon declined to the lowest levels of the fishing season. These changes in catch rate and catchability support the conclusion of learned angler avoidance, but sustained catchability of fish not previously caught does not support that associative or social learning affected catchability.
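
    One simple descriptive form consistent with the learned-avoidance hypothesis is an exponential decline of catch rate with cumulative effort, CPUE(E) = q0 * exp(-alpha * E). The fit below uses fabricated numbers purely to illustrate the shape; it is not the paper's mixed-model analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def cpue_decline(effort, q0, alpha):
    """Exponential decline of catch rate with cumulative fishing effort."""
    return q0 * np.exp(-alpha * effort)

# Fabricated season data: cumulative angler-hours vs. fish per hour
cum_effort = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
cpue = np.array([2.1, 1.5, 1.1, 0.85, 0.6, 0.45])

(q0, alpha), _ = curve_fit(cpue_decline, cum_effort, cpue, p0=[2.0, 0.01])
print(f"initial catch rate {q0:.2f}/h, decline {alpha:.3f} per angler-hour")
```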

  13. Establishing a Framework for Community Modeling in Hydrologic Science: Recommendations from the CUAHSI CHyMP Initiative

    NASA Astrophysics Data System (ADS)

    Arrigo, J. S.; Famiglietti, J. S.; Murdoch, L. C.; Lakshmi, V.; Hooper, R. P.

    2012-12-01

    The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) continues a major effort towards supporting Community Hydrologic Modeling. From 2009 - 2011, the Community Hydrologic Modeling Platform (CHyMP) initiative held three workshops, the ultimate goal of which was to produce recommendations and an implementation plan to establish a community modeling program that enables comprehensive simulation of water anywhere on the North American continent. Such an effort would include connections to and advances in global climate models, biogeochemistry, and efforts of other disciplines that require an understanding of water patterns and processes in the environment. To achieve such a vision will require substantial investment in human and cyber-infrastructure and significant advances in the science of hydrologic modeling and spatial scaling. CHyMP concluded with a final workshop, held in March 2011, and produced several recommendations. CUAHSI and the university community continue to advance community modeling and implement these recommendations through several related and follow-on efforts. Key results from the final 2011 workshop included agreement among participants that the community is ready to move forward with implementation. It is recognized that initial implementation of this larger effort can begin with simulation capabilities that currently exist, or that can be easily developed. CHyMP identified four key activities in support of community modeling: benchmarking, dataset evaluation and development, platform evaluation, and developing a national water model framework. Key findings included: 1) The community supported the idea of a National Water Model framework; a community effort is needed to explore what the ultimate implementation of a National Water Model would be. A true community modeling effort would support the modeling of "water anywhere" and would include all relevant scales and processes. 
2) Implementation of a community modeling program could initially focus on continental-scale modeling of water quantity (rather than quality). The goal of this initial model is the comprehensive description of water stores and fluxes in such a way as to permit linkage to GCMs and biogeochemical, ecological, and geomorphic models. This continental-scale focus allows systematic evaluation of our current state of knowledge and data, leverages existing efforts by large-scale modelers, contributes to scientific discovery that informs globally and societally relevant questions, and provides an initial framework to evaluate hydrologic information relevant to other disciplines and a structure into which to incorporate other classes of hydrologic models. 3) Dataset development will be a key aspect of any successful national water model implementation. Our current knowledge of the subsurface is limiting our ability to truly integrate soil and groundwater into large-scale models, and to answer critical science questions with societal relevance (e.g., groundwater's influence on climate). 4) The CHyMP workshops and efforts to date have achieved collaboration between university scientists, government agencies and the private sector that must be maintained. Follow-on efforts in community modeling should aim at leveraging and maintaining this collaboration for maximum scientific and societal benefit.

  14. Artificial intelligence-based computer modeling tools for controlling slag foaming in electric arc furnaces

    NASA Astrophysics Data System (ADS)

    Wilson, Eric Lee

    Due to increased competition in a world economy, steel companies are currently interested in developing techniques that will allow for the improvement of the steelmaking process, either by increasing output efficiency or by improving the quality of their product, or both. Slag foaming is one practice that has been shown to contribute to both these goals. However, slag foaming is highly dynamic and difficult to model or control. This dissertation describes an effort to use artificial intelligence-based tools (genetic algorithms, fuzzy logic, and neural networks) to both model and control the slag foaming process. Specifically, a neural network is trained and tested on slag foaming data provided by a steel plant. This neural network model is then controlled by a fuzzy logic controller, which in turn is optimized by a genetic algorithm. This tuned controller was then installed at a steel plant and shown to be a more efficient slag foaming controller than the one previously used by the plant.

  15. Interpretation of Ground Temperature Anomalies in Hydrothermal Discharge Areas

    NASA Astrophysics Data System (ADS)

    Price, A. N.; Lindsey, C.; Fairley, J. P., Jr.

    2017-12-01

    Researchers have long noted the potential for shallow hydrothermal fluids to perturb near-surface temperatures. Several investigators have made qualitative or semi-quantitative use of elevated surface temperatures; for example, in snowfall calorimetry, or for tracing subsurface flow paths. However, little effort has been expended to develop a quantitative framework connecting surface temperature observations with conditions in the subsurface. Here, we examine an area of shallow subsurface flow at Burgdorf Hot Springs, in the Payette National Forest, north of McCall, Idaho USA. We present a simple analytical model that uses easily-measured surface data to infer the temperatures of laterally-migrating shallow hydrothermal fluids. The model is calibrated using shallow ground temperature measurements and overburden thickness estimates from seismic refraction studies. The model predicts conditions in the shallow subsurface, and suggests that the Biot number may place a more important control on the expression of near-surface thermal perturbations than previously thought. In addition, our model may have application in inferring difficult-to-measure parameters, such as shallow subsurface discharge from hydrothermal springs.

  16. Ignition and combustion characteristics of metallized propellants

    NASA Technical Reports Server (NTRS)

    Mueller, D. C.; Turns, Stephen R.

    1991-01-01

    Over the past six months, experimental investigations were continued and theoretical work on the secondary atomization process was begun. Final shakedown of the sizing/velocity measuring system was completed and the aluminum combustion detection system was modified and tested. Atomizer operation was improved to allow steady state operation over long periods of time for several slurries. To validate the theoretical modeling, work involving carbon slurry atomization and combustion was begun and qualitative observations were made. Simultaneous measurements of aluminum slurry droplet size distributions and detection of burning aluminum particles were performed at several axial locations above the burner. The principle theoretical effort was the application of a rigid shell formation model to aluminum slurries and an investigation of the effects of various parameters on the shell formation process. This shell formation model was extended to include the process leading up to droplet disruption, and previously developed analytical models were applied to yield theoretical aluminum agglomerate ignition and combustion times. The several theoretical times were compared with the experimental results.

  17. Numerical formulation for the prediction of solid/liquid change of a binary alloy

    NASA Technical Reports Server (NTRS)

    Schneider, G. E.; Tiwari, S. N.

    1990-01-01

    A computational model is presented for the prediction of solid/liquid phase change energy transport including the influence of free convection fluid flow in the liquid phase region. The computational model considers the velocity components of all non-liquid phase change material control volumes to be zero but fully solves the coupled mass-momentum problem within the liquid region. The thermal energy model includes the entire domain and uses an enthalpy like model and a recently developed method for handling the phase change interface nonlinearity. Convergence studies are performed and comparisons made with experimental data for two different problem specifications. The convergence studies indicate that grid independence was achieved and the comparison with experimental data indicates excellent quantitative prediction of the melt fraction evolution. Qualitative data is also provided in the form of velocity vector diagrams and isotherm plots for selected times in the evolution of both problems. The computational costs incurred are quite low by comparison with previous efforts on solving these problems.

  18. Polar bears in the Beaufort Sea: A 30-year mark-recapture case history

    USGS Publications Warehouse

    Amstrup, Steven C.; McDonald, T.L.; Stirling, I.

    2001-01-01

    Knowledge of population size and trend is necessary to manage anthropogenic risks to polar bears (Ursus maritimus). Despite captures of over 1,025 females between 1967 and 1998, previously calculated estimates of the size of the southern Beaufort Sea (SBS) population have been unreliable. We improved estimates of numbers of polar bears by modeling heterogeneity in capture probability with covariates. Important covariates referred to the year of the study, age of the bear, capture effort, and geographic location. Our choice of best approximating model was based on the inverse relationship between variance in parameter estimates and likelihood of the fit, and suggested growth from ≈ 500 to over 1,000 females during this study. The mean coefficient of variation on estimates for the last decade of the study was 0.16—the smallest yet derived. A similar model selection approach is recommended for other projects where a best model is not identified by likelihood criteria alone.
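
    The heterogeneity idea can be sketched as a logistic capture probability driven by covariates, combined with a Horvitz-Thompson-type abundance estimate over the captured animals. The coefficients, covariate, and population below are invented for illustration and are not the study's fitted model.

```python
import numpy as np

def capture_prob(covariates, coefs):
    """Logistic capture probability from a covariate matrix."""
    return 1.0 / (1.0 + np.exp(-(covariates @ coefs)))

def horvitz_thompson(p_captured):
    """Abundance estimate: sum of inverse capture probabilities."""
    return np.sum(1.0 / p_captured)

# Simulated population whose capture probability varies with one covariate
rng = np.random.default_rng(1)
N_true = 1000
X = np.column_stack([np.ones(N_true), rng.normal(0.0, 1.0, N_true)])
p = capture_prob(X, np.array([-1.0, 0.5]))      # invented coefficients
captured = rng.random(N_true) < p
N_hat = horvitz_thompson(p[captured])
print(f"captured {captured.sum()} of {N_true}; N-hat = {N_hat:.0f}")
```

    Ignoring the heterogeneity (using a single average capture probability) would bias such an estimate, which is why modeling capture probability with covariates mattered here.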

  19. How important are coastal fronts to albacore tuna (Thunnus alalunga) habitat in the Northeast Pacific Ocean?

    NASA Astrophysics Data System (ADS)

    Nieto, Karen; Xu, Yi; Teo, Steven L. H.; McClatchie, Sam; Holmes, John

    2017-01-01

    We used satellite sea surface temperature (SST) data to characterize coastal fronts and then tested the effects of the fronts and other environmental variables on the distribution of the albacore tuna (Thunnus alalunga) catches in the coastal areas (from the coast to 200 nm offshore) of the Northeast Pacific Ocean. A boosted regression tree (BRT) model was used to explain the spatial and temporal patterns in albacore tuna catch per unit effort (CPUE) (1988-2011), using frontal features (distance to the front and temperature gradient), and other environmental variables like SST, surface chlorophyll concentration (chlorophyll), and geostrophic currents as explanatory variables. Based on over two decades of high-resolution data, the modeled results confirmed previous findings that albacore CPUE distribution is strongly influenced by SST and chlorophyll at fishing locations, and the distance of fronts from the coast (DFRONT-COAST), albeit with substantial seasonal and interannual variation. Albacore CPUEs were higher near warm, low chlorophyll oceanic waters, and near SST fronts. We performed sequential leave-one-year-out cross-validations for all years and found that the relationships in the BRT models were robust for the entire study period. Spatial distributions of model-predicted albacore CPUE were similar to observations, but the model was unable to predict very high CPUEs in some areas. These results help to explain previously observed variability in albacore CPUE and will likely help improve international fisheries management in the face of environmental changes.
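
    To show the mechanics of boosted regression trees on a CPUE-style response, here is a deliberately tiny hand-rolled version: depth-1 stumps fit to residuals under squared-error loss, with a single SST predictor. The data are fabricated and the study itself would use a full BRT package with multiple predictors and interactions.

```python
import numpy as np

def fit_stump(x, resid):
    """Best single split on one feature minimizing squared error."""
    best = (np.inf, None, 0.0, 0.0)
    for t in np.unique(x)[:-1]:
        left, right = resid[x <= t], resid[x > t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]

def boost(x, y, n_rounds=100, lr=0.1):
    """Gradient boosting with stumps: each round fits the residuals."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return y.mean(), stumps

def predict(x, base, stumps, lr=0.1):
    pred = np.full_like(x, base, dtype=float)
    for t, lv, rv in stumps:
        pred += lr * np.where(x <= t, lv, rv)
    return pred

# Toy CPUE response to SST: higher catch rates in warmer water (fabricated)
rng = np.random.default_rng(2)
sst = rng.uniform(10.0, 20.0, 300)
cpue = np.where(sst > 15.0, 5.0, 1.0) + rng.normal(0.0, 0.3, 300)
base, stumps = boost(sst, cpue)
print(f"predicted CPUE at 18C: {predict(np.array([18.0]), base, stumps)[0]:.1f}")
```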

  20. The State and Future of the Primary Care Behavioral Health Model of Service Delivery Workforce.

    PubMed

    Serrano, Neftali; Cordes, Colleen; Cubic, Barbara; Daub, Suzanne

    2018-06-01

    The growth of the Primary Care Behavioral Health model (PCBH) nationally has highlighted and created a workforce development challenge, given that most mental health professionals are not trained for primary care specialization. This work reviews the current efforts to retrain mental health professionals to fulfill roles as Behavioral Health Consultants (BHCs), including certificate programs, technical assistance programs, literature and on-the-job training, and details the future needs of the workforce if the model is to sustainably proliferate. Eight recommendations are offered, including: (1) the development of an interprofessional certification body for PCBH training criteria, (2) integration of PCBH model-specific curricula in graduate studies, (3) integration of program development skill building in curricula, (4) efforts to develop faculty for PCBH model awareness, (5) intentional efforts to draw students to graduate programs for PCBH model training, (6) a national employment clearinghouse, (7) efforts to coalesce current knowledge around the provision of technical assistance to sites, and (8) workforce-specific research efforts.

  1. Teacher–Student Support, Effortful Engagement, and Achievement: A 3-Year Longitudinal Study

    PubMed Central

    Hughes, Jan N.; Luo, Wen; Kwok, Oi-Man; Loyd, Linda K.

    2008-01-01

    Measures of teacher–student relationship quality (TSRQ), effortful engagement, and achievement in reading and math were collected once each year for 3 consecutive years, beginning when participants were in 1st grade, for a sample of 671 (53.1% male) academically at-risk children attending 1 of 3 school districts in Texas. In separate latent variable structural equation models, the authors tested the hypothesized model, in which Year 2 effortful engagement mediated the association between Year 1 TSRQ and Year 3 reading and math skills. Conduct engagement was entered as a covariate in these analyses to disentangle the effects of effortful engagement and conduct engagement. Reciprocal effects of effortful engagement on TSRQ and of achievement on effortful engagement were also modeled. Results generally supported the hypothesized model. Year 1 variables had a direct effect on Year 3 variables, above year-to-year stability. Findings suggest that achievement, effortful engagement, and TSRQ form part of a dynamic system of influences in the early grades, such that intervening at any point in this nexus may alter children’s school trajectories. PMID:19578558

  2. Using experimental human influenza infections to validate a viral dynamic model and the implications for prediction.

    PubMed

    Chen, S C; You, S H; Liu, C Y; Chio, C P; Liao, C M

    2012-09-01

    The aim of this work was to use experimental infection data of human influenza to assess a simple viral dynamics model in epithelial cells and better understand the underlying complex factors governing the infection process. The study model expands on previously reported target-cell-limited models with delayed virus production. Data from 10 published experimental infection studies of human influenza were used to validate the model. Our results elucidate, mechanistically, the associations between epithelial cells, human immune responses, and viral titres and were supported by the experimental infection data. We report that the maximum total number of free virions following infection is 10³-fold higher than the initial introduced titre. Our results indicated that the infection rates of unprotected epithelial cells probably play an important role in affecting viral dynamics. By simulating an advanced model of viral dynamics and applying it to experimental infection data of human influenza, we obtained important estimates of the infection rate. This work provides epidemiologically meaningful results, meriting further efforts to understand the causes and consequences of influenza A infection.
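
    The target-cell-limited model class with delayed virus production (an eclipse phase) can be written as four ODEs: T' = -βTV, E' = βTV - kE, I' = kE - δI, V' = pI - cV. The integration below is a sketch of that model class; the parameter values and initial conditions are illustrative assumptions, not the fitted estimates from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, beta, k, delta, p, c):
    """Target-cell-limited influenza model with an eclipse phase."""
    T, E, I, V = y
    return [-beta * T * V,            # susceptible epithelial cells
            beta * T * V - k * E,     # eclipse-phase (not yet producing)
            k * E - delta * I,        # productively infected cells
            p * I - c * V]            # free virus

params = dict(beta=2.7e-5, k=4.0, delta=4.0, p=1.2e-2, c=3.0)
y0 = [4e8, 0.0, 0.0, 7.5e-2]          # cells and inoculum titre (assumed)
sol = solve_ivp(model, (0.0, 12.0), y0, args=tuple(params.values()),
                dense_output=True, rtol=1e-8)
V = sol.sol(np.linspace(0.0, 12.0, 200))[3]
print(f"peak titre ~{V.max():.2e} ({V.max() / y0[3]:.0f}-fold over inoculum)")
```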

  3. SLS Model Based Design: A Navigation Perspective

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  4. Effects of thermal blooming on systems comprised of tiled subapertures

    NASA Astrophysics Data System (ADS)

    Leakeas, Charles L.; Bartell, Richard J.; Krizo, Matthew J.; Fiorino, Steven T.; Cusumano, Salvatore J.; Whiteley, Matthew R.

    2010-04-01

    Laser weapon systems comprised of tiled subapertures are rapidly emerging in the directed energy community. The Air Force Institute of Technology Center for Directed Energy (AFIT/CDE), under sponsorship of the HEL Joint Technology Office has developed performance models of such laser weapon system configurations consisting of tiled arrays of both slab and fiber subapertures. These performance models are based on results of detailed waveoptics analyses conducted using WaveTrain. Previous performance model versions developed in this effort represent system characteristics such as subaperture shape, aperture fill factor, subaperture intensity profile, subaperture placement in the primary aperture, subaperture mutual coherence (piston), subaperture differential jitter (tilt), and beam quality wave-front error associated with each subaperture. The current work is a prerequisite for the development of robust performance models for turbulence and thermal blooming effects for tiled systems. Emphasis is placed on low altitude tactical scenarios. The enhanced performance model developed will be added to AFIT/CDE's HELEEOS parametric one-on-one engagement level model via the Scaling for High Energy Laser and Relay Engagement (SHaRE) toolbox.

  5. Galerkin Models Enhancements for Flow Control

    NASA Astrophysics Data System (ADS)

    Tadmor, Gilead; Lehmann, Oliver; Noack, Bernd R.; Morzyński, Marek

    Low order Galerkin models were originally introduced as an effective tool for stability analysis of fixed points and, later, of attractors, in nonlinear distributed systems. An evolving interest in their use as low complexity dynamical models goes well beyond that original intent. It exposes often severe weaknesses of low order Galerkin models as dynamic predictors and has motivated efforts, spanning nearly three decades, to alleviate these shortcomings. Transients across natural and enforced variations in the operating point, unsteady inflow, boundary actuation, and both aeroelastic and actuated boundary motion are hallmarks of current and envisioned needs in feedback flow control applications, bringing these shortcomings to even higher prominence. Building on the discussion in our previous chapters, we shall now review changes in the Galerkin paradigm that aim to create a mathematically and physically consistent modeling framework that removes what are otherwise intractable roadblocks. We shall then highlight some guiding design principles that are especially important in the context of these models. We shall continue to use the simple example of wake flow instabilities to illustrate the various issues, ideas and methods that will be discussed in this chapter.

  6. The Influence of the Green River Lake System on the Local Climate During the Early Eocene Period

    NASA Astrophysics Data System (ADS)

    Elguindi, N.; Thrasher, B.; Sloan, L. C.

    2006-12-01

    Several modeling efforts have attempted to reproduce the climate of early Eocene North America. However, when compared to proxy data, General Circulation Models (GCMs) tend to produce a large-scale cold bias. Although higher-resolution Regional Climate Models (RCMs), which are able to resolve many of the sub-GCM-scale forcings, improve this cold bias, RCMs are still unable to reproduce the warm climate of the Eocene. From geologic data, we know that the greater Green River and the Uinta basins were intermontane basins with a large lake system during portions of the Eocene. We speculate that the absence of these lakes in previous modeling studies may explain part of the persistent cold bias of GCMs and RCMs. In this study, we utilize a regional climate model coupled with a 1D lake model in an attempt to reduce the uncertainties and biases associated with climate simulations over Eocene western North America. Specifically, we include the Green River Lake system in our RCM simulation and compare climates with and without lakes to proxy data.

  7. On the Relationship Between Effort Toward an Ongoing Task and Cue Detection in Event-Based Prospective Memory

    ERIC Educational Resources Information Center

    Marsh, Richard L.; Hicks, Jason L.; Cook, Gabriel I.

    2005-01-01

    In recent theories of event-based prospective memory, researchers have debated what degree of resources are necessary to identify a cue as related to a previously established intention. In order to simulate natural variations in attention, the authors manipulated effort toward an ongoing cognitive task in which intention-related cues were embedded…

  8. Using Common Assignments to Strengthen Teaching and Learning: Research on the Second Year of Implementation

    ERIC Educational Resources Information Center

    Reumann-Moore, Rebecca; Duffy, Mark

    2015-01-01

    Initiated for the 2013-14 school year, the Common Assignment Study (CAS) is a three-year effort being led by the Colorado Education Initiative (CEI) and The Fund for Transforming Education in Kentucky (The Fund) with support from the Bill & Melinda Gates Foundation. Conceptually, CAS builds on previous efforts to improve instruction through…

  9. Learner Characteristic Based Learning Effort Curve Mode: The Core Mechanism on Developing Personalized Adaptive E-Learning Platform

    ERIC Educational Resources Information Center

    Hsu, Pi-Shan

    2012-01-01

    This study aims to develop the core mechanism for realizing the development of personalized adaptive e-learning platform, which is based on the previous learning effort curve research and takes into account the learner characteristics of learning style and self-efficacy. 125 university students from Taiwan are classified into 16 groups according…

  10. Benign Weather Modification,

    DTIC Science & Technology

    1997-05-01

    with respect to weather modification. Publicizing these efforts is necessary in order to eliminate all traces of "cloak and dagger" efforts tainting...theater, the Japanese used the weather to conceal their approach to the Hawaiian Islands, enhancing their surprise attack on Pearl Harbor. There... attack is different than previous researcher goals. Therefore, future experiments would have to be tailored for the new objective of hiding military

  11. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    Disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns: (1) Often faults aren't addressed until nominal spacecraft design is fairly stable. (2) Design relegated to after-the-fact patchwork, Band-Aid approach. Progress is being made on a number of fronts outside of Handbook effort: (1) Processes, Practices and Tools being developed at some Centers and Institutions (2) Management recognition. Constellation FM roles, Discovery/New Frontiers mission reviews (3) Potential Technology solutions. New approaches could avoid many current pitfalls (3a) New FM architectures, including model-based approach integrated with NASA's MBSE (Model-Based System Engineering) efforts (3b) NASA's Office of the Chief Technologist: FM identified in seven of NASA's 14 Space Technology Roadmaps. Opportunity to coalesce and establish thrust area to progressively develop new FM techniques. FM Handbook will help ensure that future missions do not encounter same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start: (1) Still need Version 2 Agency-wide FM Handbook to expand Handbook to other areas, especially crewed missions. (2) Still need to reach out to other organizations to develop common understanding and vocabulary. Handbook doesn't/can't address all Workshop recommendations. Still need to identify how to address programmatic and infrastructure issues.

  12. Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

    NASA Astrophysics Data System (ADS)

    Herrick, Gregory Paul

    The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows.
    While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research---experimental, theoretical, and computational---has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have raised the question, "What about the clearance flows?" This research begins to address that question.
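
    The "multiple grid blocks per processing core" infrastructure amounts to a load-balancing problem: distribute blocks of unequal cell counts across cores so no core sits idle. The abstract does not describe the software's actual scheduler; purely as an illustrative sketch, a greedy largest-block-first (LPT) heuristic might look like:

```python
import heapq

def assign_blocks(block_sizes, n_cores):
    """Greedily assign grid blocks to cores, largest first (LPT heuristic),
    so that per-core total cell counts stay roughly balanced."""
    # Min-heap of (load, core_id); each block goes to the least-loaded core.
    heap = [(0, core) for core in range(n_cores)]
    heapq.heapify(heap)
    assignment = {core: [] for core in range(n_cores)}
    for block, size in sorted(enumerate(block_sizes), key=lambda b: -b[1]):
        load, core = heapq.heappop(heap)
        assignment[core].append(block)
        heapq.heappush(heap, (load + size, core))
    return assignment
```

    For six blocks of 9, 7, 6, 5, 4, and 3 cells on three cores, this yields per-core loads of 12, 11, and 11.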

  13. Estimation of Coastal Freshwater Discharge into Prince William Sound using a High-Resolution Hydrological Model

    NASA Astrophysics Data System (ADS)

    Beamer, J. P.; Hill, D. F.; Liston, G. E.; Arendt, A. A.; Hood, E. W.

    2013-12-01

    In Prince William Sound (PWS), Alaska, there is a pressing need for accurate estimates of the spatial and temporal variations in coastal freshwater discharge (FWD). FWD into PWS originates from streamflow due to rainfall, annual snowmelt, and changes in stored glacier mass and is important because it helps establish spatial and temporal patterns in ocean salinity and temperature, and is a time-varying boundary condition for oceanographic circulation models. Previous efforts to model FWD into PWS have been heavily empirical, with many physical processes absorbed into calibration coefficients that, in many cases, were calibrated to streams and rivers not hydrologically similar to those discharging into PWS. In this work we adapted and validated a suite of high-resolution (in space and time), physically-based, distributed weather, snowmelt, and runoff-routing models designed specifically for snowmelt- and glacier melt-dominated watersheds like those of PWS in order to: 1) provide high-resolution, real-time simulations of snowpack and FWD, and 2) provide a record of historical variations of FWD. SnowModel, driven with gridded topography, land cover, and 32 years (1979-2011) of 3-hourly North American Regional Reanalysis (NARR) atmospheric forcing data, was used to simulate snowpack accumulation and melt across a PWS model domain. SnowModel outputs of daily snow water equivalent (SWE) depth and grid-cell runoff volumes were then coupled with HydroFlow, a runoff-routing model which routed snowmelt, glacier melt, and rainfall to each watershed outlet (PWS coastline) in the simulation domain. The end product was a continuous 32-year simulation of daily FWD into PWS. In order to validate the models, SWE and snow depths from SnowModel were compared with observed SWE and snow depths from SNOTEL and snow survey data, and discharge from HydroFlow was compared with observed streamflow measurements.
    As a second phase of this research effort, the coupled models will be set up to run in real time, where daily measurements from weather stations in the PWS will be used to drive simulations of snow cover and streamflow. In addition, we will deploy a strategic array of instrumentation aimed at validating the simulated weather estimates and the calculations of freshwater discharge. Upon successful implementation and validation of the modeling system, it will join established and ongoing computational and observational efforts that share the common goal of establishing a comprehensive understanding of the physical behavior of PWS.
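
    SnowModel is a physically based, energy-balance scheme; purely for orientation, the much simpler temperature-index (degree-day) approach that such models improve upon can be sketched as follows (the degree-day factor and threshold values here are illustrative, not taken from the study):

```python
def degree_day_melt(swe_mm, temps_c, ddf=3.0, t_base=0.0):
    """Temperature-index (degree-day) snowmelt: melt each day is
    DDF * max(T - T_base, 0) in mm of water equivalent, capped by the
    remaining snow water equivalent.  Returns the daily melt series."""
    melt = []
    for t in temps_c:
        m = min(swe_mm, ddf * max(t - t_base, 0.0))
        swe_mm -= m
        melt.append(m)
    return melt
```

    For example, a 10 mm SWE snowpack under three days at 2 °C melts out as 6 mm, then 4 mm, then nothing.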

  14. [Psychosocial factors at work and cardiovascular diseases: contribution of the Effort-Reward Imbalance model].

    PubMed

    Niedhammer, I; Siegrist, J

    1998-11-01

    The effect of psychosocial factors at work on health, especially cardiovascular health, has given rise to growing concern in occupational epidemiology over the last few years. Two theoretical models, Karasek's model and the Effort-Reward Imbalance model, have been developed to evaluate psychosocial factors at work within specific conceptual frameworks in an attempt to take into account the serious methodological difficulties inherent in the evaluation of such factors. Karasek's model, the most widely used, measures three factors: psychological demands, decision latitude, and social support at work. Many studies have shown the predictive effects of these factors on cardiovascular diseases independently of well-known cardiovascular risk factors. The more recent Effort-Reward Imbalance model takes into account the role of individual coping characteristics, which was neglected in Karasek's model. The Effort-Reward Imbalance model focuses on the reciprocity of exchange in occupational life, where high-cost/low-gain conditions are considered particularly stressful. Three dimensions of reward are distinguished: money, esteem, and gratification in terms of promotion prospects and job security. Several studies already indicate that high-effort/low-reward conditions are predictive of cardiovascular diseases.
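
    In applied work, the high-cost/low-gain condition of the Effort-Reward Imbalance model is commonly operationalized as an effort-reward ratio with an item-count correction. A minimal sketch (the item counts shown are those of the standard ERI questionnaire scales, used here only for illustration):

```python
def effort_reward_ratio(effort_score, reward_score,
                        n_effort_items=6, n_reward_items=11):
    """Effort-reward ratio as commonly scored for Siegrist's ERI
    questionnaire: e / (r * c), where c = n_effort_items / n_reward_items
    corrects for the unequal number of items on the two scales.
    Values above 1 indicate high effort relative to reward."""
    c = n_effort_items / n_reward_items
    return effort_score / (reward_score * c)
```

    For instance, an effort score of 12 against a reward score of 22 gives a ratio of exactly 1.0, the balance point of the model.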

  15. Evaluation of simulated ocean carbon in the CMIP5 earth system models

    NASA Astrophysics Data System (ADS)

    Orr, James; Brockmann, Patrick; Seferian, Roland; Servonnat, Jérôme; Bopp, Laurent

    2013-04-01

    We maintain a centralized model output archive containing output from the previous generation of Earth System Models (ESMs), 7 models used in the IPCC AR4 assessment. Output is in a common format located on a centralized server and is publicly available through a web interface. Through the same interface, LSCE/IPSL has also made available output from the Coupled Model Intercomparison Project (CMIP5), the foundation for the ongoing IPCC AR5 assessment. The latter includes ocean biogeochemical fields from more than 13 ESMs. Modeling partners across 3 EU projects refer to the combined AR4-AR5 archive and comparison as OCMIP5, building on previous phases of OCMIP (Ocean Carbon Cycle Intercomparison Project) and making a clear link to IPCC AR5 (CMIP5). While now focusing on assessing the latest generation of results (AR5, CMIP5), this effort is also able to put them in context (AR4). For model comparison and evaluation, we have also stored computed derived variables (e.g., those needed to assess ocean acidification) and key fields regridded to a common 1°x1° grid, thus complementing the standard CMIP5 archive. The combined AR4-AR5 output (OCMIP5) has been used to compute standard quantitative metrics, both global and regional, and those have been synthesized with summary diagrams. In addition, for key biogeochemical fields we have deconvolved spatiotemporal components of the mean square error in order to constrain which models go wrong where. Here we will detail results from these evaluations which have exploited gridded climatological data. The archive, interface, and centralized evaluation provide a solid technical foundation, upon which collaboration and communication are being broadened in the ocean biogeochemical modeling community. Ultimately we aim to encourage wider use of the OCMIP5 archive.
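
    One common way to deconvolve mean-square error into interpretable parts, not necessarily the exact decomposition used by these authors, separates the mean bias from amplitude and pattern-correlation terms:

```python
import numpy as np

def mse_decomposition(model, obs):
    """Decompose mean-square error into bias^2, an amplitude (variance) term,
    and a phase/pattern term:
        MSE = bias^2 + (s_m - s_o)^2 + 2*s_m*s_o*(1 - r),
    where s_m, s_o are population standard deviations and r is the
    correlation between model and observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = model.mean() - obs.mean()
    sm, so = model.std(), obs.std()        # population std (ddof=0)
    r = np.corrcoef(model, obs)[0, 1]
    return bias**2, (sm - so)**2, 2 * sm * so * (1 - r)
```

    The three terms sum exactly to the ordinary MSE, so a model's error can be attributed to offset, amplitude mismatch, or misplaced spatial/temporal pattern.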

  16. Hydrogen and Oxygen Isotope Ratios in Body Water and Hair: Modeling Isotope Dynamics in Nonhuman Primates

    PubMed Central

    O’Grady, Shannon P.; Valenzuela, Luciano O.; Remien, Christopher H.; Enright, Lindsey E.; Jorgensen, Matthew J.; Kaplan, Jay R.; Wagner, Janice D.; Cerling, Thure E.; Ehleringer, James R.

    2012-01-01

    The stable isotopic composition of drinking water, diet, and atmospheric oxygen influences the isotopic composition of body water (2H/1H, 18O/16O expressed as δ2H and δ18O). In turn, body water influences the isotopic composition of organic matter in tissues, such as hair and teeth, which are often used to reconstruct historical dietary and movement patterns of animals and humans. Here, we used a nonhuman primate system (Macaca fascicularis) to test the robustness of two different mechanistic stable isotope models: a model to predict the δ2H and δ18O values of body water and a second model to predict the δ2H and δ18O values of hair. In contrast to previous human-based studies, use of nonhuman primates fed controlled diets allowed us to further constrain model parameter values and evaluate model predictions. Both models reliably predicted the δ2H and δ18O values of body water and of hair. Moreover, the isotope data allowed us to better quantify values for two critical variables in the models: the δ2H and δ18O values of gut water and the 18O isotope fractionation associated with a carbonyl oxygen-water interaction in the gut (αow). Our modeling efforts indicated that better predictions for body water and hair isotope values were achieved by making the isotopic composition of gut water approach that of body water. Additionally, the value of αow was 1.0164, in close agreement with the only other previously measured observation (microbial spore cell walls), suggesting robustness of this fractionation factor across different biological systems. PMID:22553163
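
    The body-water models referenced here are mass-balance models; as a deliberately stripped-down sketch (ignoring the output fractionation and atmospheric-oxygen terms the full models include), the steady-state delta value of body water is a flux-weighted mixture of its inputs:

```python
def body_water_delta(fluxes, deltas):
    """Zero-order sketch of a steady-state isotope mass balance: the
    body-water delta value (per mil) is the flux-weighted mean of the
    input delta values (e.g., drinking water, food water, metabolic
    water), with output fractionation neglected."""
    total = sum(fluxes)
    return sum(f * d for f, d in zip(fluxes, deltas)) / total
```

    For example, inputs in the proportion 2:1:1 with δ values of -60, -100, and -20 per mil mix to a body-water value of -60 per mil.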

  17. Hydrogen and oxygen isotope ratios in body water and hair: modeling isotope dynamics in nonhuman primates.

    PubMed

    O'Grady, Shannon P; Valenzuela, Luciano O; Remien, Christopher H; Enright, Lindsey E; Jorgensen, Matthew J; Kaplan, Jay R; Wagner, Janice D; Cerling, Thure E; Ehleringer, James R

    2012-07-01

    The stable isotopic composition of drinking water, diet, and atmospheric oxygen influences the isotopic composition of body water ((2)H/(1)H, (18)O/(16)O expressed as δ(2)H and δ(18)O). In turn, body water influences the isotopic composition of organic matter in tissues, such as hair and teeth, which are often used to reconstruct historical dietary and movement patterns of animals and humans. Here, we used a nonhuman primate system (Macaca fascicularis) to test the robustness of two different mechanistic stable isotope models: a model to predict the δ(2)H and δ(18)O values of body water and a second model to predict the δ(2)H and δ(18)O values of hair. In contrast to previous human-based studies, use of nonhuman primates fed controlled diets allowed us to further constrain model parameter values and evaluate model predictions. Both models reliably predicted the δ(2)H and δ(18)O values of body water and of hair. Moreover, the isotope data allowed us to better quantify values for two critical variables in the models: the δ(2)H and δ(18)O values of gut water and the (18)O isotope fractionation associated with a carbonyl oxygen-water interaction in the gut (α(ow)). Our modeling efforts indicated that better predictions for body water and hair isotope values were achieved by making the isotopic composition of gut water approach that of body water. Additionally, the value of α(ow) was 1.0164, in close agreement with the only other previously measured observation (microbial spore cell walls), suggesting robustness of this fractionation factor across different biological systems. © 2012 Wiley Periodicals, Inc.

  18. Automotive Underhood Thermal Management Analysis Using 3-D Coupled Thermal-Hydrodynamic Computer Models: Thermal Radiation Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pannala, S; D'Azevedo, E; Zacharia, T

    The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiation thermal calculations can be calculated once and for all at the beginning of the simulation. The cost for online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which usually is critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts off with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts on which this project builds. The results are discussed in section E. This report ends with conclusions and future scope of work in section F.
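
    Once view factors are precomputed, the net-radiation method for diffuse-gray enclosures reduces to a linear solve for surface radiosities. A minimal sketch (uniform properties per surface, per-unit-area fluxes, equal surface areas assumed; not the CHAD implementation itself):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiosity(F, eps, T):
    """Diffuse-gray net-radiation method: solve
        (I - (1 - eps) F) J = eps * sigma * T^4
    for surface radiosities J, then net radiative flux per unit area
        q_i = J_i - sum_j F_ij * J_j."""
    F, eps, T = (np.asarray(a, float) for a in (F, eps, T))
    Eb = SIGMA * T**4                       # blackbody emissive power
    A = np.eye(len(T)) - (1 - eps)[:, None] * F
    J = np.linalg.solve(A, eps * Eb)
    q = J - F @ J
    return J, q
```

    For a two-surface enclosure the two net fluxes balance, and at uniform temperature the exchange vanishes, which is a convenient sanity check when validating a view factor matrix.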

  19. Environmental factors and their role in community integration after spinal cord injury.

    PubMed

    Lysack, Cathy; Komanecky, Marie; Kabel, Allison; Cross, Katherine; Neufeld, Stewart

    2007-01-01

    The International Classification of Functioning, Disability and Health (ICF) model presents an opportunity to better understand previously neglected long-term social outcomes after traumatic spinal cord injury (SCI), especially the experience of participation. The study explored the relationship between perceived environmental barriers and perceived community integration (a participation proxy) in a sample of adults with traumatic SCI. The study interviewed African American and White women and men (n = 136) who had lived with SCI for an average of 11.5 years. Participants reported environmental barriers at twice the level indicated by previous studies; the natural environment and the policies of government were the most problematic. Levels of community integration were also high. Data suggest a significant relationship (p < .01) between perceived environmental barriers and community integration for adults with SCI, providing support for the ICF model. Improved measures and more sophisticated concepts and theories are needed to explicate the relationship between environmental factors and participation concepts in the ICF. With respect to practice, occupational therapists need to be aware that removal of environmental barriers is only a first step in the more complex effort to facilitate optimal community integration after SCI.

  20. Tissue dielectric measurement using an interstitial dipole antenna.

    PubMed

    Wang, Peng; Brace, Christopher L

    2012-01-01

    The purpose of this study was to develop a technique to measure the dielectric properties of biological tissues with an interstitial dipole antenna based upon previous efforts for open-ended coaxial probes. The primary motivation for this technique is to facilitate treatment monitoring during microwave tumor ablation by utilizing the heating antenna without additional intervention or interruption of the treatment. The complex permittivity of a tissue volume surrounding the antenna was calculated from reflection coefficients measured after high-temperature microwave heating by using a rational function model of the antenna's input admittance. Three referencing liquids were needed for measurement calibration. The dielectric measurement technique was validated ex vivo in normal and ablated bovine livers. Relative permittivity and effective conductivity were lower in the ablation zone when compared to normal tissue, consistent with previous results. The dipole technique demonstrated a mean 10% difference of permittivity values when compared to open-ended coaxial cable measurements in the frequency range of 0.5-20 GHz. Variability in measured permittivities could be smoothed by fitting to a Cole-Cole dispersion model. Further development of this technique may facilitate real-time monitoring of microwave ablation treatments through the treatment applicator. © 2011 IEEE
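
    The Cole-Cole dispersion model used for smoothing the measured permittivities has a standard closed form; a sketch with illustrative parameter values (not those fitted in the study):

```python
import numpy as np

def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha, sigma_s=0.0):
    """Single-pole Cole-Cole dispersion for complex relative permittivity:
        eps*(w) = eps_inf + d_eps / (1 + (j w tau)^(1 - alpha))
                  + sigma_s / (j w eps0),
    where alpha = 0 recovers a Debye relaxation and sigma_s is an
    optional static conductivity term."""
    eps0 = 8.8541878128e-12
    w = 2 * np.pi * np.asarray(freq_hz, float)
    eps = eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
    if sigma_s:
        eps = eps + sigma_s / (1j * w * eps0)
    return eps
```

    Well below the relaxation frequency the real part approaches eps_inf + d_eps; well above it, eps_inf, so a fit across 0.5-20 GHz constrains both limits and the broadening parameter alpha.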

  1. A CRISPR Path to Engineering New Genetic Mouse Models for Cardiovascular Research.

    PubMed

    Miano, Joseph M; Zhu, Qiuyu Martin; Lowenstein, Charles J

    2016-06-01

    Previous efforts to target the mouse genome for the addition, subtraction, or substitution of biologically informative sequences required complex vector design and a series of arduous steps only a handful of laboratories could master. The facile and inexpensive clustered regularly interspaced short palindromic repeats (CRISPR) method has now superseded traditional means of genome modification such that virtually any laboratory can quickly assemble reagents for developing new mouse models for cardiovascular research. Here, we briefly review the history of CRISPR in prokaryotes, highlighting major discoveries leading to its formulation for genome modification in the animal kingdom. Core components of CRISPR technology are reviewed and updated. Practical pointers for 2-component and 3-component CRISPR editing are summarized with many applications in mice including frameshift mutations, deletion of enhancers and noncoding genes, nucleotide substitution of protein-coding and gene regulatory sequences, incorporation of loxP sites for conditional gene inactivation, and epitope tag integration. Genotyping strategies are presented and topics of genetic mosaicism and inadvertent targeting discussed. Finally, clinical applications and ethical considerations are addressed as the biomedical community eagerly embraces this astonishing innovation in genome editing to tackle previously intractable questions. © 2016 American Heart Association, Inc.
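
    A first step in assembling CRISPR reagents is locating protospacers adjacent to a PAM; for SpCas9 the canonical PAM is NGG. A minimal forward-strand scan (real guide-design tools also check the reverse strand, off-target hits, and GC content):

```python
import re

def find_spcas9_targets(seq, protospacer_len=20):
    """Scan the forward strand for SpCas9 target sites: a protospacer of
    the given length immediately followed by an NGG PAM.  Returns a list
    of (start, protospacer, pam) tuples, 0-based, forward strand only."""
    seq = seq.upper()
    sites = []
    # Lookahead so that overlapping sites are all reported.
    pattern = r"(?=([ACGT]{%d})([ACGT]GG))" % protospacer_len
    for m in re.finditer(pattern, seq):
        sites.append((m.start(), m.group(1), m.group(2)))
    return sites
```

    The lookahead is important: adjacent GG-rich stretches create overlapping candidate sites that a plain non-overlapping search would miss.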

  2. Tissue Dielectric Measurement Using an Interstitial Dipole Antenna

    PubMed Central

    Wang, Peng; Brace, Christopher L.

    2012-01-01

    The purpose of this study was to develop a technique to measure the dielectric properties of biological tissues with an interstitial dipole antenna based upon previous efforts for open-ended coaxial probes. The primary motivation for this technique is to facilitate treatment monitoring during microwave tumor ablation by utilizing the heating antenna without additional intervention or interruption of the treatment. The complex permittivity of a tissue volume surrounding the antenna was calculated from reflection coefficients measured after high-temperature microwave heating by using a rational function model of the antenna’s input admittance. Three referencing liquids were needed for measurement calibration. The dielectric measurement technique was validated ex vivo in normal and ablated bovine livers. Relative permittivity and effective conductivity were lower in the ablation zone when compared to normal tissue, consistent with previous results. The dipole technique demonstrated a mean 10% difference of permittivity values when compared to open-ended coaxial cable measurements in the frequency range of 0.5–20 GHz. Variability in measured permittivities could be smoothed by fitting to a Cole–Cole dispersion model. Further development of this technique may facilitate real-time monitoring of microwave ablation treatments through the treatment applicator. PMID:21914566

  3. A CRISPR Path to Engineering New Genetic Mouse Models for Cardiovascular Research

    PubMed Central

    Miano, Joseph M.; Zhu, Qiuyu Martin; Lowenstein, Charles J.

    2016-01-01

    Previous efforts to target the mouse genome for the addition, subtraction, or substitution of biologically informative sequences required complex vector design and a series of arduous steps only a handful of labs could master. The facile and inexpensive clustered regularly interspaced short palindromic repeats (CRISPR) method has now superseded traditional means of genome modification such that virtually any lab can quickly assemble reagents for developing new mouse models for cardiovascular research. Here we briefly review the history of CRISPR in prokaryotes, highlighting major discoveries leading to its formulation for genome modification in the animal kingdom. Core components of CRISPR technology are reviewed and updated. Practical pointers for two-component and three-component CRISPR editing are summarized with a number of applications in mice including frameshift mutations, deletion of enhancers and non-coding genes, nucleotide substitution of protein-coding and gene regulatory sequences, incorporation of loxP sites for conditional gene inactivation, and epitope tag integration. Genotyping strategies are presented and topics of genetic mosaicism and inadvertent targeting discussed. Finally, clinical applications and ethical considerations are addressed as the biomedical community eagerly embraces this astonishing innovation in genome editing to tackle previously intractable questions. PMID:27102963

  4. An Admittance Survey of Large Volcanoes on Venus: Implications for Volcano Growth

    NASA Technical Reports Server (NTRS)

    Brian, A. W.; Smrekar, S. E.; Stofan, E. R.

    2004-01-01

    Estimates of the thickness of the venusian crust and elastic lithosphere are important in determining the rheological and thermal properties of Venus. These estimates offer insights into what conditions are needed for certain features, such as large volcanoes and coronae, to form. Lithospheric properties for much of the large volcano population on Venus are not well known. Previous studies of elastic thickness (Te) have concentrated on individual or small groups of edifices, or have used volcano models and fixed values of Te to match with observations of volcano morphologies. In addition, previous studies use different methods to estimate lithospheric parameters, making it difficult to compare their results. Following recent global studies of the admittance signatures exhibited by the venusian corona population, we performed a similar survey into large volcanoes in an effort to determine the range of lithospheric parameters shown by these features. This survey of the entire large volcano population used the same method throughout so that all estimates could be directly compared. By analysing a large number of edifices and comparing our results to observations of their morphology and models of volcano formation, we can help determine the controlling parameters that govern volcano growth on Venus.
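
    Elastic thickness estimates such as these enter flexure calculations through the flexural rigidity D = E·Te³/(12(1 − ν²)); a one-line helper with illustrative elastic constants (assumed values, not those used in the survey):

```python
def flexural_rigidity(te_m, E=1.0e11, nu=0.25):
    """Flexural rigidity D = E * Te^3 / (12 * (1 - nu^2)) of an elastic
    plate of effective thickness Te (m), Young's modulus E (Pa), and
    Poisson's ratio nu.  D controls how lithospheric loads flex."""
    return E * te_m**3 / (12 * (1 - nu**2))
```

    Because D scales with the cube of Te, a factor-of-two uncertainty in elastic thickness implies nearly an order of magnitude in rigidity, which is why consistent methodology across the volcano population matters.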

  5. The construction, fouling and enzymatic cleaning of a textile dye surface.

    PubMed

    Onaizi, Sagheer A; He, Lizhong; Middelberg, Anton P J

    2010-11-01

    The enzymatic cleaning of a rubisco protein stain bound onto Surface Plasmon Resonance (SPR) biosensor chips having a dye-bound upper layer is investigated. This novel method allowed, for the first time, a detailed kinetic study of rubisco cleanability (defined as fraction of adsorbed protein removed from a surface) from dyed surfaces (mimicking fabrics) at different enzyme concentrations. Analysis of kinetic data using an established mathematical model able to decouple enzyme transfer and reaction processes [Onaizi, He, Middelberg, Chem. Eng. Sci. 64 (2008) 3868] revealed a striking effect of dyeing on enzymatic cleaning performance. Specifically, the absolute rate constants for enzyme transfer to and from a dye-bound rubisco stain were significantly higher than reported previously for un-dyed surfaces. These increased transfer rates resulted in higher surface cleanability. Higher enzyme mobility (i.e., higher enzyme adsorption and desorption rates) at the liquid-dye interface was observed, consistent with previous suggestions that enzyme surface mobility is likely correlated with overall enzyme cleaning performance. Our results show that reaction engineering models of enzymatic action at surfaces may provide insight able to guide the design of better stain-resistant surfaces, and may also guide efforts to improve cleaning formulations. Copyright 2010 Elsevier Inc. All rights reserved.
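
    The cited transfer-reaction framework decouples enzyme exchange with the surface from the surface reaction itself; a toy version of that idea (rate constants and functional forms are illustrative, not those of the Onaizi et al. model):

```python
def cleaning_kinetics(c_enzyme, ka, kd, kc, t_end, dt=0.01):
    """Toy transfer-reaction model of surface cleaning: enzyme surface
    coverage theta follows Langmuir-type exchange with the bulk, and the
    remaining stain fraction s decays at a rate proportional to theta
    (forward-Euler integration):
        d(theta)/dt = ka*c*(1 - theta) - kd*theta
        d(s)/dt     = -kc * theta * s
    Returns (theta, s) at t_end, with s(0) = 1."""
    theta, s = 0.0, 1.0
    for _ in range(int(t_end / dt)):
        dtheta = ka * c_enzyme * (1 - theta) - kd * theta
        ds = -kc * theta * s
        theta += dtheta * dt
        s += ds * dt
    return theta, s
```

    The structure makes the abstract's observation concrete: raising the transfer rates ka and kd (as dyeing apparently does) speeds the approach to the working coverage and hence the stain removal, even with the surface reaction rate kc unchanged.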

  6. Anonymous sources: where do adult β cells come from?

    PubMed Central

    German, Michael S.

    2013-01-01

    Evidence that the pool of insulin-producing β cells in the pancreas is reduced in both major forms of diabetes mellitus has led to efforts to understand β cell turnover in the adult pancreas. Unfortunately, previous studies have reached opposing conclusions regarding the source of new β cells during regeneration in the adult pancreas. In this issue of the JCI, Xiao et al. use a novel mouse model for detecting new β cells derived from non–β cells to demonstrate the absence of β cell neogenesis from non–β cells during normal postnatal growth and in models of β cell regeneration. This work adds to mounting evidence that in most physiological and pathological conditions, β cell neogenesis may not make large contributions to the postnatal β cell pool — at least not in rodents. PMID:23619356

  7. Parametric Modeling in Action: High Accuracy Seismology of Kepler DAV Stars

    NASA Astrophysics Data System (ADS)

    Giammichele, N.; Fontaine, G.; Charpinet, S.; Brassard, P.; Greiss, S.

    2015-06-01

    We summarize here the efforts made on the quantitative seismic analyses performed on two ZZ Ceti stars observed with the Kepler satellite. One of them, KIC 11911480, is located close to the blue edge of the instability strip, while the other, GD 1212, is found at the red edge. We emphasize the need for parameterized modeling and the forward approach to uniquely establish the fundamental parameters of the stars. We show how the internal structures as well as rotation profiles are unravelled to surprisingly large depths for degenerate stars such as ZZ Ceti pulsators, which further confirms the loss of stellar angular momentum before the white dwarf stage detected previously in GW Vir pulsating white dwarfs. This opens up interesting prospects for the upcoming K2 mission in the field of white dwarf asteroseismology.

  8. Appendices to the user's manual for a computer program for the emulation/simulation of a space station environmental control and life support system

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    A User's Manual for the Emulation/Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware, SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from testing. In addition, slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is the creation of two system simulations using these models. The first system presented consists of one air and one water processing system; the second, a potential Space Station air revitalization system.
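
    Simulation models of a cabin, crew, and CO2 removal hardware ultimately rest on a lumped mass balance; a minimal sketch (single well-mixed cabin, first-order removal, all parameter values hypothetical):

```python
def cabin_co2(volume_m3, gen_rate, removal_coeff, c0, t_end, dt=0.1):
    """Lumped cabin CO2 balance, V * dC/dt = generation - removal_coeff * C,
    integrated with forward Euler.  C is concentration (arbitrary units
    per m^3); the steady state is C = gen_rate / removal_coeff."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += dt * (gen_rate - removal_coeff * c) / volume_m3
    return c
```

    Run long relative to the time constant V/removal_coeff, the concentration settles at the generation/removal balance, which is the operating point a subsystem emulation like SAWD's would perturb.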

  9. The Genetic Architecture of Complex Traits in Teosinte (Zea mays ssp. parviglumis): New Evidence from Association Mapping

    USDA-ARS?s Scientific Manuscript database

    Our previous association analyses showed that variation at major regulatory genes contributes to standing variation for complex traits in Balsas teosinte, the progenitor of maize. This study expands our previous association mapping effort in teosinte by testing 123 markers in 52 candidate genes for ...

  10. Cognitive Load and Listening Effort: Concepts and Age-Related Considerations.

    PubMed

    Lemke, Ulrike; Besser, Jana

    2016-01-01

    Listening effort has been recognized as an important dimension of everyday listening, especially with regard to the comprehension of spoken language. At constant levels of comprehension performance, the level of effort exerted and perceived during listening can differ considerably across listeners and situations. In this article, listening effort is used as an umbrella term for two different types of effort that can arise during listening. One of these types is processing effort, which denotes the utilization of "extra" mental processing resources in listening conditions that are adverse for an individual. A conceptual description is introduced of how processing effort could be defined in terms of situational influences, the listener's auditory and cognitive resources, and the listener's personal state. The proposed relationship between processing effort and subjectively perceived listening effort is also discussed. Notably, previous research has shown that the availability of mental resources, as well as the ability to use them efficiently, changes over the course of adult aging. These common age-related changes in cognitive abilities and their neurocognitive organization are discussed in the context of the presented concept, especially regarding situations in which listening effort may be increased for older people.

  11. Predicting multi-level drug response with gene expression profile in multiple myeloma using hierarchical ordinal regression.

    PubMed

    Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo

    2018-05-10

    Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response, rather than combining the response to two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs the heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has important impact on cancer studies.
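
    The hierarchical ordinal logistic model builds on the cumulative-logit (proportional-odds) likelihood; a sketch of how category probabilities arise from ordered cutpoints and a linear predictor (the paper's Cauchy prior and quasi-Newton fitting are not reproduced here):

```python
import numpy as np

def ordinal_probs(x, beta, cutpoints):
    """Proportional-odds (cumulative logit) model:
        P(Y <= k | x) = logistic(c_k - x . beta)
    for ordered cutpoints c_1 < ... < c_{K-1}; the K per-category
    probabilities are successive differences of the cumulative curve."""
    eta = np.dot(x, beta)                      # linear predictor
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - eta)))
    cum = np.concatenate(([0.0], cum, [1.0]))  # anchor the CDF at 0 and 1
    return np.diff(cum)
```

    With K − 1 = 4 cutpoints this yields the five drug-response categories analyzed in the study; one shared beta across all cutpoints is exactly the proportional-odds assumption.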

  12. Thermal safety of ultrasound-enhanced ocular drug delivery: A modeling study

    PubMed Central

    Nabili, Marjan; Geist, Craig; Zderic, Vesna

    2015-01-01

    Purpose: Delivery of sufficient amounts of therapeutic drugs into the eye for treatment of various ocular diseases is often a challenging task. Ultrasound was shown to be effective in enhancing ocular drug delivery in the authors’ previous in vitro and in vivo studies. Methods: The study reported here was designed to investigate the safety of ultrasound application and its potential thermal effects in the eye using PZFlex modeling software. The safety limit in this study was set as a temperature increase of no more than 1.5 °C based on regulatory recommendations and previous experimental safety studies. Acoustic and thermal specifications of different human eye tissues were obtained from the published literature. The tissues of particular interest in this modeling safety study were cornea, lens, and the location of optic nerve in the posterior eye. Ultrasound application was modeled at frequencies of 400 kHz–1 MHz, intensities of 0.3–1 W/cm2, and exposure duration of 5 min, which were the parameters used in the authors’ previous drug delivery experiments. The baseline eye temperature was 37 °C. Results: The authors’ results showed that the maximal tissue temperatures after 5 min of ultrasound application were 38, 39, 39.5, and 40 °C in the cornea, 39.5, 40, 42, and 43 °C in the center of the lens, and 37.5, 38.5, and 39 °C in the back of the eye (at the optic nerve location) at frequencies of 400, 600, 800 kHz, and 1 MHz, respectively. Conclusions: The ocular temperatures reached at higher frequencies were considered unsafe based on current recommendations. At a frequency of 400 kHz and intensity of 0.8 W/cm2 (parameters shown in the authors’ previous in vivo studies to be optimal for ocular drug delivery), the temperature increase was small enough to be considered safe inside different ocular tissues. 
However, the impact of orbital bone and tissue perfusion should be included in future modeling efforts to determine the safety of this method in the whole orbit especially regarding potential adverse optic nerve heating at the location of the bone. PMID:26429235

  13. Sources of Sahelian-Sudan moisture: Insights from a moisture-tracing atmospheric model

    NASA Astrophysics Data System (ADS)

    Salih, Abubakr A. M.; Zhang, Qiong; Pausata, Francesco S. R.; Tjernström, Michael

    2016-07-01

    The summer rainfall across Sahelian-Sudan is one of the main sources of water for agricultural, human, and animal needs. However, the rainfall is characterized by large interannual variability, which has attracted extensive scientific efforts to understand it. This study attempts to identify the source regions that contribute to the Sahelian-Sudan moisture budget during July through September. We have used an atmospheric general circulation model with an embedded moisture-tracing module (Community Atmosphere Model version 3), forced by observed (1979-2013) sea-surface temperatures. The results suggest that about 40% of the moisture arrives with the flow associated with the seasonal migration of the Intertropical Convergence Zone (ITCZ) and originates from the Guinea Coast, central Africa, and the Western Sahel. The Mediterranean Sea, Arabian Peninsula, and South Indian Ocean regions account for 10.2%, 8.1%, and 6.4%, respectively. Local evaporation and the rest of the globe supply the region with 20.3% and 13.2%, respectively. We also compared the results from this study to a previous analysis that used the Lagrangian model FLEXPART forced by ERA-Interim. The two approaches differ when comparing individual regions, but are in better agreement when neighboring regions with similar atmospheric flow features are grouped together. Interannual variability in rainfall over the region is highly correlated with contributions from regions associated with the ITCZ movement, which is in turn linked to the Atlantic Multidecadal Oscillation. Our results are expected to provide insights for seasonal forecasting efforts over Sahelian Sudan.

  14. Pathogen-Host Associations and Predicted Range Shifts of Human Monkeypox in Response to Climate Change in Central Africa

    PubMed Central

    Thomassen, Henri A.; Fuller, Trevon; Asefi-Najafabady, Salvi; Shiplacoff, Julia A. G.; Mulembakani, Prime M.; Blumberg, Seth; Johnston, Sara C.; Kisalu, Neville K.; Kinkela, Timothée L.; Fair, Joseph N.; Wolfe, Nathan D.; Shongo, Robert L.; LeBreton, Matthew; Meyer, Hermann; Wright, Linda L.; Muyembe, Jean-Jacques; Buermann, Wolfgang; Okitolonda, Emile; Hensley, Lisa E.; Lloyd-Smith, James O.; Smith, Thomas B.; Rimoin, Anne W.

    2013-01-01

    Climate change is predicted to result in changes in the geographic ranges and local prevalence of infectious diseases, either through direct effects on the pathogen, or indirectly through range shifts in vector and reservoir species. To better understand the occurrence of monkeypox virus (MPXV), an emerging Orthopoxvirus in humans, under contemporary and future climate conditions, we used ecological niche modeling techniques in conjunction with climate and remote-sensing variables. We first created spatially explicit probability distributions of its candidate reservoir species in Africa's Congo Basin. Reservoir species distributions were subsequently used to model current and projected future distributions of human monkeypox (MPX). Results indicate that forest clearing and climate are significant driving factors of the transmission of MPX from wildlife to humans under current climate conditions. Models under contemporary climate conditions performed well, as indicated by high values for the area under the receiver operator curve (AUC), and tests on spatially randomly and non-randomly omitted test data. Future projections were made on IPCC 4th Assessment climate change scenarios for 2050 and 2080, ranging from more conservative to more aggressive, and representing the potential variation within which range shifts can be expected to occur. Future projections showed range shifts into regions where MPX has not been recorded previously. Increased suitability for MPX was predicted in eastern Democratic Republic of Congo. Models developed here are useful for identifying areas where environmental conditions may become more suitable for human MPX; targeting candidate reservoir species for future screening efforts; and prioritizing regions for future MPX surveillance efforts. PMID:23935820
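
    The AUC statistic used above to evaluate the niche models is equivalent to the probability that a randomly chosen presence site receives a higher model score than a randomly chosen absence site, which can be computed directly from the rank-sum identity. A minimal generic sketch, not tied to the MPX data:

    ```python
    import numpy as np

    def auc(scores, labels):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=bool)
        pos, neg = scores[labels], scores[~labels]
        # Fraction of (presence, absence) pairs ranked correctly; ties count half
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    # Perfect separation of presences from absences gives AUC = 1.0
    print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # → 1.0
    ```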

  15. Numerical Modeling of ROM Panel Closures at WIPP

    NASA Astrophysics Data System (ADS)

    Herrick, C. G.

    2016-12-01

    The Waste Isolation Pilot Plant (WIPP) in New Mexico is a U.S. DOE geologic repository for permanent disposal of defense-related transuranic (TRU) waste. Waste is emplaced in panels excavated in a bedded salt formation (Salado Fm.) at 655 m below ground surface. In 2014 the U.S. EPA approved the new Run-of-Mine Panel Closure System (ROMPCS) for WIPP. The closure system consists of 100 feet of run-of-mine (ROM) salt sandwiched between two barriers. Nuclear Waste Partnership LLC (the M&O contractor for WIPP) initiated construction of the ROMPCS. The design calls for three horizontal ROM salt layers at different compaction levels ranging from 70-85% of intact salt density. Due to panel drift size constraints and equipment availability, the design was modified. Three prototype panel closures were constructed: two having two layers of compacted ROM salt (one closure had 1% water added) and a third consisting simply of ROM salt with no layering or added water. Sampling of the prototype ROMPCS layers was conducted to determine the following ROM salt parameters: thickness, moisture content, emplaced density, and grain-size distribution. Previous modeling efforts were performed without knowledge of these ROM salt parameters; this modeling effort incorporates them. The program-accepted multimechanism deformation model is used to model intact salt room creep closure, and an advanced crushed salt model is used to model the ROM salt. A comparison of the two models' results with the prototypes' behavior is given. Our goal is to develop a realistic, reliable model that can be used for ROM salt applications at WIPP. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy. SAND2016-7259A

  16. Revision of empirical electric field modeling in the inner magnetosphere using Cluster data

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Torbert, R. B.; Spence, H. E.; Khotyaintsev, Yu. V.; Lindqvist, P.-A.

    2013-07-01

    Using Cluster data from the Electron Drift Instrument (EDI) and the Electric Field and Wave (EFW) instrument, we revise our empirically based, inner-magnetospheric electric field (UNH-IMEF) model for six geomagnetic activity ranges: Kp<1, 1≤Kp<2, 2≤Kp<3, 3≤Kp<4, 4≤Kp<5, and Kp≥4+. Electric field patterns are derived using one set of data and processing for lower activity levels and another for higher levels. As activity increases, the skewed potential contour related to the partial ring current appears on the nightside. With the revised analysis, we find that the skewed potential contours become clearer and the contours become denser on the nightside and morningside. Since the fluctuating components are not negligible, standard deviations from the modeled values are included in the model. In this study, we validate the derived model more extensively. We find experimentally that the skewed contours are located close to the last closed equipotential, consistent with previous theories. This gives physical context to our model and serves as one validation effort. As another validation effort, the derived results are compared with other models and measurements. From these comparisons, we conclude that our model has some clear advantages over the others.

  17. The potential distribution of the Russian wheat aphid (Diuraphis noxia): an updated distribution model including irrigation improves model fit for predicting potential spread.

    PubMed

    Avila, G A; Davidson, M; van Helden, M; Fagan, L

    2018-04-18

    Diuraphis noxia (Kurdjumov), the Russian wheat aphid, is one of the world's most invasive and economically important agricultural pests of wheat and barley. In May 2016, it was found for the first time in Australia, with further sampling confirming it was widespread throughout south-eastern regions. Russian wheat aphid is not yet present in New Zealand. The impacts of this pest, if it establishes in New Zealand, could include serious control problems in wheat- and barley-growing regions. To evaluate whether D. noxia could establish populations in New Zealand, we used the climate modelling software CLIMEX to locate where potentially viable populations might occur. We re-parameterised the existing CLIMEX model by Hughes and Maywald (1990), improving the model fit using currently known distribution records of D. noxia, and we also considered the role of irrigation in the potential spread of this invasive insect. The updated model now fits the current known distribution better than the previous Hughes and Maywald CLIMEX model, particularly in temperate and Mediterranean areas in Australia and Europe, and in more semi-arid areas in north-western China and Middle Eastern countries. Our model also highlights new climatically suitable areas for the establishment of D. noxia not previously reported, including parts of France, the UK, and New Zealand. Our results suggest that, when suitable host plants are present, Russian wheat aphid could establish in these regions. The new CLIMEX projections in the present study are useful tools to inform risk assessments and to target surveillance and monitoring efforts toward areas susceptible to invasion by Russian wheat aphid.

  18. Nitrogen feedbacks increase future terrestrial ecosystem carbon uptake in an individual-based dynamic vegetation model

    NASA Astrophysics Data System (ADS)

    Wårlind, D.; Smith, B.; Hickler, T.; Arneth, A.

    2014-11-01

    Recently a considerable amount of effort has been put into quantifying how interactions of the carbon and nitrogen cycle affect future terrestrial carbon sinks. Dynamic vegetation models, representing the nitrogen cycle with varying degree of complexity, have shown diverging constraints of nitrogen dynamics on future carbon sequestration. In this study, we use LPJ-GUESS, a dynamic vegetation model employing a detailed individual- and patch-based representation of vegetation dynamics, to evaluate how population dynamics and resource competition between plant functional types, combined with nitrogen dynamics, have influenced the terrestrial carbon storage in the past and to investigate how terrestrial carbon and nitrogen dynamics might change in the future (1850 to 2100; one representative "business-as-usual" climate scenario). Single-factor model experiments of CO2 fertilisation and climate change show generally similar directions of the responses of C-N interactions, compared to the C-only version of the model as documented in previous studies using other global models. Under an RCP 8.5 scenario, nitrogen limitation suppresses potential CO2 fertilisation, reducing the cumulative net ecosystem carbon uptake between 1850 and 2100 by 61%, and soil warming-induced increase in nitrogen mineralisation reduces terrestrial carbon loss by 31%. When environmental changes are considered conjointly, carbon sequestration is limited by nitrogen dynamics up to the present. However, during the 21st century, nitrogen dynamics induce a net increase in carbon sequestration, resulting in an overall larger carbon uptake of 17% over the full period. This contrasts with previous results with other global models that have shown an 8 to 37% decrease in carbon uptake relative to modern baseline conditions. Implications for the plausibility of earlier projections of future terrestrial C dynamics based on C-only models are discussed.

  19. A Dictionary of Acquisition and Contracting Terms

    DTIC Science & Technology

    1990-12-01

    consolidated national effort had been undertaken in this regard. Various individuals, commands and schools have attempted to assemble elements of definitions...however, the lack of a consolidated effort has caused a disparity in the definitions of terms. Previous graduate theses have researched definitions and...Supply Corps (SC), United States Navy (USN) initiated the consolidation of baseline consensus definitions in 1988. In 1989, Captain (CPT) John Cannaday

  20. An Investigation into the State of Status Planning of Tiv Language of Central Nigeria

    ERIC Educational Resources Information Center

    Terkimbi, Atonde

    2016-01-01

    The Tiv language is one of the major languages spoken in central Nigeria. The language is of the Benue-Congo subclass of the Bantu parent family. It has over four million speakers spoken in five states of Nigeria. The language like many other Nigerian languages is in dire need of language planning efforts and strategies. Some previous efforts were…

  1. The Mental Effort-Reward Imbalances Model and Its Implications for Behaviour Management

    ERIC Educational Resources Information Center

    Poulton, Alison; Whale, Samina; Robinson, Joanne

    2016-01-01

    Attention deficit hyperactivity disorder (ADHD) is frequently associated with oppositional defiant disorder (ODD). The Mental Effort Reward Imbalances Model (MERIM) explains this observational association as follows: in ADHD a disproportionate level of mental effort is required for sustaining concentration for achievement; in ODD the subjective…

  2. Controlling for varying effort in count surveys --an analysis of Christmas Bird Count Data

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1999-01-01

    The Christmas Bird Count (CBC) is a valuable source of information about midwinter populations of birds in the continental U.S. and Canada. Analysis of CBC data is complicated by substantial variation among sites and years in the effort expended in counting; this feature of the CBC is common to many other wildlife surveys. Specification of a method for adjusting counts for effort is a matter of some controversy. Here, we present models for longitudinal count surveys with varying effort; these describe the effect of effort as proportional to exp(B * effort^p), where B and p are parameters. For any fixed p, our models are loglinear in the transformed explanatory variable (effort)^p and other covariables. Hence we fit a collection of loglinear models corresponding to a range of values of p, and select the best effort adjustment from among these on the basis of fit statistics. We apply this procedure to data for six bird species in five regions, for the period 1959-1988.
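
    The selection procedure described above, choosing the exponent p by comparing fit statistics across a grid, can be sketched on simulated data. For simplicity this uses least squares on log counts as a stand-in for the loglinear (Poisson) fits in the paper; all values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated survey counts whose expectation scales as exp(B * effort^p)
    effort = rng.uniform(1.0, 20.0, size=200)
    true_B, true_p = 0.8, 0.5
    counts = rng.poisson(5.0 * np.exp(true_B * effort**true_p))

    def rss_for(p):
        """Least-squares fit of log(count) on effort**p; returns (RSS, B_hat)."""
        X = np.column_stack([np.ones_like(effort), effort**p])
        beta, *_ = np.linalg.lstsq(X, np.log(counts + 1.0), rcond=None)
        resid = np.log(counts + 1.0) - X @ beta
        return float(resid @ resid), beta[1]

    # Fit the family of models over a grid of p; keep the best-fitting exponent
    p_grid = np.round(np.arange(0.1, 1.51, 0.1), 2)
    best_p = min(p_grid, key=lambda p: rss_for(p)[0])
    B_hat = rss_for(best_p)[1]
    ```

    With a strong simulated signal, the grid search recovers an exponent near the true p = 0.5, mirroring how the authors pick the effort adjustment from fit statistics.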

  3. Modifying cochlear implant design: advantages of placing a return electrode in the modiolus.

    PubMed

    Ho, Steven Y; Wiet, Richard J; Richter, Claus-Peter

    2004-07-01

    A modiolar return electrode significantly increases the current flow across spiral ganglion cells into the modiolus, and may decrease the cochlear implant's power requirements. Ideal cochlear implants should maximize current flow into the modiolus to stimulate auditory neurons. Previous efforts to facilitate current flow through the modiolus included the fabrication and use of precurved electrodes designed to "hug" the modiolus and silastic positioners designed to place the electrodes closer to the modiolus. In contrast to earlier efforts, this study explores the effects of return electrode placement on current distributions in the modiolus. The effects of return electrode positioning on current flow in the modiolus were studied in a Plexiglas model of the cochlea. Results of model measurements were confirmed by measurements in the modiolus of human temporal bones. The return electrode was placed either within the modiolus, or remotely, outside the temporal bone, simulating contemporary cochlear implant configurations using monopolar stimulation. Cochlear model results clearly show that modiolar current amplitudes can be influenced significantly by the location of the return electrode, being larger when placed into the modiolus. Temporal bone data show similar findings. Voltages recorded in the modiolus are, on average, 2.8 times higher with the return electrode in the modiolus compared with return electrode locations outside the temporal bone. Placing a cochlear implant's return electrode in the modiolus should significantly reduce its power consumption. Reducing power requirements should lead to improved efficiency, safer long-term use, and longer device life.

  4. Application of a model for delivering occupational safety and health to smaller businesses: Case studies from the US

    PubMed Central

    Cunningham, Thomas R.; Sinclair, Raymond

    2015-01-01

    Smaller firms are the majority in every industry in the US, and they endure a greater burden of occupational injuries, illnesses, and fatalities than larger firms. Smaller firms often lack the necessary resources for effective occupational safety and health activities, and many require external assistance with safety and health programming. Based on previous work by researchers in Europe and New Zealand, NIOSH researchers developed a model for occupational safety and health intervention in small businesses. This model was evaluated with several intermediary organizations. Four case studies which describe efforts to reach small businesses with occupational safety and health assistance include the following: trenching safety training for construction, basic compliance and hazard recognition for general industry, expanded safety and health training for restaurants, and fall prevention and respirator training for boat repair contractors. Successful efforts included participation by the initiator among the intermediaries’ planning activities, alignment of small business needs with intermediary offerings, continued monitoring of intermediary activities by the initiator, and strong leadership for occupational safety and health among intermediaries. Common challenges were a lack of resources among intermediaries, lack of opportunities for in-person meetings between intermediaries and the initiator, and balancing the exchanges in the initiator–intermediary–small business relationships. The model offers some encouragement that initiator organizations can contribute to sustainable OSH assistance for small firms, but they must depend on intermediaries who have compatible interests in smaller businesses and they must work to understand the small business social system. PMID:26300585

  5. Utilization of satellite cloud information to diagnose the energy state and transformations in extratropical cyclones

    NASA Technical Reports Server (NTRS)

    Smith, P. J.

    1985-01-01

    An important component of the research was a continuing investigation of the impact of latent heat release on extratropical cyclone development. Previous efforts to accomplish this task have focused on the energy balance and the vertical motion field of an intense winter extratropical cyclone over the United States. During this fiscal year researchers turned their attention to a more fundamental diagnostic variable, the height tendency. Central to this effort is the use of a modified form of the quasi-geostrophic height tendency equation, in which geostrophic wind components have been replaced by observed winds and a latent heat release term has been added. This methodology was adopted to produce a simple diagnostic model which retains the essential mechanisms of quasi-geostrophic theory but more faithfully describes observed wave development when the Rossby number approaches or exceeds 0.5. Results to date indicate that the new model yields height tendencies that are superior to those obtained from the quasi-geostrophic formulation and are sufficiently close to the observed tendencies to be a useful tool for diagnosing the principal large-scale forcing mechanisms in the 700-300 mb layer. Of the three forcing terms included in the new model, vorticity advection is in general dominant. The most persistent challenge to this dominance is made by thermal advection. On the whole, latent heat release plays a secondary role. Finally, during the rapid intensification observed for this cyclone, all three processes complement each other in forcing height falls.

  6. Refocusing International Astronomy Education Research Using a Cognitive Focus

    NASA Astrophysics Data System (ADS)

    Slater, Timothy F.; Slater, Stephanie J.

    2015-08-01

    For over 40 years, the international astronomy education community has given its attention to cataloging the substantial body of "misconceptions" in individuals' thinking about astronomy, and to addressing the consequences of those misconceptions in the science classroom. Despite the tremendous amount of effort given to researching and disseminating information related to misconceptions, and the development of a theory of conceptual change to mitigate misconceptions, progress continues to be less than satisfying. An analysis of the literature and our own research has motivated the CAPER Center for Astronomy & Physics Education Research to advance a new model that allows us to operate on students' astronomical learning difficulties in a more fruitful manner. Previously, much of the field's work binned erroneous student thinking into a single construct, and on that basis curriculum developers and instructors addressed student misconceptions with a single instructional strategy. In contrast, this model suggests that "misconceptions" are a mixture of at least four learning barriers: incorrect factual information, inappropriately applied mental algorithms (e.g., phenomenological primitives), insufficient cognitive structures (e.g., spatial reasoning), and affective/emotional difficulties. Each of these types of barriers should be addressed with an appropriately designed instructional strategy. Initial applications of this model to learning problems in astronomy and the space sciences have been fruitful, suggesting that an effort toward categorizing persistent learning difficulties in astronomy beyond the level of "misconceptions" may allow our community to craft tailored and more effective learning experiences for our students and the general public.

  7. A systems biology approach to reconcile metabolic network models with application to Synechocystis sp. PCC 6803 for biofuel production.

    PubMed

    Mohammadi, Reza; Fallah-Mehrabadi, Jalil; Bidkhori, Gholamreza; Zahiri, Javad; Javad Niroomand, Mohammad; Masoudi-Nejad, Ali

    2016-07-19

    Production of biofuels has been one of the promising efforts in biotechnology in the past few decades. The perspective of these efforts can be reduction of increasing demands for fossil fuels and consequently reducing environmental pollution. Nonetheless, most previous approaches did not succeed in obviating many big challenges in this way. In recent years systems biology with the help of microorganisms has been trying to overcome these challenges. Unicellular cyanobacteria are widespread phototrophic microorganisms that have capabilities such as consuming solar energy and atmospheric carbon dioxide for growth and thus can be a suitable chassis for the production of valuable organic materials such as biofuels. For the ultimate use of metabolic potential of cyanobacteria, it is necessary to understand the reactions that are taking place inside the metabolic network of these microorganisms. In this study, we developed a Java tool to reconstruct an integrated metabolic network of a cyanobacterium (Synechocystis sp. PCC 6803). We merged three existing reconstructed metabolic networks of this microorganism. Then, after modeling for biofuel production, the results from flux balance analysis (FBA) disclosed an increased yield in biofuel production for ethanol, isobutanol, 3-methyl-1-butanol, 2-methyl-1-butanol, and propanol. The numbers of blocked reactions were also decreased for 2-methyl-1-butanol production. In addition, coverage of the metabolic network in terms of the number of metabolites and reactions was increased in the new obtained model.
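
    Flux balance analysis of the kind applied here reduces to a linear program: maximize a target flux subject to the steady-state mass balance S·v = 0 and flux bounds. A minimal sketch on a made-up three-reaction chain, not the reconstructed Synechocystis network:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: uptake -> A, A -> B, B -> secreted product.
    # Rows are metabolites (A, B); columns are reactions (v1, v2, v3).
    S = np.array([[1.0, -1.0,  0.0],
                  [0.0,  1.0, -1.0]])

    # Flux bounds: uptake capped at 10 units, internal fluxes effectively free
    bounds = [(0, 10), (0, 1000), (0, 1000)]

    # FBA: maximize product flux v3 under S @ v = 0 (linprog minimizes, so -v3)
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)  # optimal flux distribution, limited by the uptake bound
    ```

    In the chain above, steady state forces v1 = v2 = v3, so the secretion flux is pinned to the uptake cap of 10; genome-scale models differ only in the size of S and the objective column.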

  8. A reduced successive quadratic programming strategy for errors-in-variables estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.

    Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors-in-variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and the degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear convergence properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
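
    The errors-in-variables objective, a weighted least-squares fit with noise in both the responses and the regressors, can be illustrated with orthogonal distance regression. This sketch uses scipy.odr as a stand-in for the paper's tailored reduced-SQP strategy; the data are simulated:

    ```python
    import numpy as np
    from scipy import odr

    rng = np.random.default_rng(1)

    # True linear model y = 2x + 1, with measurement error in BOTH x and y
    x_true = np.linspace(0.0, 5.0, 40)
    x_obs = x_true + rng.normal(0.0, 0.1, x_true.size)
    y_obs = 2.0 * x_true + 1.0 + rng.normal(0.0, 0.1, x_true.size)

    # EVM-style fit: weighted orthogonal distances instead of vertical ones
    model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
    data = odr.RealData(x_obs, y_obs, sx=0.1, sy=0.1)
    out = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
    slope, intercept = out.beta
    ```

    Like the paper's formulation, the fitted x-values are nuisance variables that the solver eliminates internally, leaving only the model parameters in the outer optimization.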

  9. Application of a model for delivering occupational safety and health to smaller businesses: Case studies from the US.

    PubMed

    Cunningham, Thomas R; Sinclair, Raymond

    2015-01-01

    Smaller firms are the majority in every industry in the US, and they endure a greater burden of occupational injuries, illnesses, and fatalities than larger firms. Smaller firms often lack the necessary resources for effective occupational safety and health activities, and many require external assistance with safety and health programming. Based on previous work by researchers in Europe and New Zealand, NIOSH researchers developed a model for occupational safety and health intervention in small businesses. This model was evaluated with several intermediary organizations. Four case studies which describe efforts to reach small businesses with occupational safety and health assistance include the following: trenching safety training for construction, basic compliance and hazard recognition for general industry, expanded safety and health training for restaurants, and fall prevention and respirator training for boat repair contractors. Successful efforts included participation by the initiator among the intermediaries' planning activities, alignment of small business needs with intermediary offerings, continued monitoring of intermediary activities by the initiator, and strong leadership for occupational safety and health among intermediaries. Common challenges were a lack of resources among intermediaries, lack of opportunities for in-person meetings between intermediaries and the initiator, and balancing the exchanges in the initiator-intermediary-small business relationships. The model offers some encouragement that initiator organizations can contribute to sustainable OSH assistance for small firms, but they must depend on intermediaries who have compatible interests in smaller businesses and they must work to understand the small business social system.

  10. The Surge, Wave, and Tide Hydrodynamics (SWaTH) network of the U.S. Geological Survey—Past and future implementation of storm-response monitoring, data collection, and data delivery

    USGS Publications Warehouse

    Verdi, Richard J.; Lotspeich, R. Russell; Robbins, Jeanne C.; Busciolano, Ronald J.; Mullaney, John R.; Massey, Andrew J.; Banks, William S.; Roland, Mark A.; Jenter, Harry L.; Peppler, Marie C.; Suro, Thomas P.; Schubert, Christopher E.; Nardi, Mark R.

    2017-06-20

    After Hurricane Sandy made landfall along the northeastern Atlantic coast of the United States on October 29, 2012, the U.S. Geological Survey (USGS) carried out scientific investigations to assist with protecting coastal communities and resources from future flooding. The work included development and implementation of the Surge, Wave, and Tide Hydrodynamics (SWaTH) network consisting of more than 900 monitoring stations. The SWaTH network was designed to greatly improve the collection and timely dissemination of information related to storm surge and coastal flooding. The network provides a significant enhancement to USGS data-collection capabilities in the region impacted by Hurricane Sandy and represents a new strategy for observing and monitoring coastal storms, which should result in improved understanding, prediction, and warning of storm-surge impacts and lead to more resilient coastal communities. As innovative as it is, SWaTH evolved from previous USGS efforts to collect storm-surge data needed by others to improve storm-surge modeling, warning, and mitigation. This report discusses the development and implementation of the SWaTH network, and some of the regional stories associated with the landfall of Hurricane Sandy, as well as some previous events that informed the SWaTH development effort. Additional discussions on the mechanics of inundation and how the USGS is working with partners to help protect coastal communities from future storm impacts are also included.

  11. Genome-wide survey of single-nucleotide polymorphisms reveals fine-scale population structure and signs of selection in the threatened Caribbean elkhorn coral, Acropora palmata

    PubMed Central

    2017-01-01

    The advent of next-generation sequencing tools has made it possible to conduct fine-scale surveys of population differentiation and genome-wide scans for signatures of selection in non-model organisms. Such surveys are of particular importance in sharply declining coral species, since knowledge of population boundaries and signs of local adaptation can inform restoration and conservation efforts. Here, we use genome-wide surveys of single-nucleotide polymorphisms in the threatened Caribbean elkhorn coral, Acropora palmata, to reveal fine-scale population structure and infer the major barrier to gene flow that separates the eastern and western Caribbean populations between the Bahamas and Puerto Rico. The exact location of this break had been subject to discussion because two previous studies based on microsatellite data had come to differing conclusions. We investigate this contradiction by analyzing an extended set of 11 microsatellite markers, including the five previously employed, and discovered that one of the original microsatellite loci is apparently under selection. Exclusion of this locus reconciles the results from the SNP and the microsatellite datasets. Scans for outlier loci in the SNP data detected 13 candidate loci under positive selection; however, there was no correlation between available environmental parameters and genetic distance. Together, these results suggest that reef restoration efforts should use local sources and utilize existing functional variation among geographic regions in ex situ crossing experiments to improve stress resistance of this species. PMID:29181279

  12. Overview of NASA MSFC and UAH Space Weather Modeling and Data Efforts

    NASA Technical Reports Server (NTRS)

    Parker, Linda Neergaard

    2016-01-01

    Marshall Space Flight Center, along with its industry and academia neighbors, has a long history of space environment model development and testing. Space weather efforts include research, testing, model development, environment definition, anomaly investigation, and operational support. This presentation will highlight a few of the current space weather activities being performed at Marshall and through collaborative efforts with University of Alabama in Huntsville scientists.

  13. A Spatial Modeling Approach to Predicting the Secondary Spread of Invasive Species Due to Ballast Water Discharge

    PubMed Central

    Sieracki, Jennifer L.; Bossenbroek, Jonathan M.; Chadderton, W. Lindsay

    2014-01-01

    Ballast water in ships is an important contributor to the secondary spread of invasive species in the Laurentian Great Lakes. Here, we use a model previously created to determine the role ballast water management has played in the secondary spread of viral hemorrhagic septicemia virus (VHSV) to identify the future spread of one current and two potential invasive species in the Great Lakes, the Eurasian Ruffe (Gymnocephalus cernuus), killer shrimp (Dikerogammarus villosus), and golden mussel (Limnoperna fortunei), respectively. Model predictions for Eurasian Ruffe have been used to direct surveillance efforts within the Great Lakes and DNA evidence of ruffe presence was recently reported from one of three high risk port localities identified by our model. Predictions made for killer shrimp and golden mussel suggest that these two species have the potential to become rapidly widespread if introduced to the Great Lakes, reinforcing the need for proactive ballast water management. The model used here is flexible enough to be applied to any species capable of being spread by ballast water in marine or freshwater ecosystems. PMID:25470822

  14. Oxidative DNA damage background estimated by a system model of base excision repair

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, B A; Wilson, III, D M

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
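    The equilibrium the abstract describes, between a roughly constant endogenous damage rate and capacity-limited repair, can be sketched as a one-compartment balance. The rate constants below are hypothetical placeholders chosen only to land in the cited "at most 100 lesions per cell" range, not values from the paper's pathway model:

```python
import numpy as np

# Hypothetical rate constants (illustrative, not the paper's fitted values):
k_dam = 500.0   # 8-oxoguanine lesions created per cell per day
k_rep = 10.0    # first-order repair rate constant, per day

# Analytic steady state of dL/dt = k_dam - k_rep * L
L_star = k_dam / k_rep   # background lesion burden per cell

# Forward-Euler integration as a numerical check of the equilibrium
L, dt = 0.0, 0.001       # start undamaged; 0.001-day time step
for _ in range(20_000):  # integrate 20 days, long past the transient
    L += dt * (k_dam - k_rep * L)

print(L_star, L)
```

    The point of the sketch is only that the background level is set by the ratio of damage to repair rates, which is why independent estimates of either rate shift the predicted equilibrium proportionally.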

  15. Shock, release and reshock of PBX 9502: experiments and modeling

    NASA Astrophysics Data System (ADS)

    Aslam, Tariq; Gustavsen, Richard; Whitworth, Nicholas; Menikoff, Ralph; Tarver, Craig; Handley, Caroline; Bartram, Brian

    2017-06-01

    We examine shock, release and reshock into the tri-amino-tri-nitro-benzene (TATB) based explosive PBX 9502 (95% TATB, 5% Kel-F 800) from both an experimental and modeling point of view. The experiments are performed on the 2-stage light gas gun at Los Alamos National Laboratory and are composed of a multi-layered impactor impinging on PBX 9502 backed by a polymethylmethacrylate window. The objective is to initially shock the PBX 9502 in the 7 GPa range (too weak to start significant reaction), then allow a rarefaction fan to release the material to a lower pressure/temperature state. Following this release, a strong second shock will recompress the PBX. If the rarefaction fan releases the PBX to a very low pressure, the ensuing second shock can increase the entropy and temperature substantially more than in previous double-shock experiments without an intermediate release. Predictions from a variety of reactive burn models (AWSD, CREST, Ignition and Growth, SURF) demonstrate significantly different behaviors and thus the experiments are an excellent validation test of the models, and may suggest improvements for subsequent modeling efforts.

  16. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    PubMed

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become commonplace over the past decade, as such measurements yield significantly more information about emission-formation conditions than regulated bag measurements. A challenging issue for researchers is accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
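    The core idea, a transport delay that varies smoothly with exhaust flow rate and is characterized by two coefficients fit by least squares, can be sketched on synthetic data. The delay form d(q) = a + b/q and every numeric value below are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def transport_delay(flow, a, b):
    # Hypothetical smooth delay law: lag (in samples) shrinks as
    # exhaust flow rate q grows, d(q) = a + b / q.
    return a + b / flow

# Synthetic 1 Hz data: engine covariate x and a tailpipe signal y that
# reflects x shifted by the flow-dependent transport delay.
rng = np.random.default_rng(0)
n = 500
flow = 1.0 + rng.random(n)            # exhaust flow rate (arbitrary units)
x = np.sin(np.linspace(0.0, 20.0, n)) # engine operating covariate
t = np.arange(n)
true_a, true_b = 2.0, 3.0
y = np.interp(t - transport_delay(flow, true_a, true_b), t, x)

def sse(a, b):
    # Sum of squared errors after realigning x by the candidate delay law
    pred = np.interp(t - transport_delay(flow, a, b), t, x)
    return float(np.sum((y - pred) ** 2))

# Least-squares estimate of the coefficient pair by coarse grid search
grid = np.linspace(0.5, 5.0, 46)      # step 0.1 over both coefficients
a_hat, b_hat = min(((a, b) for a in grid for b in grid),
                   key=lambda p: sse(*p))
print(a_hat, b_hat)
```

    With noise-free synthetic data the grid search recovers the generating pair; the paper's actual procedure additionally embeds the delay inside a smooth transition regression relating emissions to engine covariates.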

  17. Hiding the system from the user: Moving from complex mental models to elegant metaphors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis W. Nielsen; David J. Bruemmer

    2007-08-01

    In previous work, increased complexity of robot behaviors and the accompanying interface design often led to operator confusion and/or a fight for control between the robot and operator. We believe the reason for the conflict was that the design of the interface and interactions presented too much of the underlying robot design model to the operator. Since the design model includes the implementation of sensors, behaviors, and sophisticated algorithms, the result was that the operator's cognitive efforts were focused on understanding the design of the robot system as opposed to focusing on the task at hand. This paper illustrates how this very problem emerged at the INL and how the implementation of new metaphors for interaction has allowed us to hide the design model from the user and allow the user to focus more on the task at hand. Supporting the user's focus on the task rather than on the design model allows increased use of the system and significant performance improvement in a search task with novice users.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Li, Tingwen

    In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of the unresolved details in the coarser mesh, allowing simulations with manageable computational effort. Two previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical) on the adsorber's hydrodynamics and CO2 capture performance are then examined. The simulation results are subsequently compared and contrasted with those predicted by a one-dimensional three-region process model.

  19. Predicting Homework Effort: Support for a Domain-Specific, Multilevel Homework Model

    ERIC Educational Resources Information Center

    Trautwein, Ulrich; Ludtke, Oliver; Schnyder, Inge; Niggli, Alois

    2006-01-01

    According to the domain-specific, multilevel homework model proposed in the present study, students' homework effort is influenced by expectancy and value beliefs, homework characteristics, parental homework behavior, and conscientiousness. The authors used structural equation modeling and hierarchical linear modeling analyses to test the model in…

  20. Aerosol-Radiation-Cloud Interactions in the South-East Atlantic: Model-Relevant Observations and the Beneficiary Modeling Efforts in the Realm of the EVS-2 Project ORACLES

    NASA Technical Reports Server (NTRS)

    Redemann, Jens

    2018-01-01

    Globally, aerosols remain a major contributor to uncertainties in assessments of anthropogenically-induced changes to the Earth climate system, despite concerted efforts using satellite and suborbital observations and increasingly sophisticated models. The quantification of direct and indirect aerosol radiative effects, as well as cloud adjustments thereto, even at regional scales, continues to elude our capabilities. Some of our limitations are due to insufficient sampling and accuracy of the relevant observables, under an appropriate range of conditions to provide useful constraints for modeling efforts at various climate scales. In this talk, I will describe (1) the efforts of our group at NASA Ames to develop new airborne instrumentation to address some of the data insufficiencies mentioned above; (2) the efforts by the EVS-2 ORACLES project to address aerosol-cloud-climate interactions in the SE Atlantic and (3) time permitting, recent results from a synergistic use of A-Train aerosol data to test climate model simulations of present-day direct radiative effects in some of the AEROCOM phase II global climate models.

Top