Science.gov

Sample records for adjusted model including

  1. Carburetion system including an adjustable throttle linkage

    SciTech Connect

    Du Bois, C.G.; Falig, J.D.

    1986-03-25

    A throttle linkage assembly is described comprising a throttle shaft rotatable about a throttle shaft axis between an idle position and a wide open throttle position, a throttle plate fixed on the throttle shaft, a driven lever pivotable about the throttle shaft axis between various angles relative to the throttle plate, and means for fixing the driven lever at a selected angle relative to the throttle plate, the fixing means comprising an adjustment lever fixedly connected to the throttle shaft adjacent the driven lever and means for releasably securing the driven lever to the adjustment lever.

  2. Political violence and child adjustment in Northern Ireland: Testing pathways in a social-ecological model including single-and two-parent families.

    PubMed

    Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-07-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed.

  3. Including Magnetostriction in Micromagnetic Models

    NASA Astrophysics Data System (ADS)

    Conbhuí, Pádraig Ó.; Williams, Wyn; Fabian, Karl; Nagy, Lesleis

    2016-04-01

    The magnetic anomalies that identify crustal spreading are predominantly recorded by basalts formed at the mid-ocean ridges, whose magnetic signals are dominated by iron-titanium-oxides (Fe3-xTixO4), so-called "titanomagnetites", of which the Fe2.4Ti0.6O4 (TM60) phase is the most common. With sufficient quantities of titanium present, these minerals exhibit strong magnetostriction. To date, models of these grains in the pseudo-single domain (PSD) range have failed to accurately account for this effect. In particular, a popular analytic treatment provided by Kittel (1949) for describing the magnetostrictive energy as an effective increase of the anisotropy constant can produce unphysical strains for non-uniform magnetizations. I will present a rigorous approach based on work by Brown (1966) and by Kröner (1958) for including magnetostriction in micromagnetic codes which is suitable for modelling hysteresis loops and finding remanent states in the PSD regime. Preliminary results suggest the more rigorously defined micromagnetic models exhibit higher coercivities and extended single domain ranges when compared to more simplistic approaches.
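
    As a point of reference for the "effective anisotropy" shortcut criticized above, the Kittel-style treatment folds the magnetoelastic energy of an isotropically magnetostrictive grain under a uniform stress into the anisotropy constant. A minimal sketch of that approximation (symbols are the usual ones; this is the simplistic approach, not the Brown/Kröner formulation the abstract advocates):

    ```latex
    % Kittel-style effective-anisotropy approximation (uniform magnetization assumed)
    E_{\mathrm{me}}(\theta) \;=\; \tfrac{3}{2}\,\lambda_{s}\,\sigma\,\sin^{2}\theta
    \quad\Longrightarrow\quad
    K_{\mathrm{eff}} \;=\; K_{u} + \tfrac{3}{2}\,\lambda_{s}\,\sigma
    ```

    Here λs is the isotropic magnetostriction constant, σ the stress, and θ the angle between the magnetization and the stress axis; because the substitution assumes a uniform magnetization, it can produce the unphysical strains in non-uniform (PSD) states noted in the abstract.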

  4. 76 FR 32815 - Medicaid Program; Payment Adjustment for Provider-Preventable Conditions Including Health Care...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-06

    ... Conditions Including Health Care-Acquired Conditions; Final Rule #0;#0;Federal Register / Vol. 76 , No. 108... Adjustment for Provider-Preventable Conditions Including Health Care-Acquired Conditions AGENCY: Centers for... section 2702 of the Patient Protection and Affordable Care Act which directs the Secretary of Health...

  5. 76 FR 9283 - Medicaid Program; Payment Adjustment for Provider-Preventable Conditions Including Health Care...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... Medicaid Program; Payment Adjustment for Provider-Preventable Conditions Including Health Care-Acquired... amounts expended for providing medical assistance for health care-acquired conditions. It would also... Federal financial participation FY Fiscal year HAC Hospital-acquired condition HCAC Health...

  6. Risk-Adjusted Models for Adverse Obstetric Outcomes and Variation in Risk Adjusted Outcomes Across Hospitals

    PubMed Central

    Bailit, Jennifer L.; Grobman, William A.; Rice, Madeline Murguia; Spong, Catherine Y.; Wapner, Ronald J.; Varner, Michael W.; Thorp, John M.; Leveno, Kenneth J.; Caritis, Steve N.; Shubert, Phillip J.; Tita, Alan T. N.; Saade, George; Sorokin, Yoram; Rouse, Dwight J.; Blackwell, Sean C.; Tolosa, Jorge E.; Van Dorsten, J. Peter

    2014-01-01

    Objective: Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take pre-existing patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for five obstetric outcomes and assess hospital performance across these outcomes. Study Design: A cohort study of 115,502 women and their neonates born in 25 hospitals in the United States between March 2008 and February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Results: Venous thromboembolism occurred too infrequently (0.03%, 95% CI 0.02%–0.04%) for meaningful assessment. Other outcomes occurred frequently enough for assessment (postpartum hemorrhage 2.29% (95% CI 2.20–2.38), peripartum infection 5.06% (95% CI 4.93–5.19), severe perineal laceration at spontaneous vaginal delivery 2.16% (95% CI 2.06–2.27), neonatal composite 2.73% (95% CI 2.63–2.84)). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (by as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Conclusions: Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. PMID:23891630
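
    The ranking exercise described above is essentially indirect standardization: fit a patient-level risk model, then compare each hospital's observed event count with the count expected from its case mix. A minimal sketch of that generic workflow (synthetic data and illustrative covariates only, not the study's actual model or variables):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20000
    df = pd.DataFrame({
        "hospital": rng.integers(0, 25, n),       # 25 hospitals, as in the study
        "age": rng.normal(29, 6, n),              # illustrative patient covariates
        "prior_cesarean": rng.integers(0, 2, n),
    })
    logit = -4.0 + 0.03 * (df["age"] - 29) + 0.5 * df["prior_cesarean"]
    df["event"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # e.g. peripartum infection

    # Patient-level risk model built from case-mix variables only (no hospital terms)
    X = df[["age", "prior_cesarean"]]
    model = LogisticRegression().fit(X, df["event"])
    df["expected"] = model.predict_proba(X)[:, 1]

    # Risk-adjusted comparison: observed-to-expected event ratio per hospital
    oe = df.groupby("hospital").apply(lambda g: g["event"].sum() / g["expected"].sum())
    print(oe.sort_values().round(2))   # adjusted ranking, to be compared with raw rates
    ```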

  7. SEEPAGE MODEL FOR PA INCLUDING DRIFT COLLAPSE

    SciTech Connect

    C. Tsang

    2004-09-22

    The purpose of this report is to document the predictions and analyses performed using the seepage model for performance assessment (SMPA) for both the Topopah Spring middle nonlithophysal (Tptpmn) and lower lithophysal (Tptpll) lithostratigraphic units at Yucca Mountain, Nevada. Look-up tables of seepage flow rates into a drift (and their uncertainty) are generated by performing numerical simulations with the seepage model for many combinations of the three most important seepage-relevant parameters: the fracture permeability, the capillary-strength parameter 1/α, and the percolation flux. The percolation flux values chosen take into account flow focusing effects, which are evaluated based on a flow-focusing model. Moreover, multiple realizations of the underlying stochastic permeability field are conducted. Selected sensitivity studies are performed, including the effects of an alternative drift geometry representing a partially collapsed drift from an independent drift-degradation analysis (BSC 2004 [DIRS 166107]). The intended purpose of the seepage model is to provide results of drift-scale seepage rates under a series of parameters and scenarios in support of the Total System Performance Assessment for License Application (TSPA-LA). The SMPA is intended for the evaluation of drift-scale seepage rates under the full range of parameter values for three parameters found to be key (fracture permeability, the van Genuchten 1/α parameter, and percolation flux) and drift degradation shape scenarios in support of the TSPA-LA during the period of compliance for postclosure performance [Technical Work Plan for: Performance Assessment Unsaturated Zone (BSC 2002 [DIRS 160819], Section I-4-2-1)]. The flow-focusing model in the Topopah Spring welded (TSw) unit is intended to provide an estimate of flow focusing factors (FFFs) that (1) bridge the gap between the mountain-scale and drift-scale models, and (2) account for variability in local percolation flux due to
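
    The look-up-table construction described above can be pictured as a sweep over the three seepage-relevant parameters with several realizations of the stochastic permeability field at each grid point; in the sketch below, run_seepage_simulation is a hypothetical stand-in for one drift-scale numerical simulation and all grid values are placeholders rather than the report's:

    ```python
    import itertools
    import numpy as np

    # Illustrative parameter grids (placeholders, not the report's values)
    log10_permeability = [-13.0, -12.0, -11.0]    # fracture permeability [log10 m^2]
    capillary_strength = [400.0, 600.0, 800.0]    # 1/alpha [Pa]
    percolation_flux = [1.0, 5.0, 25.0]           # [mm/yr], after flow focusing

    def run_seepage_simulation(k, alpha_inv, flux, seed):
        """Hypothetical stand-in for one drift-scale seepage simulation."""
        rng = np.random.default_rng(seed)
        return max(0.0, flux * rng.uniform(0.0, 0.2))   # fake seepage rate [mm/yr]

    lookup = {}
    for k, a, q in itertools.product(log10_permeability, capillary_strength, percolation_flux):
        # several realizations of the stochastic permeability field per grid point
        rates = [run_seepage_simulation(k, a, q, seed) for seed in range(20)]
        lookup[(k, a, q)] = (np.mean(rates), np.std(rates))   # seepage rate and its uncertainty
    ```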

  8. Models of bovine babesiosis including juvenile cattle.

    PubMed

    Saad-Roy, C M; Shuai, Zhisheng; van den Driessche, P

    2015-03-01

    Bovine Babesiosis in cattle is caused by the transmission of protozoa of Babesia spp. by ticks as vectors. Juvenile cattle (<9 months of age) have resistance to Bovine Babesiosis, rarely show symptoms, and acquire immunity upon recovery. Susceptibility to the disease varies between breeds of cattle. Models of the dynamics of Bovine Babesiosis transmitted by the cattle tick that include these factors are formulated as systems of ordinary differential equations. Basic reproduction numbers are calculated, and it is proved that if these numbers are below the threshold value of one, then Bovine Babesiosis dies out. However, above the threshold number of one, the disease may approach an endemic state. In this case, control measures are suggested by determining target reproduction numbers. The percentage of a particular population (for example, the adult bovine population) needed to be controlled to eradicate the disease is evaluated numerically using Columbia data from the literature. PMID:25715822
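
    As a sketch of the threshold behaviour described above, even a stripped-down host-vector model (adult cattle and ticks only, so without the juvenile class that is the paper's contribution) shows the dichotomy at R0 = 1; the rates below are illustrative, not the paper's fitted values:

    ```python
    import numpy as np
    from scipy.integrate import odeint

    b_v   = 0.02   # transmission rate, tick -> bovine (per day, illustrative)
    b_b   = 0.02   # transmission rate, bovine -> tick
    gamma = 0.01   # bovine recovery rate
    mu_v  = 0.03   # tick mortality (turnover) rate

    def rhs(y, t):
        i_b, i_v = y    # infected fractions of bovines and ticks
        return [b_v * i_v * (1 - i_b) - gamma * i_b,
                b_b * i_b * (1 - i_v) - mu_v * i_v]

    R0 = np.sqrt(b_v * b_b / (gamma * mu_v))    # next-generation matrix for this toy model
    traj = odeint(rhs, [0.01, 0.01], np.linspace(0, 2000, 400))
    print(f"R0 = {R0:.2f}, final infected bovine fraction = {traj[-1, 0]:.3f}")
    # With these rates R0 > 1 and the infection approaches an endemic level; lowering
    # b_v or b_b until R0 < 1 drives both infected fractions to zero.
    ```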

  9. An interface model for dosage adjustment connects hematotoxicity to pharmacokinetics.

    PubMed

    Meille, C; Iliadis, A; Barbolosi, D; Frances, N; Freyer, G

    2008-12-01

    When modeling is required to describe pharmacokinetics and pharmacodynamics simultaneously, it is difficult to link time-concentration profiles and drug effects. When patients are under chemotherapy, despite the huge amount of blood monitoring numerations, there is a lack of exposure variables to describe hematotoxicity linked with the circulating drug blood levels. We developed an interface model that transforms circulating pharmacokinetic concentrations to adequate exposures, destined to be inputs of the pharmacodynamic process. The model is materialized by a nonlinear differential equation involving three parameters. The relevance of the interface model for dosage adjustment is illustrated by numerous simulations. In particular, the interface model is incorporated into a complex system including pharmacokinetics and neutropenia induced by docetaxel and by cisplatin. Emphasis is placed on the sensitivity of neutropenia with respect to the variations of the drug amount. This complex system including pharmacokinetic, interface, and pharmacodynamic hematotoxicity models is an interesting tool for analysis of hematotoxicity induced by anticancer agents. The model could be a new basis for further improvements aimed at incorporating new experimental features. PMID:19107581

  10. An Integrated Biochemistry Laboratory, Including Molecular Modeling

    NASA Astrophysics Data System (ADS)

    Wolfson, Adele J.; Hall, Mona L.; Branham, Thomas R.

    1996-11-01

    ) experience with methods of protein purification; (iii) incorporation of appropriate controls into experiments; (iv) use of basic statistics in data analysis; (v) writing papers and grant proposals in accepted scientific style; (vi) peer review; (vii) oral presentation of results and proposals; and (viii) introduction to molecular modeling. Figure 1 illustrates the modular nature of the lab curriculum. Elements from each of the exercises can be separated and treated as stand-alone exercises, or combined into short or long projects. We have been able to offer the opportunity to use sophisticated molecular modeling in the final module through funding from an NSF-ILI grant. However, many of the benefits of the research proposal can be achieved with other computer programs, or even by literature survey alone. Figure 1.Design of project-based biochemistry laboratory. Modules (projects, or portions of projects) are indicated as boxes. Each of these can be treated independently, or used as part of a larger project. Solid lines indicate some suggested paths from one module to the next. The skills and knowledge required for protein purification and design are developed in three units: (i) an introduction to critical assays needed to monitor degree of purification, including an evaluation of assay parameters; (ii) partial purification by ion-exchange techniques; and (iii) preparation of a grant proposal on protein design by mutagenesis. Brief descriptions of each of these units follow, with experimental details of each project at the end of this paper. Assays for Lysozyme Activity and Protein Concentration (4 weeks) The assays mastered during the first unit are a necessary tool for determining the purity of the enzyme during the second unit on purification by ion exchange. These assays allow an introduction to the concept of specific activity (units of enzyme activity per milligram of total protein) as a measure of purity. In this first sequence, students learn a turbidimetric assay
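
    The purity measure introduced here reduces to a single ratio that students track across the purification steps; for instance, a fraction assaying at 1,200 units of lysozyme activity in 10 mg of total protein has a specific activity of 120 units/mg, and that number should rise after each successful ion-exchange step (the numbers are invented for illustration):

    ```latex
    \text{specific activity} \;=\; \frac{\text{total enzyme activity (units)}}{\text{total protein (mg)}},
    \qquad \text{e.g.}\;\; \frac{1200\ \text{units}}{10\ \text{mg}} = 120\ \text{units/mg}
    ```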

  12. Seepage Model for PA Including Drift Collapse

    SciTech Connect

    G. Li; C. Tsang

    2000-12-20

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in niches and in the cross drift to

  13. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  14. Including eddies in global ocean models

    NASA Astrophysics Data System (ADS)

    Semtner, Albert J.; Chervin, Robert M.

    The ocean is a turbulent fluid that is driven by winds and by surface exchanges of heat and moisture. It is as important as the atmosphere in governing climate through heat distribution, but so little is known about the ocean that it remains a “final frontier” on the face of the Earth. Many ocean currents are truly global in extent, such as the Antarctic Circumpolar Current and the “conveyor belt” that connects the North Atlantic and North Pacific oceans by flows around the southern tips of Africa and South America. It has long been a dream of some oceanographers to supplement the very limited observational knowledge by reconstructing the currents of the world ocean from the first principles of physics on a computer. However, until very recently, the prospect of doing this was thwarted by the fact that fluctuating currents known as “mesoscale eddies” could not be explicitly included in the calculation.

  15. Towards accurate observation and modelling of Antarctic glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    King, M.

    2012-04-01

    The response of the solid Earth to glacial mass changes, known as glacial isostatic adjustment (GIA), has received renewed attention in the recent decade thanks to the Gravity Recovery and Climate Experiment (GRACE) satellite mission. GRACE measures Earth's gravity field every 30 days, but cannot partition surface mass changes, such as present-day cryospheric or hydrological change, from changes within the solid Earth, notably due to GIA. If GIA cannot be accurately modelled in a particular region the accuracy of GRACE estimates of ice mass balance for that region is compromised. This lecture will focus on Antarctica, where models of GIA are hugely uncertain due to weak constraints on ice loading history and Earth structure. Over the last years, however, there has been a step-change in our ability to measure GIA uplift with the Global Positioning System (GPS), including widespread deployments of permanent GPS receivers as part of the International Polar Year (IPY) POLENET project. I will particularly focus on the Antarctic GPS velocity field and the confounding effect of elastic rebound due to present-day ice mass changes, and then describe the construction and calibration of a new Antarctic GIA model for application to GRACE data, as well as highlighting areas where further critical developments are required.
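
    The partitioning problem described above is often summarized as a simple mass budget: the trend GRACE senses over Antarctica is the sum of present-day ice (and hydrology) change and the solid-Earth GIA signal, so the ice estimate inherits the GIA model error more or less in full. Schematically, with independent errors assumed:

    ```latex
    \dot{M}_{\mathrm{GRACE}} = \dot{M}_{\mathrm{ice}} + \dot{M}_{\mathrm{GIA}}
    \quad\Longrightarrow\quad
    \dot{M}_{\mathrm{ice}} = \dot{M}_{\mathrm{GRACE}} - \dot{M}_{\mathrm{GIA}}^{\,\mathrm{model}},
    \qquad
    \sigma^{2}_{\mathrm{ice}} \approx \sigma^{2}_{\mathrm{GRACE}} + \sigma^{2}_{\mathrm{GIA\ model}}
    ```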

  16. Positive Psychology in the Personal Adjustment Course: A Salutogenic Model.

    ERIC Educational Resources Information Center

    Hymel, Glenn M.; Etherton, Joseph L.

    This paper proposes embedding various positive psychology themes in the context of an undergraduate course on the psychology of personal adjustment. The specific positive psychology constructs considered include those of hope, optimism, perseverance, humility, forgiveness, and spirituality. These themes are related to appropriate course content…

  17. Catastrophe, Chaos, and Complexity Models and Psychosocial Adjustment to Disability.

    ERIC Educational Resources Information Center

    Parker, Randall M.; Schaller, James; Hansmann, Sandra

    2003-01-01

    Rehabilitation professionals may unknowingly rely on stereotypes and specious beliefs when dealing with people with disabilities, despite the formulation of theories that suggest new models of the adjustment process. Suggests that Catastrophe, Chaos, and Complexity Theories hold considerable promise in this regard. This article reviews these…

  18. Order Effects in Belief Updating: The Belief-Adjustment Model.

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Einhorn, Hillel J.

    1992-01-01

    A theory of the updating of beliefs over time is presented that explicitly accounts for order-effect phenomena as arising from the interaction of information-processing strategies and task characteristics. The belief-adjustment model is supported by 5 experiments involving 192 adult subjects. (SLD)
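
    For readers unfamiliar with the model, its core is an anchoring-and-adjustment recursion in which the weight given to new evidence depends on the current anchor; the sketch below uses one commonly quoted step-by-step (estimation-mode) form, with sensitivity parameters alpha and beta, and shows how it yields recency-type order effects. Treat it as an illustration of the updating rule rather than a reproduction of the experiments:

    ```python
    def update(anchor, evidence, alpha=0.8, beta=0.8):
        """One belief-adjustment step; anchor and evidence lie in [0, 1]."""
        # Evidence below the anchor is weighted by the anchor itself, evidence
        # above it by the distance to the ceiling (scaled by alpha / beta).
        w = alpha * anchor if evidence <= anchor else beta * (1.0 - anchor)
        return anchor + w * (evidence - anchor)

    def final_belief(sequence, start=0.5):
        s = start
        for e in sequence:
            s = update(s, e)
        return s

    print(final_belief([0.9, 0.3]), final_belief([0.3, 0.9]))
    # The same two pieces of evidence end at different beliefs depending on order
    # (a recency effect), the kind of order-effect phenomenon the model accounts for.
    ```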

  19. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    PubMed

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  20. Adjustment in Mothers of Children with Asperger Syndrome: An Application of the Double ABCX Model of Family Adjustment

    ERIC Educational Resources Information Center

    Pakenham, Kenneth I.; Samios, Christina; Sofronoff, Kate

    2005-01-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between…

  1. On the hydrologic adjustment of climate-model projections: The potential pitfall of potential evapotranspiration

    USGS Publications Warehouse

    Milly, P.C.D.; Dunne, K.A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water. Copyright © 2011, Paper 15-001; 35,952 words, 3 Figures, 0 Animations, 1 Table.

  2. On the Hydrologic Adjustment of Climate-Model Projections: The Potential Pitfall of Potential Evapotranspiration

    USGS Publications Warehouse

    Milly, Paul C.D.; Dunne, Krista A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model’s apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen–Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors’ findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.
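
    The roughly threefold amplification reported above is a generic property of temperature-index formulas: if, as in Jensen-Haise-type methods, potential evapotranspiration is taken proportional to (T − T_x) times incoming solar radiation, its fractional response to warming is ΔT/(T − T_x), which can far exceed the energy-budget response when T sits near the threshold T_x. Schematically (an illustration of the functional form, not the authors' calculation):

    ```latex
    \mathrm{PET} \;\propto\; (T - T_{x})\,R_{s}
    \quad\Longrightarrow\quad
    \frac{\Delta\,\mathrm{PET}}{\mathrm{PET}} \;\approx\; \frac{\Delta T}{\,T - T_{x}\,}
    \qquad (R_{s}\ \text{held fixed})
    ```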

  3. A reassessment of the PRIMO recommendations for adjustments to mid-latitude ionospheric models

    NASA Astrophysics Data System (ADS)

    David, M.; Sojka, J. J.; Schunk, R. W.

    2012-12-01

    In the late 1990s, in response to the realization that ionospheric physical models tended to underestimate the dayside peak F-region electron density (NmF2) by about a factor of 2, a group of modelers convened to find out why. The project was dubbed PRIMO, standing for Problems Relating to Ionospheric Models and Observations. Five ionospheric models were employed in the original study, including the Utah State University Time Dependent Ionospheric Model (TDIM), which is the focus of the present study. No physics-based explanation was put forward for the models' shortcomings, but there was a recommendation that three adjustments be made within the models: (1) inclusion of a Burnside factor of 1.7 for the diffusion coefficients; (2) a change in the O+ branching ratio from 0.38 to 0.25; and (3) upward scaling of the dayside ion production rates to account for ionization by secondary photons. The PRIMO recommendations were dutifully included in our TDIM model at Utah State University, though as time went on, and particularly while modeling the ionosphere during the International Polar Year (2007), it became clear that the PRIMO adjustments sometimes caused the model to produce excessively high dayside electron densities. As the original PRIMO study [Anderson et al., 1998] was based upon model/observation comparison over a very limited set of observations from just one station (Millstone Hill, Massachusetts), we have expanded the range of the study, taking advantage of resources that were not available 12 years ago, most notably the NGDC SPIDR Internet database, and faster computers for running large numbers of simulations with the TDIM model. We look at ionosonde measurements of the peak dayside electron densities at mid-latitudes around the world, across the full range of seasons and solar cycles, as well as levels of geomagnetic activity, in order to determine at which times the PRIMO adjustments should be included in the model, and when it is best not to
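
    The three PRIMO adjustments being switched on and off in this study are simple enough to capture as model settings; a hypothetical configuration sketch (the names are illustrative, not TDIM's actual input parameters) makes the comparison explicit:

    ```python
    # Hypothetical switches for the three PRIMO recommendations; parameter names
    # are illustrative and are not TDIM's real inputs.
    PRIMO_ON = {
        "burnside_factor": 1.7,                   # multiply diffusion coefficients by 1.7
        "o_plus_branching_ratio": 0.25,           # changed from the nominal 0.38
        "scale_secondary_photoionization": True,  # boost dayside ion production rates
    }
    PRIMO_OFF = {
        "burnside_factor": 1.0,
        "o_plus_branching_ratio": 0.38,
        "scale_secondary_photoionization": False,
    }
    # The study reruns TDIM under each configuration across stations, seasons, solar
    # cycle and geomagnetic activity, comparing modeled NmF2 against ionosonde data.
    ```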

  4. Risk-adjusted outcome models for public mental health outpatient programs.

    PubMed Central

    Hendryx, M S; Dyck, D G; Srebnik, D

    1999-01-01

    OBJECTIVE: To develop and test risk-adjustment outcome models in publicly funded mental health outpatient settings. We developed prospective risk models that used demographic and diagnostic variables; client-reported functioning, satisfaction, and quality of life; and case manager clinical ratings to predict subsequent client functional status, health-related quality of life, and satisfaction with services. DATA SOURCES/STUDY SETTING: Data collected from 289 adult clients at five- and ten-month intervals, from six community mental health agencies in Washington state located primarily in suburban and rural areas. Data sources included client self-report, case manager ratings, and management information system data. STUDY DESIGN: Model specifications were tested using prospective linear regression analyses. Models were validated in a separate sample and comparative agency performance examined. PRINCIPAL FINDINGS: Presence of severe diagnoses, substance abuse, client age, and baseline functional status and quality of life were predictive of mental health outcomes. Unadjusted versus risk-adjusted scores resulted in differently ranked agency performance. CONCLUSIONS: Risk-adjusted functional status and patient satisfaction outcome models can be developed for public mental health outpatient programs. Research is needed to improve the predictive accuracy of the outcome models developed in this study, and to develop techniques for use in applied settings. The finding that risk adjustment changes comparative agency performance has important consequences for quality monitoring and improvement. Issues in public mental health risk adjustment are discussed, including static versus dynamic risk models, utilization versus outcome models, choice and timing of measures, and access and quality improvement incentives. PMID:10201857

  5. The HHS-HCC risk adjustment model for individual and small group markets under the Affordable Care Act.

    PubMed

    Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia

    2014-01-01

    Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula.

  6. The HHS-HCC risk adjustment model for individual and small group markets under the Affordable Care Act.

    PubMed

    Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia

    2014-01-01

    Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387
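
    The risk-score calculation mentioned at the end of the abstract is, mechanically, a sum of coefficients over the demographic cell and the hierarchical condition categories an enrollee triggers. A toy sketch with invented coefficients (the real HHS-HCC coefficients are published separately per age group and metal level):

    ```python
    # Toy HCC-style risk score; coefficients are invented for illustration only.
    coefficients = {
        "adult_female_45_54": 0.35,
        "HCC_diabetes_with_complications": 0.42,
        "HCC_congestive_heart_failure": 0.51,
    }

    def risk_score(demographic_cell, hccs):
        return coefficients[demographic_cell] + sum(coefficients[h] for h in hccs)

    print(risk_score("adult_female_45_54", ["HCC_diabetes_with_complications"]))  # 0.77
    # Plan-average risk scores computed this way are the inputs to the risk transfer
    # formula described in the companion paper.
    ```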

  7. Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species

    EPA Science Inventory

    Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...

  8. Mispricing in the Medicare Advantage risk adjustment model.

    PubMed

    Chen, Jing; Ellis, Randall P; Toro, Katherine H; Ash, Arlene S

    2015-01-01

    The Centers for Medicare and Medicaid Services (CMS) implemented hierarchical condition category (HCC) models in 2004 to adjust payments to Medicare Advantage (MA) plans to reflect enrollees' expected health care costs. We use Verisk Health's diagnostic cost group (DxCG) Medicare models, refined "descendants" of the same HCC framework with 189 comprehensive clinical categories available to CMS in 2004, to reveal 2 mispricing errors resulting from CMS' implementation. One comes from ignoring all diagnostic information for "new enrollees" (those with less than 12 months of prior claims). Another comes from continuing to use the simplified models that were originally adopted in response to assertions from some capitated health plans that submitting the claims-like data that facilitate richer models was too burdensome. Even the main CMS model being used in 2014 recognizes only 79 condition categories, excluding many diagnoses and merging conditions with somewhat heterogeneous costs. Omitted conditions are typically lower cost or "vague" and not easily audited from simplified data submissions. In contrast, DxCG Medicare models use a comprehensive, 394-HCC classification system. Applying both models to Medicare's 2010-2011 fee-for-service 5% sample, we find mispricing and lower predictive accuracy for the CMS implementation. For example, in 2010, 13% of beneficiaries had at least 1 higher cost DxCG-recognized condition but no CMS-recognized condition; their 2011 actual costs averaged US$6628, almost one-third more than the CMS model prediction. As MA plans must now supply encounter data, CMS should consider using more refined and comprehensive (DxCG-like) models.

  9. Dynamic hysteresis modeling including skin effect using diffusion equation model

    NASA Astrophysics Data System (ADS)

    Hamada, Souad; Louai, Fatima Zohra; Nait-Said, Nasreddine; Benabou, Abdelkader

    2016-07-01

    An improved dynamic hysteresis model is proposed for the prediction of hysteresis loop of electrical steel up to mean frequencies, taking into account the skin effect. In previous works, the analytical solution of the diffusion equation for low frequency (DELF) was coupled with the inverse static Jiles-Atherton (JA) model in order to represent the hysteresis behavior for a lamination. In the present paper, this approach is improved to ensure the reproducibility of measured hysteresis loops at mean frequency. The results of simulation are compared with the experimental ones. The selected results for frequencies 50 Hz, 100 Hz, 200 Hz and 400 Hz are presented and discussed.
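
    For context, the "diffusion equation" coupled here with the inverse Jiles-Atherton model is the standard one-dimensional field-penetration equation for a conducting lamination (displacement current neglected), through which the skin effect enters:

    ```latex
    % sigma is the lamination conductivity; x runs across the sheet thickness
    \frac{\partial^{2} H(x,t)}{\partial x^{2}} \;=\; \sigma\,\frac{\partial B(x,t)}{\partial t},
    \qquad B = B_{\mathrm{JA}}(H)\ \ \text{(static hysteresis law)}
    ```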

  10. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    ERIC Educational Resources Information Center

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  11. Adjusting the Adjusted χ²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  12. Adjusting the Census of 1990: The Smoothing Model.

    ERIC Educational Resources Information Center

    Freedman, David A.; And Others

    1993-01-01

    Techniques for adjusting census figures are discussed, with a focus on sampling error, uncertainty of estimates resulting from the luck of sample choice. Computer simulations illustrate the ways in which the smoothing algorithm may make adjustments less, rather than more, accurate. (SLD)

  13. Modeling heart rate variability including the effect of sleep stages

    NASA Astrophysics Data System (ADS)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that—in comparison with real data—the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow to model heart rate variability in sleep disorders. This possibility is briefly discussed.

  14. Modeling heart rate variability including the effect of sleep stages.

    PubMed

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that-in comparison with real data-the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow to model heart rate variability in sleep disorders. This possibility is briefly discussed. PMID:26931582
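
    A minimal sketch of the two-layer construction described above: a synthetic hypnogram generated as a Markov chain over sleep stages, and RR intervals drawn with stage-dependent statistics. All transition probabilities, means and variabilities below are placeholders, not the values fitted to the 30 polysomnographic recordings:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    stages = ["Wake", "REM", "Light", "Deep"]
    # Placeholder 30-s-epoch transition probabilities (rows sum to 1)
    P = np.array([[0.90, 0.02, 0.07, 0.01],
                  [0.02, 0.90, 0.07, 0.01],
                  [0.02, 0.05, 0.88, 0.05],
                  [0.01, 0.02, 0.07, 0.90]])
    rr_mean = {"Wake": 0.85, "REM": 0.90, "Light": 1.00, "Deep": 1.05}  # mean RR [s]
    rr_sd   = {"Wake": 0.06, "REM": 0.05, "Light": 0.03, "Deep": 0.02}  # variability [s]

    # 1) synthetic hypnogram: one state per 30-s epoch, ~8 h of sleep
    state, hypnogram = 2, []
    for _ in range(8 * 60 * 2):
        state = rng.choice(4, p=P[state])
        hypnogram.append(stages[state])

    # 2) RR series with stage-dependent statistics (~30 beats per epoch)
    rr = [rng.normal(rr_mean[s], rr_sd[s]) for s in hypnogram for _ in range(30)]
    print(len(rr), round(float(np.mean(rr)), 3))
    ```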

  15. A sonic boom propagation model including mean flow atmospheric effects

    NASA Astrophysics Data System (ADS)

    Salamone, Joe; Sparrow, Victor W.

    2012-09-01

    This paper presents a time domain formulation of nonlinear lossy propagation in one dimension that also includes the effects of non-collinear mean flow in the acoustic medium. The model equation utilized is an augmented Burgers equation that includes the effects of nonlinearity, geometric spreading, atmospheric stratification, and also absorption and dispersion due to thermoviscous and molecular relaxation effects. All elements of the propagation are implemented in the time domain and the effects of non-collinear mean flow are accounted for in each term of the model equation. Previous authors have presented methods limited to showing the effects of wind on ray tracing and/or using an effective speed of sound in their model equation. The present work includes the effects of mean flow for all terms included in the augmented Burgers equation with all of the calculations performed in the time domain. The capability to include the effects of mean flow in the acoustic medium allows one to make predictions more representative of real-world atmospheric conditions. Examples are presented for nonlinear propagation of N-waves and shaped sonic booms. [Work supported by Gulfstream Aerospace Corporation.]
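
    For orientation, one commonly quoted schematic form of the augmented Burgers equation collects the effects listed in the abstract as additive terms along a ray (a generic textbook-style form, not the authors' specific non-collinear mean-flow formulation):

    ```latex
    \frac{\partial p}{\partial s} \;=\;
    \underbrace{\frac{\beta}{\rho_{0}c_{0}^{3}}\,p\,\frac{\partial p}{\partial \tau}}_{\text{nonlinearity}}
    \;+\; \underbrace{\frac{\delta}{2c_{0}^{3}}\,\frac{\partial^{2} p}{\partial \tau^{2}}}_{\text{thermoviscous}}
    \;+\; \underbrace{\sum_{\nu}\frac{(\Delta c)_{\nu}}{c_{0}^{2}}
          \frac{\partial}{\partial \tau}\Bigl(1+t_{\nu}\tfrac{\partial}{\partial \tau}\Bigr)^{-1} p}_{\text{molecular relaxation}}
    \;-\; \underbrace{\frac{1}{2A}\frac{\mathrm{d}A}{\mathrm{d}s}\,p}_{\text{ray-tube spreading}}
    \;+\; \underbrace{\frac{1}{2\rho_{0}c_{0}}\frac{\mathrm{d}(\rho_{0}c_{0})}{\mathrm{d}s}\,p}_{\text{stratification}}
    ```

    Here p is the acoustic pressure, s the distance along the ray and τ the retarded time; the paper's contribution is evaluating every such term with the non-collinear mean flow included, entirely in the time domain.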

  16. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    PubMed

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892025
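
    The between-within comparator mentioned at the end of the abstract has a simple recipe: augment the observation-level regression with the mean of each covariate computed within every level of the two crossed factors (site and month), so that unmeasured confounding at those levels is absorbed by the means. A sketch on synthetic data (variable names and values invented for illustration):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 4000
    df = pd.DataFrame({
        "site": rng.integers(0, 15, n),                # crossed factor 1
        "month": rng.integers(0, 24, n),               # crossed factor 2
        "water_temp": rng.normal(28, 2, n),
    })
    p = 1 / (1 + np.exp(-(-3 + 0.2 * (df["water_temp"] - 28))))
    df["vibrio"] = (rng.random(n) < p).astype(int)     # presence/absence outcome

    # Between-within adjustment: add covariate means per level of each crossed factor
    df["temp_site_mean"] = df.groupby("site")["water_temp"].transform("mean")
    df["temp_month_mean"] = df.groupby("month")["water_temp"].transform("mean")

    fit = smf.logit("vibrio ~ water_temp + temp_site_mean + temp_month_mean", data=df).fit()
    print(fit.params)
    # The paper's mixed-model versions add crossed random effects for site and month on
    # top of these means; its conditional pseudolikelihood estimator instead eliminates
    # the site and month fixed effects from the estimating equation altogether.
    ```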

  17. Development of a charge adjustment model for cardiac catheterization.

    PubMed

    Brennan, Andrew; Gauvreau, Kimberlee; Connor, Jean; O'Connell, Cheryl; David, Sthuthi; Almodovar, Melvin; DiNardo, James; Banka, Puja; Mayer, John E; Marshall, Audrey C; Bergersen, Lisa

    2015-02-01

    A methodology that would allow for comparison of charges across institutions has not been developed for catheterization in congenital heart disease. A single institution catheterization database with prospectively collected case characteristics was linked to hospital charges related and limited to an episode of care in the catheterization laboratory for fiscal years 2008-2010. Catheterization charge categories (CCC) were developed to group types of catheterization procedures using a combination of empiric data and expert consensus. A multivariable model with outcome charges was created using CCC and additional patient and procedural characteristics. In 3 fiscal years, 3,839 cases were available for analysis. Forty catheterization procedure types were categorized into 7 CCC yielding a grouper variable with an R² explanatory value of 72.6%. In the final CCC, the largest proportion of cases was in CCC 2 (34%), which included diagnostic cases without intervention. Biopsy cases were isolated in CCC 1 (12%), and percutaneous pulmonary valve placement alone made up CCC 7 (2%). The final model included CCC, number of interventions, and cardiac diagnosis (R² = 74.2%). Additionally, current financial metrics such as APR-DRG severity of illness and case mix index demonstrated a lack of correlation with CCC. We have developed a catheterization procedure type financial grouper that accounts for the diverse case population encountered in catheterization for congenital heart disease. CCC and our multivariable model could be used to understand financial characteristics of a population at a single point in time, longitudinally, and to compare populations.

  18. Development of a charge adjustment model for cardiac catheterization.

    PubMed

    Brennan, Andrew; Gauvreau, Kimberlee; Connor, Jean; O'Connell, Cheryl; David, Sthuthi; Almodovar, Melvin; DiNardo, James; Banka, Puja; Mayer, John E; Marshall, Audrey C; Bergersen, Lisa

    2015-02-01

    A methodology that would allow for comparison of charges across institutions has not been developed for catheterization in congenital heart disease. A single institution catheterization database with prospectively collected case characteristics was linked to hospital charges related and limited to an episode of care in the catheterization laboratory for fiscal years 2008-2010. Catheterization charge categories (CCC) were developed to group types of catheterization procedures using a combination of empiric data and expert consensus. A multivariable model with outcome charges was created using CCC and additional patient and procedural characteristics. In 3 fiscal years, 3,839 cases were available for analysis. Forty catheterization procedure types were categorized into 7 CCC yielding a grouper variable with an R² explanatory value of 72.6%. In the final CCC, the largest proportion of cases was in CCC 2 (34%), which included diagnostic cases without intervention. Biopsy cases were isolated in CCC 1 (12%), and percutaneous pulmonary valve placement alone made up CCC 7 (2%). The final model included CCC, number of interventions, and cardiac diagnosis (R² = 74.2%). Additionally, current financial metrics such as APR-DRG severity of illness and case mix index demonstrated a lack of correlation with CCC. We have developed a catheterization procedure type financial grouper that accounts for the diverse case population encountered in catheterization for congenital heart disease. CCC and our multivariable model could be used to understand financial characteristics of a population at a single point in time, longitudinally, and to compare populations. PMID:25113520

  19. Estimation of nonlinear pilot model parameters including time delay.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.
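
    For concreteness, a retarded-type differential-difference description of the kind referred to above keeps the pure pilot delay τ inside the state equation instead of replacing it with a Padé approximation, and τ is estimated jointly with the other parameters (a generic form, not the paper's specific pilot model):

    ```latex
    \dot{x}(t) \;=\; A\,x(t) \;+\; A_{d}\,x(t-\tau) \;+\; B\,u(t),
    \qquad
    \theta = \{A,\,A_{d},\,B,\,\tau\}\ \text{estimated jointly, e.g. with an extended Kalman filter}
    ```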

  20. A Mathematical Learning Model Including Interactions among Different Learnings

    NASA Astrophysics Data System (ADS)

    Nariyuki, Yasuhiro; Yamaguchi, Norikazu

    2015-03-01

    The mathematical learning model reported by Nitta [Phys. Rev. ST Phys. Educ. Res. 6, 020105 (2010)], which describes the transition from the pre-test score (fraction of correct answers) to the post-test score, is extended to include interactions among different learnings. Numerical solutions of the model suggest that loss effects arising from the different learnings can conceal interactive learning in observational data.

  1. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
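
    Numerically, the Green's Functions step described above reduces to a small linear least-squares problem: each sensitivity experiment contributes one column of perturbed-minus-baseline model equivalents of the observations, and the optimal weights for the six control parameters minimize the residual against the data constraints. A minimal sketch (shapes, placeholder data and the unweighted least-squares choice are all illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n_ctrl = 500, 6                      # data constraints, control parameters
    y_obs = rng.normal(size=m)              # placeholder observations (e.g. pCO2, CO2 flux)
    y_base = rng.normal(size=m)             # baseline-simulation equivalents
    Y_pert = y_base[:, None] + 0.1 * rng.normal(size=(m, n_ctrl))  # sensitivity experiments

    G = Y_pert - y_base[:, None]            # Green's functions (linearized sensitivities)
    r = y_obs - y_base                      # baseline misfit
    eta, *_ = np.linalg.lstsq(G, r, rcond=None)   # optimal weights for the controls

    y_adj = y_base + G @ eta
    cost_reduction = 1 - np.sum((y_obs - y_adj) ** 2) / np.sum(r ** 2)
    print(eta.round(3), round(cost_reduction, 3))
    # The adjusted controls (initial DIC, alkalinity, O2, gas-exchange and PIC:POC
    # parameters) are then used to re-integrate the forward model.
    ```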

  2. Rotorcraft Transmission Noise Path Model, Including Distributed Fluid Film Bearing Impedance Modeling

    NASA Technical Reports Server (NTRS)

    Hambric, Stephen A.; Hanford, Amanda D.; Shepherd, Micah R.; Campbell, Robert L.; Smith, Edward C.

    2010-01-01

    A computational approach for simulating the effects of rolling element and journal bearings on the vibration and sound transmission through gearboxes has been demonstrated. The approach, based on ARL/Penn State's CHAMP methodology, uses Component Mode Synthesis of housing and shafting modes computed using Finite Element (FE) models to allow for rapid adjustment of bearing impedances in gearbox models. The approach has been demonstrated on NASA GRC's test gearbox with three different bearing configurations: in the first condition, traditional rolling element (ball and roller) bearings were installed, and in the second and third conditions, the traditional bearings were replaced with journal and wave bearings (wave bearings are journal bearings with a multi-lobed wave pattern on the bearing surface). A methodology for computing the stiffnesses and damping in journal and wave bearings has been presented, and demonstrated for the journal and wave bearings used in the NASA GRC test gearbox. The FE model of the gearbox, along with the rolling element bearing coupling impedances, was analyzed to compute dynamic transfer functions between forces applied to the meshing gears and accelerations on the gearbox housing, including several locations near the bearings. A Boundary Element (BE) acoustic model was used to compute the sound radiated by the gearbox. Measurements of the Gear Mesh Frequency (GMF) tones were made by NASA GRC at several operational speeds for the rolling element and journal bearing gearbox configurations. Both the measurements and the CHAMP numerical model indicate that the journal bearings reduce vibration and noise for the second harmonic of the gear meshing tones, but show no clear benefit to using journal bearings to reduce the amplitudes of the fundamental gear meshing tones. Also, the numerical model shows that the gearbox vibrations and radiated sound are similar for journal and wave bearing configurations.

  3. Cont-Bouchaud Percolation Model Including Tobin Tax

    NASA Astrophysics Data System (ADS)

    Ehrenstein, Gudrun

    The Tobin tax is an often-discussed method to tame speculation and to provide a source of income. The discussion is especially heated when the financial markets are in crisis. In this article we refer to the foreign exchange markets. The Tobin tax would be a small international tax affecting all currency transactions, thereby reducing destabilizing speculation. In this way the tax would serve a control function. By including the Tobin tax in the microscopic model of Cont and Bouchaud, one finds that this tax could be the right method to control foreign exchange operations and to provide a good source of income.
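
    For orientation only, the sketch below is a heavily simplified Cont-Bouchaud-style simulation (not Ehrenstein's implementation): traders act in random clusters, each cluster trades with some activity probability, and a Tobin tax is represented crudely as a factor that suppresses that activity. The cluster-size generator and all parameters are assumptions.

```python
# Minimal, illustrative Cont-Bouchaud-style sketch (not Ehrenstein's exact
# implementation): traders are grouped into random clusters; each cluster trades
# with probability `activity`, buying or selling as a block, and the return is
# proportional to the net demand. A Tobin tax is represented very crudely as a
# factor that suppresses the activity probability.
import numpy as np

rng = np.random.default_rng(1)

def returns(n_traders=10_000, steps=2_000, activity=0.05, tax_suppression=1.0):
    out = np.empty(steps)
    for t in range(steps):
        # random cluster sizes summing to n_traders (crude proxy for percolation clusters)
        sizes, remaining = [], n_traders
        while remaining > 0:
            s = min(remaining, rng.geometric(0.01))
            sizes.append(s)
            remaining -= s
        sizes = np.array(sizes)
        active = rng.random(len(sizes)) < activity * tax_suppression
        sign = rng.choice([-1, 1], size=len(sizes))
        out[t] = np.sum(sizes[active] * sign[active]) / n_traders
    return out

r_no_tax = returns(tax_suppression=1.0)
r_tax = returns(tax_suppression=0.5)     # tax assumed to discourage half of the marginal trades
print("return std without tax: %.4f, with tax: %.4f" % (r_no_tax.std(), r_tax.std()))
```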

  4. Multistage carcinogenesis modeling including cell cycle and DNA damage states

    NASA Astrophysics Data System (ADS)

    Hazelton, W.; Moolgavkar, S.

    The multistage clonal expansion model of carcinogenesis is generalized to include cell cycle states and corresponding DNA damage states with imperfect repair for normal and initiated stem cells. Initiated cells may undergo transformation to a malignant state, eventually leading to cancer incidence or death. The model allows oxidative or radiation induced DNA damage, checkpoint delay, DNA repair, apoptosis, and transformation rates to depend on the cell cycle state or DNA damage state of normal and initiated cells. A probability generating function approach is used to represent the time dependent probability distribution for cells in all states. The continuous time coupled Markov system representing this joint distribution satisfies a partial differential equation (pde). Time dependent survival and hazard functions are found through numerical solution of the characteristic equations for the pde. Although the hazard and survival can be calculated numerically, number and size distributions of pre-malignant lesions from models that are developed will be approximated through simulation. We use the model to explore predictions for hazard and survival as parameters representing cell cycle regulation and arrest are modified. Modification of these parameters may influence rates for cell division, apoptosis and malignant transformation that are important in carcinogenesis. We also explore enhanced repair that may be important for low-dose hypersensitivity and adaptive response, and degradation of repair processes or loss of checkpoint control that may drive genetic instability.

  5. Modeling of Radio Emission from Saturn's Rings Including Wakes

    NASA Astrophysics Data System (ADS)

    Molnar, L. A.; Dunn, D. E.; Cully, J. C.; Young, D. J.

    2000-10-01

    We have extended the "simrings" radiative transfer software package (Dunn, Molnar, and Fix 1999) to include idealized ring wakes. The package consists of four principal, modular components: "simprob," which computes Mie scattering functions for individual particles specified by size and composition; "simrings," which uses a Monte Carlo simulation to compute the complete scattering function and thermal emission of a ring slab specified by particle size distribution and density (including the possibility of wake density enhancements); "simplot," which uses these functions along with geometric information and a full description of the planet brightness to compute the ring brightness as a function of azimuth as viewed from Earth; and "simcoord", which combines this information for a series of rings to make a final model of the radio emission as viewed on the sky. We compare sample results from this package with those of a simple, analytic model that ignores multiple scattering. This allows us to show qualitatively under what conditions one might observe east-west asymmetry in the rings caused by multiple scattering off wakes (as we earlier suggested may be the case: Dunn, Molnar, and Fix 1996), and to quantitatively compare models with data maps. The principal advantage of our idealized wakes is the relative ease with which we can consider a wide range of parameter space. The utility of this depends on these wakes having net scattering properties resembling those of more realistic wakes. We compare our idealized wakes with the gravitational simulations of Daisaka and Ida (1999) and find that this is the case for directly transmitted flux as a function of azimuth and inclination. As complete scattering properties of realistic simulations become available, we can use them as alternative inputs to "simplot," producing model radio maps for them. Finally, we compare preliminary runs of the "simrings" package with radio data spanning a range of observing wavelengths and

  6. A Prediction Model for Chronic Kidney Disease Includes Periodontal Disease

    PubMed Central

    Fisher, Monica A.; Taylor, George W.

    2009-01-01

    Background An estimated 75% of the seven million Americans with moderate-to-severe chronic kidney disease are undiagnosed. Improved prediction models to identify high-risk subgroups for chronic kidney disease enhance the ability of health care providers to prevent or delay serious sequelae, including kidney failure, cardiovascular disease, and premature death. Methods We identified 11,955 adults ≥18 years of age in the Third National Health and Nutrition Examination Survey. Chronic kidney disease was defined as an estimated glomerular filtration rate of 15 to 59 ml/minute/1.73 m2. High-risk subgroups for chronic kidney disease were identified by estimating the individual probability using β coefficients from the model of traditional and non-traditional risk factors. To evaluate this model, we performed standard diagnostic analyses of sensitivity, specificity, positive predictive value, and negative predictive value using 5%, 10%, 15%, and 20% probability cutoff points. Results The estimated probability of chronic kidney disease ranged from virtually no probability (0%) for an individual with none of the 12 risk factors to very high probability (98%) for an older, non-Hispanic white edentulous former smoker, with diabetes ≥10 years, hypertension, macroalbuminuria, high cholesterol, low high-density lipoprotein, high C-reactive protein, lower income, and who was hospitalized in the past year. Evaluation of this model using an estimated 5% probability cutoff point resulted in 86% sensitivity, 85% specificity, 18% positive predictive value, and 99% negative predictive value. Conclusion This United States population–based study suggested the importance of considering multiple risk factors, including periodontal status, because this improves the identification of individuals at high risk for chronic kidney disease and may ultimately reduce its burden. PMID:19228085
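
    The evaluation logic described above (individual probabilities from β coefficients, then sensitivity, specificity, PPV, and NPV at several probability cutoffs) is straightforward to reproduce. The sketch below does so with synthetic coefficients and data; nothing here reflects the study's actual estimates.

```python
# Hedged sketch of the evaluation logic: individual risk is the logistic
# probability from beta coefficients, and each probability cutoff is summarized
# by sensitivity, specificity, PPV, and NPV. Coefficients and data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n, p = 5_000, 12                                   # subjects, risk factors (incl. periodontal status)
X = rng.binomial(1, 0.3, size=(n, p)).astype(float)
beta = rng.normal(0.6, 0.3, size=p)                # synthetic log-odds per risk factor
intercept = -4.0

prob = 1.0 / (1.0 + np.exp(-(intercept + X @ beta)))
ckd = rng.random(n) < prob                         # "true" disease status in this toy data

for cutoff in (0.05, 0.10, 0.15, 0.20):
    pred = prob >= cutoff
    tp = np.sum(pred & ckd);  fp = np.sum(pred & ~ckd)
    fn = np.sum(~pred & ckd); tn = np.sum(~pred & ~ckd)
    print(f"cutoff {cutoff:.2f}: sens={tp/(tp+fn):.2f} spec={tn/(tn+fp):.2f} "
          f"ppv={tp/(tp+fp):.2f} npv={tn/(tn+fn):.2f}")
```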

  7. Kinetic models of gene expression including non-coding RNAs

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir P.

    2011-03-01

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
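
    In the spirit of the temporal mean-field kinetic equations the review focuses on, the sketch below integrates a minimal mRNA-ncRNA-protein system in which the ncRNA silences the mRNA through irreversible pairing and duplex degradation. The rate constants and the specific reaction set are illustrative assumptions, not any particular published model.

```python
# A minimal mean-field kinetic sketch of ncRNA-mediated silencing: mRNA (m) and
# ncRNA (s) are produced and degraded, pair irreversibly into a duplex that is
# degraded, and protein (p) is translated from free mRNA. Parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k_m, k_s = 1.0, 1.5              # transcription rates of mRNA and ncRNA
d_m, d_s, d_p = 0.1, 0.1, 0.05   # first-order degradation rates
k_pair = 0.5                     # mRNA-ncRNA pairing rate
k_tl = 2.0                       # translation rate per free mRNA

def rhs(t, y):
    m, s, p = y
    pairing = k_pair * m * s
    return [k_m - d_m * m - pairing,
            k_s - d_s * s - pairing,
            k_tl * m - d_p * p]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0])
m_ss, s_ss, p_ss = sol.y[:, -1]
print(f"steady state: mRNA={m_ss:.2f}, ncRNA={s_ss:.2f}, protein={p_ss:.2f}")
```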

  8. Development of an Aeroelastic Analysis Including a Viscous Flow Model

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Bakhle, Milind A.

    2001-01-01

    Under this grant, Version 4 of the three-dimensional Navier-Stokes aeroelastic code (TURBO-AE) has been developed and verified. The TURBO-AE Version 4 aeroelastic code allows flutter calculations for a fan, compressor, or turbine blade row. This code models a vibrating three-dimensional bladed disk configuration and the associated unsteady flow (including shocks and viscous effects) to calculate the aeroelastic instability using a work-per-cycle approach. Phase-lagged (time-shift) periodic boundary conditions are used to model the phase lag between adjacent vibrating blades. The direct-store approach is used for this purpose to reduce the computational domain to a single interblade passage. A disk storage option, implemented using direct access files, is available to reduce the large memory requirements of the direct-store approach. Other researchers have implemented 3D inlet/exit boundary conditions based on eigen-analysis. Appendix A: Aeroelastic calculations based on three-dimensional Euler analysis. Appendix B: Unsteady aerodynamic modeling of blade vibration using the TURBO-V3.1 code.

  9. Progress Towards an LES Wall Model Including Unresolved Roughness

    NASA Astrophysics Data System (ADS)

    Craft, Kyle; Redman, Andrew; Aikens, Kurt

    2015-11-01

    Wall models used in large eddy simulations (LES) are often based on theories for hydraulically smooth walls. While this is reasonable for many applications, there are also many where the impact of surface roughness is important. A previously developed wall model has been used primarily for jet engine aeroacoustics. However, jet simulations have not accurately captured thick initial shear layers found in some experimental data. This may partly be due to nozzle wall roughness used in the experiments to promote turbulent boundary layers. As a result, the wall model is extended to include the effects of unresolved wall roughness through appropriate alterations to the log-law. The methodology is tested for incompressible flat plate boundary layers with different surface roughness. Correct trends are noted for the impact of surface roughness on the velocity profile. However, velocity deficit profiles and the Reynolds stresses do not collapse as well as expected. Possible reasons for the discrepancies as well as future work will be presented. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
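
    The extension described above amounts to adding a roughness shift to the log-law before the wall model solves for the friction velocity. The sketch below shows one common way to do this, using an equivalent sand-grain roughness and a standard correlation for the shift; these are textbook assumptions, not necessarily the authors' exact alteration.

```python
# Hedged sketch of a log-law wall model extended for roughness: given the LES
# velocity U at wall distance y, solve
#   U/u_tau = (1/kappa) ln(y u_tau / nu) + B - dU+(ks+)
# for u_tau, where dU+ = (1/kappa) ln(1 + 0.3 ks+) is a commonly used roughness
# shift (an assumption here, not necessarily the authors' form).
import numpy as np
from scipy.optimize import brentq

KAPPA, B = 0.41, 5.2
RHO = 1.2                      # air density, kg/m^3 (illustrative)

def delta_u_plus(ks_plus):
    """Roughness shift of the log-law; a commonly used correlation, assumed here."""
    return np.log(1.0 + 0.3 * ks_plus) / KAPPA

def friction_velocity(U, y, nu, ks=0.0):
    """Solve the shifted log-law for the friction velocity u_tau."""
    def residual(u_tau):
        ks_plus = ks * u_tau / nu
        return np.log(y * u_tau / nu) / KAPPA + B - delta_u_plus(ks_plus) - U / u_tau
    return brentq(residual, 1e-6, 10.0 * U)

U, y, nu = 10.0, 0.01, 1.5e-5              # LES velocity and wall distance at the matching point
for ks in (0.0, 1e-4, 5e-4):               # smooth wall and two equivalent sand-grain heights, m
    u_tau = friction_velocity(U, y, nu, ks)
    print(f"ks={ks:.0e} m: u_tau={u_tau:.3f} m/s, tau_w={RHO * u_tau**2:.3f} Pa")
```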

  10. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    PubMed

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via 2 mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced. And, the poorer they perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting. PMID:9306648

  12. Polarimetric Models of Circumstellar Discs Including Aggregate Dust Grains

    NASA Astrophysics Data System (ADS)

    Mohan, Mahesh

    output files and to apply a size distribution to the data. The second circumstellar disc investigated is the debris disc of the M dwarf star AU Mic. The disc was modelled, using the radiative transfer code Hyperion, based on F606W (HST) and JHK0-band (Keck II) scattered light observations and F606W-band polarized light observations. Initially, the disc is modelled as a two component structure using two grain types: compact silicate grains and porous dirty ice water. Both models are able to reproduce the observed SED and the F606W and H-band surface brightness profiles, but are unable to fit the observed F606W degree of polarization. Therefore, a more complex/realistic grain model was examined (ballistic aggregate particles). In addition, recent millimetre observations suggest the existence of a planetesimal belt < 3 AU from the central star. This belt is included in the BAM2 model and was successful in fitting the observed SED, F606W and H-band surface brightness and F606W polarization. These results demonstrate the limitations of spherical grain models and indicate the importance of modelling more realistic dust grains.

  13. A model for including thermal conduction in molecular dynamics simulations

    NASA Technical Reports Server (NTRS)

    Wu, Yue; Friauf, Robert J.

    1989-01-01

    A technique is introduced for including thermal conduction in molecular dynamics simulations for solids. A model is developed to allow energy flow between the computational cell and the bulk of the solid when periodic boundary conditions cannot be used. Thermal conduction is achieved by scaling the velocities of atoms in a transitional boundary layer. The scaling factor is obtained from the thermal diffusivity, and the results show good agreement with the solution for a continuous medium at long times. The effects of different temperature and size of the system, and of variations in strength parameter, atomic mass, and thermal diffusivity were investigated. In all cases, no significant change in simulation results has been found.
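
    The central idea above, scaling the velocities of atoms in a transitional boundary layer so that energy flows as a continuum conduction solution would dictate, can be illustrated with a few lines of code. The relaxation form of the scaling factor below is an assumption for illustration; the published scheme derives it from the thermal diffusivity.

```python
# Hedged sketch of the boundary-layer velocity-scaling idea: atoms in a thin
# transitional layer have their velocities rescaled each step so that the layer's
# kinetic temperature relaxes toward the value prescribed by a continuum
# conduction solution. The specific scaling-factor form here is an assumption.
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J/K

def kinetic_temperature(v, mass):
    # v: (N, 3) velocities in m/s, mass: per-atom mass in kg
    ke = 0.5 * mass * np.sum(v**2)
    return 2.0 * ke / (3.0 * KB * len(v))

def scale_boundary_layer(v, in_layer, mass, T_target, relax=0.1):
    """Rescale velocities of atoms flagged `in_layer` toward T_target."""
    vl = v[in_layer]
    T_now = kinetic_temperature(vl, mass)
    T_new = T_now + relax * (T_target - T_now)      # partial relaxation per time step
    v[in_layer] = vl * np.sqrt(T_new / T_now)       # velocity scaling factor
    return v

rng = np.random.default_rng(3)
mass = 6.63e-26                                     # ~argon atom, kg
v = rng.normal(0.0, np.sqrt(KB * 300.0 / mass), size=(1000, 3))
in_layer = rng.random(1000) < 0.1                   # 10% of atoms sit in the transition layer
v = scale_boundary_layer(v, in_layer, mass, T_target=350.0)
print("layer temperature after scaling: %.1f K" % kinetic_temperature(v[in_layer], mass))
```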

  14. Adolescent Sibling Relationship Quality and Adjustment: Sibling Trustworthiness and Modeling, as Factors Directly and Indirectly Influencing These Associations

    ERIC Educational Resources Information Center

    Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.

    2011-01-01

    The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…

  15. Analytical Jacobian Calculation in RT Model Including Polarization Effect

    NASA Astrophysics Data System (ADS)

    Okabayashi, Y.; Yoshida, Y.; Ota, Y.

    2014-12-01

    The greenhouse gas observing satellite "GOSAT", launched in January 2009, has been observing the global distribution of CO2 and CH4. The TANSO-FTS mounted on GOSAT measures the two polarized components (called "P" and "S") of the short wavelength infrared (SWIR) spectrum reflected from the earth's surface. At NIES, the column-averaged dry air mole fractions of CO2 and CH4 (XCO2 and XCH4) are retrieved from SWIR spectra. However, the observed polarization information is not effectively utilized in the retrieval process, owing to the large computational cost of a vector RT model; instead, polarization-synthesized spectra and a scalar RT model are used in the operational processing. An optical path length modification due to aerosol scattering is known to be the major error source for XCO2 and XCH4 retrieval from SWIR spectra. Because aerosol scattering changes the polarization state of light, more accurate or additional aerosol information is expected from using the observed polarization spectra effectively in the retrieval process, which would improve the retrieval accuracy of XCO2 and XCH4. In addition, the Jacobian matrix is important for information content analysis, sensitivity analysis, and error analysis during retrieval algorithm design, before analysis of actual observed data. However, when an RT model including polarization effects is used in the retrieval process, the computational cost of Jacobian matrix calculations in maximum a posteriori retrieval is very large, so efficient calculation of the analytical Jacobian is necessary. As a first step, we are implementing an analytical Jacobian calculation function in the vector RT model "Pstar". The RT scheme of Pstar is based on a hybrid method comprising the discrete ordinate and matrix operator methods. The reflection/transmission matrices and source vectors are obtained for each vertical layer through the discrete ordinate solution, and the vertically inhomogeneous system is constructed using the matrix operator method. Because the delta

  16. Goldilocks models of higher-dimensional inflation (including modulus stabilization)

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Enns, Jared J. H.; Hayman, Peter; Patil, Subodh P.

    2016-08-01

    We explore the mechanics of inflation within simplified extra-dimensional models involving an inflaton interacting with the Einstein-Maxwell system in two extra dimensions. The models are Goldilocks-like inasmuch as they are just complicated enough to include a mechanism to stabilize the extra-dimensional size (or modulus), yet simple enough to solve explicitly the full extra-dimensional field equations using only simple tools. The solutions are not restricted to the effective 4D regime with H ≪ mKK (the latter referring to the characteristic mass splitting of the Kaluza-Klein excitations) because the full extra-dimensional Einstein equations are solved. This allows an exploration of inflationary physics in a controlled calculational regime away from the usual four-dimensional lamp-post. The inclusion of modulus stabilization is important because experience with string models teaches that this is usually what makes models fail: stabilization energies easily dominate the shallow potentials required by slow roll and so open up directions to evolve that are steeper than those of the putative inflationary direction. We explore (numerically and analytically) three representative kinds of inflationary scenarios within this simple setup. In one the radion is trapped in an inflaton-dependent local minimum whose non-zero energy drives inflation. Inflation ends as this energy relaxes to zero when the inflaton finds its own minimum. The other two involve power-law scaling solutions during inflation. One of these is a dynamical attractor whose features are relatively insensitive to initial conditions but whose slow-roll parameters cannot be arbitrarily small; the other is not an attractor but can roll much more slowly, until eventually transitioning to the attractor. The scaling solutions can satisfy H > mKK, but when they do standard 4D fluctuation calculations need not apply. When in a 4D regime the solutions predict η ≃ 0 and so r ≃ 0.11 when ns ≃ 0.96 and so

  17. Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental.

    PubMed

    Lidauer, M H; Emmerling, R; Mäntysaari, E A

    2008-06-01

    A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region x year x month x parity effect and a random herd x test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum Likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had large effect on cow ranking but moderate effect on bull ranking.

  18. Energy loss in a partonic transport model including bremsstrahlung processes

    SciTech Connect

    Fochler, Oliver; Greiner, Carsten; Xu Zhe

    2010-08-15

    A detailed investigation of the energy loss of gluons that traverse a thermal gluonic medium simulated within the perturbative QCD-based transport model BAMPS (a Boltzmann approach to multiparton scatterings) is presented in the first part of this work. For simplicity the medium response is neglected in these calculations. The energy loss from purely elastic interactions is compared with the case where radiative processes are consistently included based on the matrix element by Gunion and Bertsch. From this comparison, gluon multiplication processes gg → ggg are found to be the dominant source of energy loss within the approach employed here. The consequences for the quenching of gluons with high transverse momentum in fully dynamic simulations of Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energy of √s = 200A GeV are discussed in the second major part of this work. The results for central collisions as discussed in a previous publication are revisited, and first results on the nuclear modification factor R_AA for noncentral Au+Au collisions are presented. They show a decreased quenching compared to central collisions while retaining the same shape. The investigation of the elliptic flow v_2 is extended up to nonthermal transverse momenta of 10 GeV, exhibiting a maximum v_2 at roughly 4 to 5 GeV and a subsequent decrease. Finally the sensitivity of the aforementioned results on the specific implementation of the effective modeling of the Landau-Pomeranchuk-Migdal (LPM) effect via a formation-time-based cutoff is explored.

  19. A New Climate Adjustment Tool: An update to EPA’s Storm Water Management Model

    EPA Science Inventory

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations.

  20. Modeling fluvial incision and transient landscape evolution: Influence of dynamic channel adjustment

    NASA Astrophysics Data System (ADS)

    Attal, M.; Tucker, G. E.; Whittaker, A. C.; Cowie, P. A.; Roberts, G. P.

    2008-09-01

    Channel geometry exerts a fundamental control on fluvial processes. Recent work has shown that bedrock channel width depends on a number of parameters, including channel slope, and is not solely a function of drainage area as is commonly assumed. The present work represents the first attempt to investigate the consequences of dynamic, gradient-sensitive channel adjustment for drainage-basin evolution. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to analyze the response of a catchment to a given tectonic perturbation, using, as a template, the topography of a well-documented catchment in the footwall of an active normal fault in the Apennines (Italy) that is known to be undergoing a transient response to tectonic forcing. We show that the observed transient response can be reproduced to first order with a simple detachment-limited fluvial incision law. Transient landscape is characterized by gentler gradients and a shorter response time when dynamic channel adjustment is allowed. The differences in predicted channel geometry between the static case (width dependent solely on upstream area) and dynamic case (width dependent on both drainage area and channel slope) lead to contrasting landscape morphologies when integrated at the scale of a whole catchment, particularly in presence of strong tilting and/or pronounced slip-rate acceleration. Our results emphasize the importance of channel width in controlling fluvial processes and landscape evolution. They stress the need for using a dynamic hydraulic scaling law when modeling landscape evolution, particularly when the relative uplift field is nonuniform.
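
    To make the static-versus-dynamic width contrast concrete, the sketch below evolves a 1D detachment-limited river profile with erosion proportional to stream power per unit width, using either a width that depends only on drainage area or one that also depends on slope. The exponents, constants, and the explicit upwind scheme are illustrative assumptions, not the CHILD model's calibrated setup.

```python
# Illustrative 1D sketch of static vs. dynamic channel-width closures in a
# detachment-limited incision model: E ~ A*S/W, with width either W ~ A^0.5
# (static) or W ~ A^0.5 * S^0.2 (gradient-sensitive). All values are illustrative.
import numpy as np

def evolve(width_rule, n_nodes=200, dx=500.0, dt=25.0, steps=8000,
           uplift=1e-3, K=2e-5, hack_c=1.0, hack_h=1.8):
    x = np.arange(1, n_nodes + 1) * dx                 # downstream distance from the divide, m
    area = hack_c * x**hack_h                          # drainage area grows downstream (Hack-like)
    z = np.linspace(100.0, 0.0, n_nodes)               # initial profile; outlet (last node) fixed
    for _ in range(steps):
        S = np.maximum((z[:-1] - z[1:]) / dx, 1e-6)    # slope toward the downstream neighbour
        W = width_rule(area[:-1], S)                   # channel width closure
        E = K * area[:-1] * S / W                      # erosion ~ stream power per unit width
        z[:-1] += dt * (uplift - E)                    # uplift everywhere except the fixed outlet
    return z

static = evolve(lambda A, S: 0.1 * A**0.5)             # width from drainage area only
dynamic = evolve(lambda A, S: 0.5 * A**0.5 * S**0.2)   # gradient-sensitive width
print("relief after %.0f kyr: static-width %.1f m, dynamic-width %.1f m"
      % (8000 * 25.0 / 1000.0, static[0], dynamic[0]))
```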

  1. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, Anne B.; Lizarraga, Joy S.

    1996-01-01

    Statistical operations termed model-adjustment procedures can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each procedure is a form of regression analysis in which the local database is used as a calibration data set; the resulting adjusted regression models can then be used to predict storm-runoff quality at unmonitored sites. Statistical tests of the calibration data set guide selection among the proposed procedures.
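
    One simple form of such an adjustment is to regress locally observed values on the regional model's predictions and apply the fitted relation at unmonitored sites. The sketch below shows that generic single-relation version with synthetic data; it is not necessarily any specific procedure from the report.

```python
# Hedged sketch of one simple model-adjustment procedure: use the local
# calibration data set to regress observed storm-runoff quality on the regional
# regression model's predictions, then apply the fitted relation to adjust
# predictions at unmonitored sites. Data and the single-factor form are assumptions.
import numpy as np

rng = np.random.default_rng(4)

# synthetic calibration data: regional-model predictions vs. local observations (log space)
log_pred_regional = rng.normal(2.0, 0.8, size=30)
log_obs_local = 0.3 + 0.9 * log_pred_regional + rng.normal(0.0, 0.25, size=30)

# fit the adjustment relation  log(obs) = b0 + b1 * log(regional prediction)
b1, b0 = np.polyfit(log_pred_regional, log_obs_local, 1)

def adjusted_prediction(log_pred):
    return b0 + b1 * log_pred

unmonitored_site_pred = 2.5                        # regional-model prediction, log units
print("adjusted log prediction: %.2f" % adjusted_prediction(unmonitored_site_pred))
```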

  2. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  3. A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon

    2016-05-01

    We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with peak thickness of 4000 m, and surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.

  4. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
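
    The two statistics compared above are standard and easy to compute for every candidate subset. The sketch below does so with ordinary least squares on synthetic data, using the usual formulas adjR² = 1 − (1 − R²)(n − 1)/(n − p) and C(p) = SSE_p/MSE_full − n + 2p, where p counts parameters including the intercept.

```python
# A minimal sketch of the two selection statistics discussed: Wherry's adjusted
# R^2 and Mallows' C(p), computed for every candidate subset of predictors via
# ordinary least squares on synthetic data.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n, k = 100, 4
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)   # true model uses x1 and x3

def fit_sse(cols):
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return np.sum(resid**2), A.shape[1]              # SSE and number of parameters

sse_full, p_full = fit_sse(range(k))
mse_full = sse_full / (n - p_full)
sst = np.sum((y - y.mean())**2)

for r in range(1, k + 1):
    for cols in itertools.combinations(range(k), r):
        sse, p = fit_sse(cols)
        r2 = 1.0 - sse / sst
        adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)          # Wherry's adjustment
        cp = sse / mse_full - n + 2 * p                        # Mallows' C(p)
        print(f"predictors {cols}: adjR2={adj_r2:.3f}  Cp={cp:.1f}")
```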

  5. Constitutive modelling of evolving flow anisotropy including distortional hardening

    SciTech Connect

    Pietryga, Michael P.; Vladimirov, Ivaylo N.; Reese, Stefanie

    2011-05-04

    The paper presents a new constitutive model for anisotropic metal plasticity that takes into account the expansion or contraction (isotropic hardening), translation (kinematic hardening) and change of shape (distortional hardening) of the yield surface. The experimentally observed region of high curvature ('nose') on the yield surface in the loading direction and flattened shape in the reverse loading direction are modelled here by means of the concept of directional distortional hardening. The modelling of directional distortional hardening is accomplished by means of an evolving fourth-order tensor. The applicability of the model is illustrated by fitting experimental subsequent yield surfaces at finite plastic deformation. Comparisons with test data for aluminium low and high work hardening alloys display a good agreement between the simulation results and the experimental data.

  6. A numerical model including PID control of a multizone crystal growth furnace

    NASA Astrophysics Data System (ADS)

    Panzarella, Charles H.; Kassemi, Mohammad

    This paper presents a 2D axisymmetric combined conduction and radiation model of a multizone crystal growth furnace. The model is based on a programmable multizone furnace (PMZF) designed and built at NASA Lewis Research Center for growing high quality semiconductor crystals. A novel feature of this model is a control algorithm which automatically adjusts the power in any number of independently controlled heaters to establish the desired crystal temperatures in the furnace model. The control algorithm eliminates the need for numerous trial-and-error runs previously required to obtain the same results. The finite element code, FIDAP, used to develop the furnace model, was modified to directly incorporate the control algorithm. This algorithm, which presently uses PID control, and the associated heat transfer model are briefly discussed. Together, they have been used to predict the heater power distributions for a variety of furnace configurations and desired temperature profiles. Examples are included to demonstrate the effectiveness of the PID-controlled model in establishing isothermal, Bridgman, and other complicated temperature profiles in the sample. Finally, an example is given to show how the algorithm can be used to change the desired profile with time according to a prescribed temperature-time evolution.
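
    The control idea, a PID loop that adjusts heater power until the model reaches a temperature setpoint, can be illustrated independently of the finite-element furnace model. The sketch below drives a single lumped thermal zone with a discrete PID controller; the plant parameters, gains, and anti-windup choice are illustrative assumptions, not the FIDAP implementation.

```python
# Hedged sketch of the control idea (not the FIDAP implementation): a discrete
# PID loop adjusts one heater's power so a first-order lumped thermal zone
# reaches its temperature setpoint. Gains and the plant model are assumptions.
def simulate_pid(setpoint=1200.0, t_ambient=300.0, steps=3000, dt=1.0,
                 kp=100.0, ki=1.0, kd=20.0, c_heat=2000.0, h_loss=5.0,
                 power_max=10_000.0):
    """c_heat: lumped heat capacity [J/K], h_loss: loss coefficient [W/K]."""
    T = t_ambient
    integral = 0.0
    prev_err = setpoint - T
    for _ in range(steps):
        err = setpoint - T
        derivative = (err - prev_err) / dt
        raw = kp * err + ki * integral + kd * derivative
        power = min(max(raw, 0.0), power_max)       # heater can only heat, up to power_max
        if raw == power:                            # simple anti-windup: freeze integral while saturated
            integral += err * dt
        T += dt * (power - h_loss * (T - t_ambient)) / c_heat   # zone energy balance
        prev_err = err
    return T

print("zone temperature after %d s: %.1f K" % (3000, simulate_pid()))
```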

  7. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  8. Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming

    2013-05-01

    Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events—namely an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power-law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists—however an interesting new phase transition emerges whereby this cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.

  9. A new offline dust cycle model that includes dynamic vegetation

    NASA Astrophysics Data System (ADS)

    Shannon, Sarah; Lunt, Daniel

    2010-05-01

    Current offline dust cycle models are unable to predict variability in the extent of arid and semi-arid regions caused by the transient response of vegetation cover to the climate. As a consequence, it is not possible to test whether inter-annual variability in the dust loading is caused by vegetation changes or other processes. A new dust cycle model is presented which uses the Lund-Potsdam-Jena dynamic global vegetation model (Sitch et al., 2003) to calculate time varying dust sources. Surface emissions are calculated by simulating the processes of saltation and sandblasting (Tegen et al., 2002). Dust particles are transported as independent tracers within the TOMCAT chemical transport model (Chipperfield, 2006). Dust is removed from the atmosphere by gravitational settling and sub-cloud scavenging. To improve the performance of the model, threshold values for vegetation cover, soil moisture, snow depth, and threshold friction velocity, which are used to determine surface emissions, are tuned. The effectiveness of three sub-cloud scavenging schemes is also tested. The tuning experiments are evaluated against multiple measurement datasets. The tuned model is used to investigate whether changes in vegetation cover in the Sahel can explain the four-fold increase in dust concentrations measured at Barbados during the 1980s relative to the 1960s (Prospero and Nees, 1986). Results show there was an expansion of the Sahara in 1984 relative to 1966, resulting in a doubling of emissions from the Sahel. However, this alone is not enough to account for the high dust concentrations measured at Barbados. This finding adds strength to the hypothesis that human-induced soil degradation in North Africa may be responsible for the increase in high dust concentrations at Barbados during the 1980s relative to the 1960s. Chipperfield, M. P. (2006). "New version of the TOMCAT/SLIMCAT off-line chemical transport model: Intercomparison of stratospheric tracer experiments." Quarterly Journal of the Royal

  10. Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.

    2014-12-01

    This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and associated improvements in flood modeling. Satellite-retrieved precipitation has been considered as a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauges and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) on the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate propagation of bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.

  11. Cement-aggregate compatibility and structure property relationships including modelling

    SciTech Connect

    Jennings, H.M.; Xi, Y.

    1993-07-15

    The role of aggregate, and its interface with cement paste, is discussed with a view toward establishing models that relate structure to properties. Both short (nm) and long (mm) range structure must be considered. The short range structure of the interface depends not only on the physical distribution of the various phases, but also on moisture content and reactivity of aggregate. Changes that occur on drying, i.e. shrinkage, may alter the structure which, in turn, feeds back to alter further drying and shrinkage. The interaction is dynamic, even without further hydration of cement paste, and the dynamic characteristic must be considered in order to fully understand and model its contribution to properties. Microstructure and properties are two subjects which have been pursued somewhat separately. This review discusses both disciplines with a view toward finding common research goals in the future. Finally, comment is made on possible chemical reactions which may occur between aggregate and cement paste.

  12. Digital elevation model visibility including Earth's curvature and atmosphere refraction

    NASA Astrophysics Data System (ADS)

    Santossilva, Ewerton; Vieiradias, Luiz Alberto

    1990-03-01

    There are some instances in which the Earth's curvature and the atmospheric refraction, optical or electronic, are important factors when digital elevation models are used for visibility calculations. This work deals with this subject, suggesting a practical approach to solve this problem. Some examples, from real terrain data, are presented. The equipment used was an IBM-PC like computer with a SITIM graphic card.
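
    The practical approach usually taken for this problem is to lower the apparent elevation of distant DEM samples by the curvature drop d²/(2kR), where kR is an effective Earth radius that folds in refraction, before running a line-of-sight test. The sketch below shows that correction on a synthetic profile; the k values and the simple angle-based visibility test are common conventions assumed here, not necessarily the paper's exact method.

```python
# Hedged sketch of DEM line-of-sight with curvature and refraction: each
# profile sample's elevation is reduced by d^2 / (2 * k * R) before the
# visibility test (k ~ 4/3 is the usual radio value; optical values differ).
# Terrain and parameters are synthetic.
import numpy as np

R_EARTH = 6_371_000.0     # m

def visible(distances, elevations, observer_height=10.0, k=4/3):
    """Line-of-sight test from the first profile point, with curvature/refraction."""
    drop = distances**2 / (2.0 * k * R_EARTH)          # apparent lowering of distant terrain
    eff = elevations - drop
    eye = eff[0] + observer_height
    # a point is visible if its elevation angle exceeds every intermediate angle
    with np.errstate(divide="ignore", invalid="ignore"):
        angles = (eff - eye) / distances
    angles[0] = -np.inf
    return angles >= np.maximum.accumulate(np.concatenate([[-np.inf], angles[:-1]]))

d = np.linspace(0.0, 30_000.0, 301)                    # 30 km profile, 100 m spacing
terrain = 50.0 * np.sin(d / 4000.0) + 0.001 * d        # synthetic terrain elevations, m
vis = visible(d, terrain)
print("visible samples: %d of %d" % (vis.sum(), len(vis)))
```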

  13. Modeling shelter-in-place including sorption on indoor surfaces

    SciTech Connect

    Chan, Wanyu R.; Price, Phillip N.; Gadgil, Ashok J.; Nazaroff, William W.; Loosmore, Gwen A.; Sugiyama, Gayle A.

    2003-11-01

    Intentional or accidental large-scale airborne toxic releases (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. As part of an emergency response plan, shelter-in-place (SIP) can be an effective response option, especially when evacuation is infeasible. Reasonably tight building envelopes provide some protection against exposure to peak concentrations when toxic release passes over an area. They also provide some protection in terms of cumulative exposure, if SIP is terminated promptly after the outdoor plume has passed. The purpose of this work is to quantify the level of protection offered by existing houses, and the importance of sorption/desorption to and from surfaces on the effectiveness of SIP. We examined a hypothetical chlorine gas release scenario simulated by the National Atmospheric Release Advisory Center (NARAC). We used a standard infiltration model to calculate the distribution of time dependent infiltration rates within each census tract. Large variation in the air tightness of dwellings makes some houses more protective than others. Considering only the median air tightness, model results showed that if sheltered indoors, the total population intake of non-sorbing toxic gas is only 50% of the outdoor level 4 hours from the start of the release. Based on a sorption/desorption model by Karlsson and Huber (1996), we calculated that the sorption process would further lower the total intake of the population by an additional 50%. The potential benefit of SIP can be considerably higher if the comparison is made in terms of health effects because of the non-linear acute effect dose-response curve of many chemical warfare agents and toxic industrial substances.
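
    The effect of sorption on shelter-in-place exposure can be illustrated with a single-zone mass balance: indoor concentration is driven by infiltration of the passing outdoor plume, with first-order sorption to and desorption from indoor surfaces. The sketch below uses that generic formulation with illustrative rate constants and a Gaussian plume history; it is not the Karlsson and Huber parameterization or the NARAC scenario.

```python
# Hedged single-zone sketch of the SIP mass balance with surface sorption.
# Rate constants and the outdoor plume time history are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

ach = 0.5 / 3600.0          # air exchange rate: 0.5 per hour, in 1/s
k_sorb = 1.0 / 3600.0       # sorption rate to indoor surfaces, 1/s
k_desorb = 0.1 / 3600.0     # desorption rate from indoor surfaces, 1/s

def c_out(t):               # outdoor plume: Gaussian pulse centered at 1 h, sigma 15 min
    return np.exp(-0.5 * ((t - 3600.0) / 900.0) ** 2)

def rhs(t, y):
    c_in, m_surf = y
    return [ach * (c_out(t) - c_in) - k_sorb * c_in + k_desorb * m_surf,
            k_sorb * c_in - k_desorb * m_surf]

t_end = 8 * 3600.0
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=60.0)
dose_in = float(np.sum(sol.y[0][:-1] * np.diff(sol.t)))
dose_out = float(np.sum(c_out(sol.t[:-1]) * np.diff(sol.t)))
print("indoor/outdoor cumulative exposure over 8 h: %.2f" % (dose_in / dose_out))
```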

  14. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    SciTech Connect

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints become comparable to the zone size, Lagrangian (even non-conforming) meshes could suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L which uses an explicit treatment of joints via common plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. In the

  15. A Model for Axial Magnetic Bearings Including Eddy Currents

    NASA Technical Reports Server (NTRS)

    Kucera, Ladislav; Ahrens, Markus

    1996-01-01

    This paper presents an analytical method of modelling eddy currents inside axial bearings. The problem is solved by dividing an axial bearing into elementary geometric forms, solving the Maxwell equations for these simplified geometries, defining boundary conditions and combining the geometries. The final result is an analytical solution for the flux, from which the impedance and the force of an axial bearing can be derived. Several impedance measurements have shown that the analytical solution can fit the measured data with a precision of approximately 5%.

  16. Neighboring extremal optimal control design including model mismatch errors

    SciTech Connect

    Kim, T.J.; Hull, D.G.

    1994-11-01

    The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.

  17. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    PubMed

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.

  19. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    ERIC Educational Resources Information Center

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  20. A Model of Divorce Adjustment for Use in Family Service Agencies.

    ERIC Educational Resources Information Center

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  1. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    ERIC Educational Resources Information Center

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  2. Inelastic deformation and phenomenological modeling of aluminum including transient effect

    SciTech Connect

    Cho, C.W.

    1980-01-01

    A review was made of several phenomenological theories which have recently been proposed to describe the inelastic deformation of crystalline solids. Hart's deformation theory has many advantages, but there are disagreements with experimental deformation at stress levels below yield. A new inelastic deformation theory was proposed, introducing the concept of microplasticity. The new model consists of five deformation elements: a friction element representing a deformation element controlled by dislocation glide, a nonrecoverable plastic element representing the dislocation leakage rate over the strong dislocation barriers, a microplastic element representing the dislocation leakage rate over the weak barriers, a short range anelastic spring element representing the recoverable anelastic strain stored by piled-up dislocations against the weak barriers, and a long range anelastic spring element representing the recoverable strain stored by piled-up dislocations against the strong barriers. Load relaxation and tensile testing in the plastic range were used to determine the material parameters for the plastic friction elements. The short range and long range anelastic moduli and the material parameters for the kinetics of microplasticity were determined by the measurement of anelastic loops and by performing load relaxation tests in the microplastic region. Experimental results were compared with a computer simulation of the transient deformation behavior of commercial purity aluminum. An attempt was made to correlate the material parameters and the microstructure from TEM. Stability of material parameters during inelastic deformation was discussed and effect of metallurgical variables was examined experimentally. 71 figures, 5 tables.

  3. A data-driven model of present-day glacial isostatic adjustment in North America

    NASA Astrophysics Data System (ADS)

    Simon, Karen; Riva, Riccardo

    2016-04-01

    Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observations into the prior model. In order to assess the influence of each dataset on the final GIA prediction, the vertical motion and gravimetry datasets are incorporated into the model first independently (i.e., one dataset only), then simultaneously. Because the a priori GIA model and its associated covariance are developed by averaging predictions from a suite of forward models that varies aspects of the Earth rheology and ice sheet history, the final GIA model is not independent of forward model predictions. However, we determine the sensitivity of the final model result to the prior GIA model information by using different representations of the input model covariance. We show that when both datasets are incorporated into the inversion, the final model adequately predicts available observational constraints, minimizes the uncertainty associated with the forward modelled GIA inputs, and includes a realistic estimation of the formal error associated with the GIA process. Along parts of the North American coastline, improved predictions of the long-term (kyr

  4. Suggestion of a Numerical Model for the Blood Glucose Adjustment with Ingesting a Food

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naokatsu; Takai, Hiroshi

    In this study, we present a numerical model of the time dependence of the blood glucose value after ingesting a meal. Two numerical models are proposed in this paper to explain the digestion mechanism and the adjustment mechanism of blood glucose in the body, respectively. The models are expressed by simple equations with a transfer function and a block diagram. Additionally, the time dependence of blood glucose was measured when subjects ingested sucrose or starch. As a result, the calculated results of the models fit the measured time dependence of blood glucose very well. Therefore, the digestion model and the adjustment model are considered useful for estimating blood glucose values after ingesting meals.

  5. Testing a developmental cascade model of adolescent substance use trajectories and young adult adjustment

    PubMed Central

    LYNNE-LANDSMAN, SARAH D.; BRADSHAW, CATHERINE P.; IALONGO, NICHOLAS S.

    2013-01-01

    Developmental models highlight the impact of early risk factors on both the onset and growth of substance use, yet few studies have systematically examined the indirect effects of risk factors across several domains, and at multiple developmental time points, on trajectories of substance use and adult adjustment outcomes (e.g., educational attainment, mental health problems, criminal behavior). The current study used data from a community epidemiologically defined sample of 678 urban, primarily African American youth, followed from first grade through young adulthood (age 21) to test a developmental cascade model of substance use and young adult adjustment outcomes. Drawing upon transactional developmental theories and using growth mixture modeling procedures, we found evidence for a developmental progression from behavioral risk to adjustment problems in the peer context, culminating in a high-risk trajectory of alcohol, cigarette, and marijuana use during adolescence. Substance use trajectory membership was associated with adjustment in adulthood. These findings highlight the developmental significance of early individual and interpersonal risk factors on subsequent risk for substance use and, in turn, young adult adjustment outcomes. PMID:20883591

  6. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion.

    PubMed

    Huang, Lam O; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

    Transmission of the two parental alleles to offspring that deviates from the Mendelian ratio is termed Transmission Ratio Distortion (TRD), which occurs throughout gametic and embryonic development. TRD has been well-studied in animals, but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trios RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset, and showed consistent results with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies. PMID:27630667

  7. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion

    PubMed Central

    Huang, Lam O.; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

    Transmission of the two parental alleles to offspring that deviates from the Mendelian ratio is termed Transmission Ratio Distortion (TRD), which occurs throughout gametic and embryonic development. TRD has been well-studied in animals, but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trios RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset, and showed consistent results with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies.

  9. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension. PMID:27301005

  10. A new glacial isostatic adjustment model of the Innuitian Ice Sheet, Arctic Canada

    NASA Astrophysics Data System (ADS)

    Simon, K. M.; James, T. S.; Dyke, A. S.

    2015-07-01

    A reconstruction of the Innuitian Ice Sheet (IIS) is developed that incorporates first-order constraints on its spatial extent and history as suggested by regional glacial geology studies. Glacial isostatic adjustment modelling of this ice sheet provides relative sea-level predictions that are in good agreement with measurements of post-glacial sea-level change at 18 locations. The results indicate peak thicknesses of the Innuitian Ice Sheet of approximately 1600 m, up to 400 m thicker than the minimum peak thicknesses estimated from glacial geology studies, but between approximately 1000 and 1500 m thinner than the peak thicknesses present in previous GIA models. The thickness history of the best-fit Innuitian Ice Sheet model developed here, termed SJD15, differs from the ICE-5G reconstruction and provides an improved fit to sea-level measurements from the lowland sector of the ice sheet. Both models provide a similar fit to relative sea-level measurements from the alpine sector. The vertical crustal motion predictions of the best-fit IIS model are in general agreement with limited GPS observations, after correction for a significant elastic crustal response to present-day ice mass change. The new model provides approximately 2.7 m equivalent contribution to global sea-level rise, an increase of +0.6 m compared to the Innuitian portion of ICE-5G. SJD15 is qualitatively more similar to the recent ICE-6G ice sheet reconstruction, which appears to also include more spatially extensive ice cover in the Innuitian region than ICE-5G.

  11. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    PubMed

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods, based on objective records (i.e., politically motivated deaths), was related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  12. Two Models of Caregiver Strain and Bereavement Adjustment: A Comparison of Husband and Daughter Caregivers of Breast Cancer Hospice Patients

    ERIC Educational Resources Information Center

    Bernard, Lori L.; Guarnaccia, Charles A.

    2003-01-01

    Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…

  13. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    PubMed

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. The two experiences were not associated with each other, suggesting independent forms of social interaction. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  14. Including operational data in QMRA model: development and impact of model inputs.

    PubMed

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk). PMID:18957777
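
    As an aside for readers unfamiliar with the QMRA chain sketched above, the short Python snippet below illustrates a Monte Carlo run with a mixed source-water distribution (log-normal above the detection limit, uniform below it), a log-removal treatment step and an exponential dose-response. Every numerical value, and the choice of an exponential dose-response, are placeholders chosen for illustration, not parameters from this study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Minimal Monte Carlo sketch of a QMRA chain; all numbers (detection limit,
    # distribution parameters, removal credits, ingestion, dose-response slope)
    # are hypothetical placeholders, not values from the study.
    N, DL = 100_000, 0.01                          # iterations, detection limit (oocysts/L)
    below_dl = rng.random(N) < 0.4                 # assumed fraction of samples < DL
    conc = np.where(below_dl,
                    rng.uniform(0.0, DL, N),                      # uniform below DL
                    rng.lognormal(mean=-2.0, sigma=1.0, size=N))  # log-normal above DL
    log_removal = rng.normal(3.0, 0.5, N)          # treatment performance (log10 units)
    dose = conc * 10 ** (-log_removal) * 1.5       # assumed 1.5 L/day ingestion
    risk_daily = 1.0 - np.exp(-0.004 * dose)       # assumed exponential dose-response
    risk_annual = 1.0 - (1.0 - risk_daily) ** 365
    print(f"mean annual risk: {risk_annual.mean():.2e}")
    ```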

  15. Refining a Multidimensional Model of Community Adjustment through an Analysis of Postschool Follow-Up Data.

    ERIC Educational Resources Information Center

    Thompson, James R.; McGrew, Kevin S.; Johnson, David R.; Bruininks, Robert H.

    2000-01-01

    Survey data were collected on the life experiences and status of 388 young adults with disabilities out of school for 1 to 5 years. Results support a 7-factor model of community adjustment: personal satisfaction, employment-economic integration, community assimilation, need for support services, recreation-leisure integration, social network…

  16. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    ERIC Educational Resources Information Center

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  17. A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.

    ERIC Educational Resources Information Center

    Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.

    Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…

  18. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  19. Modeling Fluvial Incision and Transient Landscape Evolution: Influence of Dynamic Channel Adjustment

    NASA Astrophysics Data System (ADS)

    Attal, M.; Tucker, G. E.; Cowie, P. A.; Whittaker, A. C.; Roberts, G. P.

    2007-12-01

    Channel geometry exerts a fundamental control on fluvial processes. Recent work has shown that bedrock channel width (W) depends on a number of parameters, including channel slope, and is not only a function of drainage area (A) as is commonly assumed. The present work represents the first attempt to investigate the consequences, for landscape evolution, of using a static expression of channel width (W ~ A^0.5) versus a relationship that allows channels to dynamically adjust to changes in slope. We consider different models for the evolution of the channel geometry, including constant width-to-depth ratio (after Finnegan et al., Geology, v. 33, no. 3, 2005), and width-to-depth ratio varying as a function of slope (after Whittaker et al., Geology, v. 35, no. 2, 2007). We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to analyze the response of a catchment to a given tectonic disturbance. The topography of a catchment in the footwall of an active normal fault in the Apennines (Italy) is used as a template for the study. We show that, for this catchment, the transient response can be fairly well reproduced using a simple detachment-limited fluvial incision law. We also show that, depending on the relationship used to express channel width, initial steady-state topographies differ, as do transient channel width, slope, and the response time of the fluvial system. These differences lead to contrasting landscape morphologies when integrated at the scale of a whole catchment. Our results emphasize the importance of channel width in controlling fluvial processes and landscape evolution. They stress the need for using a dynamic hydraulic scaling law when modeling landscape evolution, particularly when the uplift field is non-uniform.
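
    For reference, the two width closures contrasted above can be written compactly as follows. The slope-dependent form is the constant width-to-depth-ratio scaling usually attributed to Finnegan et al. (2005) as it is commonly quoted, not taken from this abstract, so the exponents should be checked against the original paper; k_w is a coefficient, Q is discharge and S is channel slope.

    ```latex
    % Static closure versus slope-dependent ("dynamic") closure.
    \begin{align*}
      \text{static:}  \quad & W = k_w\,A^{0.5}, \\
      \text{dynamic:} \quad & W \propto Q^{3/8}\,S^{-3/16}
        \quad \text{(constant width-to-depth ratio)}.
    \end{align*}
    ```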

  20. Use of generalised Procrustes analysis for the photogrammetric block adjustment by independent models

    NASA Astrophysics Data System (ADS)

    Crosilla, Fabio; Beinat, Alberto

    The paper first reviews some aspects of the generalised Procrustes analysis (GP) and outlines the analogies with the block adjustment by independent models. On this basis, an innovative solution of the block adjustment problem by Procrustes algorithms and the related computer program implementation are presented and discussed. The main advantage of the new proposed method is that it avoids the conventional least squares solution. For this reason, linearisation algorithms and the knowledge of a priori approximate values for the unknown parameters are not required. Once the model coordinates of the tie points are available and at least three control points are known, the Procrustes algorithms can directly provide, without further information, the tie point ground coordinates and the exterior orientation parameters. Furthermore, some numerical block adjustment solutions obtained by the new method in different areas of North Italy are compared to the conventional solution. The very simple data input process, the lower memory requirements, the low computing time and the same level of accuracy that characterise the new algorithm with respect to a conventional one are verified by these tests. A block adjustment of 11 models, with 44 tie points and 14 control points, takes just a few seconds on an Intel PIII 400 MHz computer, and the total data memory required is less than twice the allocated space for the input data. This is because most of the computations are carried out on data matrices of limited size, typically 3×3.
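
    To make the Procrustes step concrete, here is a minimal NumPy sketch of a least-squares similarity transform (scale, rotation, translation) between matched model and ground coordinates; the function name and interface are illustrative and are not taken from the paper or its software.

    ```python
    import numpy as np

    def similarity_procrustes(X, Y):
        """Least-squares similarity transform mapping point set X onto Y,
        both (n, 3) arrays of matched points (model vs. ground coordinates).
        A minimal sketch of the orthogonal Procrustes step underlying
        Procrustes-based block adjustment; not the authors' implementation."""
        cx, cy = X.mean(axis=0), Y.mean(axis=0)
        X0, Y0 = X - cx, Y - cy
        U, S, Vt = np.linalg.svd(X0.T @ Y0)           # SVD of cross-covariance
        D = np.eye(3)
        D[-1, -1] = np.sign(np.linalg.det(U @ Vt))    # guard against reflection
        R = U @ D @ Vt                                # rotation (model -> ground)
        s = np.trace(np.diag(S) @ D) / np.sum(X0**2)  # optimal scale
        t = cy - s * cx @ R                           # translation
        return s, R, t

    # usage: ground ≈ s * model @ R + t for every tie point of the model
    ```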

  1. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    SciTech Connect

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
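
    The class-based elevation correction can be pictured with a short sketch like the one below; the class labels and offset values are hypothetical placeholders, not the medians or quartiles reported in the study.

    ```python
    import numpy as np

    # Illustrative per-class correction of lidar elevations in a marsh DEM;
    # offsets are placeholders, not values from the study.
    CLASS_OFFSET_M = {"high_biomass": 0.45, "low_biomass": 0.20}  # hypothetical

    def adjust_dem(lidar_z, biomass_class):
        """Subtract a class-specific vegetation bias from lidar elevations."""
        offsets = np.array([CLASS_OFFSET_M[c] for c in biomass_class])
        return lidar_z - offsets

    z = np.array([1.10, 0.95, 0.80])
    cls = ["high_biomass", "high_biomass", "low_biomass"]
    print(adjust_dem(z, cls))   # adjusted bare-earth estimates
    ```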

  2. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    DOE PAGES

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.

  3. Assessment of an adjustment factor to model radar range dependent error

    NASA Astrophysics Data System (ADS)

    Sebastianelli, S.; Russo, F.; Napolitano, F.; Baldini, L.

    2012-09-01

    Quantitative radar precipitation estimates are affected by errors determined by many causes such as radar miscalibration, range degradation, attenuation, ground clutter, variability of Z-R relation, variability of drop size distribution, vertical air motion, anomalous propagation and beam-blocking. Range degradation (including beam broadening and sampling of precipitation at an increasing altitude) and signal attenuation determine a range-dependent behavior of the error. The aim of this work is to model the range-dependent error through an adjustment factor derived from the G/R ratio trend against the range, where G and R are the corresponding rain gauge and radar rainfall amounts computed at each rain gauge location. Since range degradation and signal attenuation effects are negligible close to the radar, results show that within 40 km from the radar the overall range error is independent of the distance from Polar 55C and no range-correction is needed. Nevertheless, up to this distance, the G/R ratio can show a concave trend with the range, which is due to the melting layer interception by the radar beam during stratiform events.
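
    A minimal sketch of how such a range-dependent adjustment factor could be derived from collocated gauge and radar amounts is given below; the range binning and the use of a bin median are assumptions of this sketch, not necessarily the scheme used in the paper.

    ```python
    import numpy as np

    def range_adjustment(G, R, range_km, bins=np.arange(0, 130, 10)):
        """Median G/R adjustment factor per range bin (illustrative only)."""
        ratio = G / R                                   # gauge/radar at each gauge site
        idx = np.digitize(range_km, bins)
        centers, factors = [], []
        for b in range(1, len(bins)):
            sel = idx == b
            if sel.any():
                centers.append(0.5 * (bins[b - 1] + bins[b]))
                factors.append(np.median(ratio[sel]))   # robust bin estimate
        return np.array(centers), np.array(factors)

    # radar_adjusted = radar_field * np.interp(range_map, centers, factors)
    ```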

  4. Adjustable box-wing model for solar radiation pressure impacting GPS satellites

    NASA Astrophysics Data System (ADS)

    Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.

    2012-04-01

    One of the major uncertainty sources affecting Global Positioning System (GPS) satellite orbits is the direct solar radiation pressure. In this paper a new model for the solar radiation pressure on GPS satellites is presented that is based on a box-wing satellite model, and assumes nominal attitude. The box-wing model is based on the physical interaction between solar radiation and satellite surfaces, and can be adjusted to fit the GPS tracking data. To compensate the effects of solar radiation pressure, the International GNSS Service (IGS) analysis centers employ a variety of approaches, ranging from purely empirical models based on in-orbit behavior, to physical models based on pre-launch spacecraft structural analysis. It has been demonstrated, however, that the physical models fail to predict the real orbit behavior with sufficient accuracy, mainly due to deviations from nominal attitude, inaccurately known optical properties, or aging of the satellite surfaces. The adjustable box-wing model presented in this paper is an intermediate approach between the physical/analytical models and the empirical models. The box-wing model fits the tracking data by adjusting mainly the optical properties of the satellite's surfaces. In addition, the so called Y-bias and a parameter related to a rotation lag angle of the solar panels around their rotation axis (about 1.5° for Block II/IIA and 0.5° for Block IIR) are estimated. This last parameter, not previously identified for GPS satellites, is a key factor for precise orbit determination. For this study GPS orbits are generated based on one year (2007) of tracking data, with the processing scheme derived from the Center for Orbit Determination in Europe (CODE). Two solutions are computed, one using the adjustable box-wing model and one using the CODE empirical model. Using this year of data the estimated parameters and orbits are analyzed. The performance of the models is comparable, when looking at orbit overlap and orbit

  5. An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, A.; Tregoning, P.; Dehecq, A.

    2016-05-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Furthermore, the published spherical harmonic coefficients—which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA)—contain excessive power for degree ≥90, do not agree with physical expectations and do not represent accurately the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.

  6. Evaluating plume dispersion models: Expanding the practice to include the model physics

    SciTech Connect

    Weil, J.C.

    1994-12-31

    Plume dispersion models are used in a variety of air-quality applications including the determination of source emission limits, new source sites, etc. The cost of pollution control and siting has generated much interest in model evaluation and accuracy. Two questions are of primary concern: (1) How well does a model predict the high ground-level concentrations (GLCs) that are necessary in assessing compliance with air-quality regulations? This prompts an operational performance evaluation; (2) Is the model based on sound physical principles and does it give good predictions for the "right" reasons? This prompts a model physics evaluation. Although air-quality managers are interested primarily in operational performance, model physics should be an equally important issue. The purpose in establishing good physics is to build confidence in model predictions beyond the limited experimental range, i.e., for new source applications.

  7. Executive function and psychosocial adjustment in healthy children and adolescents: A latent variable modelling investigation.

    PubMed

    Cassidy, Adam R

    2016-01-01

    The objective of this study was to establish latent executive function (EF) and psychosocial adjustment factor structure, to examine associations between EF and psychosocial adjustment, and to explore potential developmental differences in EF-psychosocial adjustment associations in healthy children and adolescents. Using data from the multisite National Institutes of Health (NIH) magnetic resonance imaging (MRI) Study of Normal Brain Development, the current investigation examined latent associations between theoretically and empirically derived EF factors and emotional and behavioral adjustment measures in a large, nationally representative sample of children and adolescents (7-18 years old; N = 352). Confirmatory factor analysis (CFA) was the primary method of data analysis. CFA results revealed that, in the whole sample, the proposed five-factor model (Working Memory, Shifting, Verbal Fluency, Externalizing, and Internalizing) provided a close fit to the data, χ²(66) = 114.48, p < .001; RMSEA = .046; NNFI = .973; CFI = .980. Significant negative associations were demonstrated between Externalizing and both Working Memory and Verbal Fluency (p < .01) factors. A series of increasingly restrictive tests led to the rejection of the hypothesis of invariance, thereby precluding formal statistical examination of age-related differences in latent EF-psychosocial adjustment associations. Findings indicate that childhood EF skills are best conceptualized as a constellation of interconnected yet distinguishable cognitive self-regulatory skills. Individual differences in certain domains of EF track meaningfully and in expected directions with emotional and behavioral adjustment indices. Externalizing behaviors, in particular, are associated with latent Working Memory and Verbal Fluency factors. PMID:25569593

  8. Validity of methods for model selection, weighting for model uncertainty, and small sample adjustment in capture-recapture estimation.

    PubMed

    Hook, E B; Regal, R R

    1997-06-15

    In log-linear capture-recapture approaches to population size, the method of model selection may have a major effect upon the estimate. In addition, the estimate may also be very sensitive if certain cells are null or very sparse, even with the use of multiple sources. The authors evaluated 1) various approaches to the issue of model uncertainty and 2) a small sample correction for three or more sources recently proposed by Hook and Regal. The authors compared the estimates derived using 1) three different information criteria that included Akaike's Information Criterion (AIC) and two alternative formulations of the Bayesian Information Criterion (BIC), one proposed by Draper ("two pi") and one by Schwarz ("not two pi"); 2) two related methods of weighting estimates associated with models; 3) the independent model; and 4) the saturated model, with the known totals in 20 different populations studied by five separate groups of investigators. For each method, the authors also compared the estimate derived with and without the proposed small sample correction. At least in these data sets, the use of AIC appeared on balance to be preferable. The BIC formulation suggested by Draper appeared slightly preferable to that suggested by Schwarz. Adjustment for model uncertainty appears to improve results slightly. The proposed small sample correction appeared to diminish relative log bias but only when sparse cells were present. Otherwise, its use tended to increase relative log bias. Use of the saturated model (with or without the small sample correction) appears to be optimal if the associated interval is not uselessly large, and if one can plausibly exclude an all-source interaction. All other approaches led to an estimate that was too low by about one standard deviation.
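
    For readers comparing the criteria named above, the standard forms are recalled below (k fitted parameters, n sample size, L̂ the maximized likelihood); the Draper "two pi" variant is written as we read it from the abstract and should be checked against the original papers.

    ```latex
    % Model selection criteria referenced in the abstract.
    \begin{align*}
      \mathrm{AIC} &= -2\ln\hat{L} + 2k, \\
      \mathrm{BIC}_{\text{Schwarz}} &= -2\ln\hat{L} + k\ln n, \\
      \mathrm{BIC}_{\text{Draper}}  &= -2\ln\hat{L} + k\ln\!\left(\tfrac{n}{2\pi}\right).
    \end{align*}
    ```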

  9. Adjusting exposure limits for long and short exposure periods using a physiological pharmacokinetic model

    SciTech Connect

    Andersen, M.E.; MacNaughton, M.G.; Clewell, H.J. III; Paustenbach, D.J.

    1987-04-01

    This paper advocates use of a physiologically-based pharmacokinetic (PB-PK) model for determining adjustment factors for unusual exposure schedules. The PB-PK model requires data on the blood:air and tissue:blood partition coefficients, the rate of metabolism of the chemical, organ volumes, organ blood flows and ventilation rates in humans. Laboratory data on two industrially important chemicals - styrene and methylene chloride - were used to illustrate the PB-PK approach. At inhaled concentrations near their respective 8-hr Threshold Limit Value - Time-Weighted Averages, both of these chemicals are primarily eliminated from the body by metabolism. For these two chemicals, the appropriate risk indexing parameters are integrated tissue dose or total amount of parent chemical metabolized. These examples also illustrate how the model can be used to calculate risk based on various other measures of delivered dose. For the majority of volatile chemicals, the parameter most closely associated with risk is the integrated tissue dose. This analysis suggests that when pharmacokinetic data are not available, a simple inverse formula may be sufficient for adjustment in most instances and application of complex kinetic models unnecessary. At present, this PB-PK approach is recommended only for exposure periods of 4 to 16 hr/day, because the mechanisms of toxicity for some chemicals may vary for very short- or very long-term exposures. For these altered schedules, more biological information on recovery in rest periods and changing mechanisms of toxicity are necessary before any adjustment is attempted.
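
    The "simple inverse formula" mentioned near the end of the abstract is, in its most common reading, a rescaling of the 8-hour limit by the daily exposure duration, sketched below; this is our interpretation, not necessarily the authors' exact expression, and h denotes the hours of exposure per day in the unusual schedule.

    ```latex
    % Common inverse daily adjustment of an 8-hour occupational exposure limit.
    \[
      \mathrm{TLV}_{\text{adjusted}} \;=\; \mathrm{TLV}_{8\,\mathrm{h}} \times \frac{8}{h},
      \qquad 4 \le h \le 16 .
    \]
    ```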

  10. Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment

    NASA Astrophysics Data System (ADS)

    Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.

    2016-08-01

    This paper is a contribution to lithium-ion batteries modelling taking into account aging effects. It first analyses the impact of aging on electrode stoichiometry and then on lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, but also the whole cell equilibrium potential can be modelled by a polynomial that requires only one adjustment parameter during aging. An adjustment algorithm, based on the idea that for two fixed OCVs, the state of charge between these two equilibrium states is unique for a given aging level, is then proposed. Its efficiency is evaluated on a battery pack constituted of four cells.
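
    A minimal sketch of the modelling idea follows, with the single ageing adjustment taken (as an assumption of this sketch) to be a shift of the state-of-charge argument of a fixed polynomial; the coefficients are placeholders, not values fitted to any cell.

    ```python
    import numpy as np

    # Hypothetical OCV(SOC) polynomial coefficients, in increasing order (V).
    coeffs = np.array([3.0, 1.4, -2.1, 1.2])

    def ocv(soc, ageing_shift=0.0):
        """OCV as a polynomial of SOC with one ageing adjustment parameter."""
        x = np.clip(soc - ageing_shift, 0.0, 1.0)
        return np.polynomial.polynomial.polyval(x, coeffs)

    print(ocv(0.5), ocv(0.5, ageing_shift=0.05))  # fresh vs. aged cell
    ```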

  11. A stress and coping model of adjustment to caring for an adult with mental illness.

    PubMed

    Mackay, Christina; Pakenham, Kenneth I

    2012-08-01

    This study investigated the utility of a stress and coping framework for identifying factors associated with adjustment to informal caregiving to adults with mental illness. Relations between stress and coping predictors and negative (distress) and positive (positive affect, life satisfaction, benefit finding, health) carer adjustment outcomes were examined. A total of 114 caregivers completed questionnaires. Predictors included relevant background variables (carer and care recipient characteristics and caregiving context), coping resources (optimism, social support, carer-care recipient relationship quality), appraisal (threat, control, challenge) and coping strategies (problem-focused, avoidance, acceptance, meaning-focused). Results indicated that after controlling for relevant background variables (burden, caregiving frequency, care recipient symptom unpredictability), better caregiver adjustment was related to higher social support and optimism, better quality of carer-care recipient relationship, lower threat and higher challenge appraisals, and less reliance on avoidance coping, as hypothesised. Coping resources emerged as the most consistent predictor of adjustment. Findings support the utility of stress and coping theory in identifying risk and protective factors associated with adaptation to caring for an adult with mental illness.

  12. NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia

    NASA Astrophysics Data System (ADS)

    Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas

    2016-04-01

    Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.

  13. Radar adjusted data versus modelled precipitation: a case study over Cyprus

    NASA Astrophysics Data System (ADS)

    Casaioli, M.; Mariani, S.; Accadia, C.; Gabella, M.; Michaelides, S.; Speranza, A.; Tartaglione, N.

    2006-01-01

    In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events, which occurred over the island of Cyprus, by means of numerical atmospheric models were performed. One of the aims of the project was indeed the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation BOlogna Limited Area Model (BOLAM) forecast was compared with the available observations reconstructed from ground-based radar data and estimated by rain gauge data. Since radar data may be affected by errors depending on the distance from the radar, these data could be range-adjusted by using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, in this work, two observational fields were employed: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. Skill score results show some differences when using the two observational fields. CRA results are instead in good agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to the two observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern more than the non-shifted forecast one. However, some questions, especially regarding the effect of other range adjustment techniques, remain open and need to be addressed in future work.

  14. Stress and Personal Resource as Predictors of the Adjustment of Parents to Autistic Children: A Multivariate Model

    ERIC Educational Resources Information Center

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-01-01

    The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. 176 parents of children aged between 6 and 16 diagnosed with PDD answered several questionnaires…

  15. An assessment of the ICE6G_C (VM5A) glacial isostatic adjustment model

    NASA Astrophysics Data System (ADS)

    Purcell, Anthony; Tregoning, Paul; Dehecq, Amaury

    2016-04-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a) [Peltier et al., 2015, Argus et al. 2014] is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology and, of course, geodynamics (Earth rheology studies). In this presentation I will assess some aspects of the ICE6G_C(VM5a) model and the accompanying published data sets. I will demonstrate that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Further, the published spherical harmonic coefficients - which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA) - will be shown to contain excessive power for degree ≥ 90, to be physically implausible and to not represent accurately the ICE6G_C(VM5a) model. The excessive power in the high degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. [2011] is applied but, when correct Stokes' coefficients are used, the empirical relationship will be shown to produce excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. [2011]. Finally, a global radial velocity field for the present-day GIA signal, and corresponding Stokes coefficients will be presented for the ICE6G_C ice model history using the VM5a rheology model. These results have been obtained using the ANU group's CALSEA software package and can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals without any of the shortcomings of the previously published data-sets. We denote the new data sets ICE6G_ANU.

  16. Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.

    PubMed

    Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao

    2016-05-18

    To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal model (IPFC) was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model effectively can adjust the magnitude of the interfacial free energy, but does not affect the equilibrium phase diagram and the interfacial energy anisotropy. The IPFC model overcame the limitation that the interfacial free energy of the original PFC model is much less than the theoretical results. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the viewpoint of simulation, we proceeded with an in situ observation of the process of cluster fluctuation and obtained quite similar snapshots to colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time, and the nucleation rate remains constant after a period of relaxation, which are consistent with experimental observations. The linear relation between logarithmic nucleation rate and reciprocal driving force also conforms to the steady state nucleation theory.

  17. Adjusting for Network Size and Composition Effects in Exponential-Family Random Graph Models.

    PubMed

    Krivitsky, Pavel N; Handcock, Mark S; Morris, Martina

    2011-07-01

    Exponential-family random graph models (ERGMs) provide a principled way to model and simulate features common in human social networks, such as propensities for homophily and friend-of-a-friend triad closure. We show that, without adjustment, ERGMs preserve density as network size increases. Density invariance is often not appropriate for social networks. We suggest a simple modification based on an offset which instead preserves the mean degree and accommodates changes in network composition asymptotically. We demonstrate that this approach allows ERGMs to be applied to the important situation of egocentrically sampled data. We analyze data from the National Health and Social Life Survey (NHSLS). PMID:21691424

  18. Adjusting for Network Size and Composition Effects in Exponential-Family Random Graph Models

    PubMed Central

    Krivitsky, Pavel N.; Handcock, Mark S.; Morris, Martina

    2011-01-01

    Exponential-family random graph models (ERGMs) provide a principled way to model and simulate features common in human social networks, such as propensities for homophily and friend-of-a-friend triad closure. We show that, without adjustment, ERGMs preserve density as network size increases. Density invariance is often not appropriate for social networks. We suggest a simple modification based on an offset which instead preserves the mean degree and accommodates changes in network composition asymptotically. We demonstrate that this approach allows ERGMs to be applied to the important situation of egocentrically sampled data. We analyze data from the National Health and Social Life Survey (NHSLS). PMID:21691424
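
    As we read these two records, the proposed adjustment is an offset on the edge count with fixed coefficient -log n, so that mean degree rather than density is preserved as the number of nodes n grows; the notation below is ours, not quoted from the paper.

    ```latex
    % ERGM with a network-size offset on the edge count |y|;
    % g(y) are the usual sufficient statistics with parameters theta.
    \[
      P_\theta(Y = y) \;\propto\;
      \exp\!\bigl( -\log(n)\,\lvert y\rvert \;+\; \theta^{\top} g(y) \bigr).
    \]
    ```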

  19. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  20. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
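
    The second, adjustment step of the two-step approach can be summarized as rescaling the predicted abundance by detection probability and by the effectively sampled area; the symbols below are ours, and the exact form should be checked against the paper.

    ```latex
    % Detection- and area-adjusted abundance: p is detection probability,
    % a the area effectively sampled and A the area of the prediction unit.
    \[
      \hat{N}_{\text{adjusted}} \;=\; \frac{\hat{N}_{\text{predicted}}}{p}\cdot\frac{A}{a}.
    \]
    ```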

  1. Dynamically adjustable foot-ground contact model to estimate ground reaction force during walking and running.

    PubMed

    Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum

    2016-03-01

    Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model whose foot-ground reaction elements were dynamically adjusted according to vertical displacement and anterior-posterior speed between the foot and ground was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model with dynamically adjustable shear reaction elements according to the input vertical force was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure. The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0m/s) with 4.2, 1.3, and 5.7% BW for anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate shear forces of the GRF and increase accuracy for estimation of joint kinetics. PMID:26979885

  2. Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools

    NASA Astrophysics Data System (ADS)

    Kollo, Karin; Spada, Giorgio; Vermeer, Martin

    2013-04-01

    Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice-covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations. For Fennoscandia the BIFROST dataset (Lidberg, 2010) was used, and for North America the dataset of Sella (2007). We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice model ICE-5G (Peltier, 2004) and the ice model denoted ANU05 (Fleming and Lambeck, 2004, and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. First, the sensitivity to the maximum harmonic degree was tested in order to reduce the computation time, using nominal viscosity values and pre-defined lithosphere thickness models while varying the maximum harmonic degree. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by less than 10%, the lower harmonic degree may be used. From this test, a maximum harmonic degree of 72 was chosen for the calculations, as larger values did not significantly modify the results and the computational time remained reasonable. Second, the GIA computations were performed to find the model that best fits the GNSS-based velocity field in the target areas. In order to find the best-fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on the horizontal and vertical velocity rates from GIA modelling was studied. For every
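
    A small sketch of the harmonic-degree selection logic described above: compute the chi-square misfit between GNSS velocities and the predictions for each truncation degree, then accept the lowest degree whose misfit is within 10% of the best one. The uplift rates and degree values below are invented purely for illustration.

      import numpy as np

      def chi_square(obs, pred, sigma):
          """Chi-square misfit between GNSS velocities and GIA-model predictions."""
          obs, pred, sigma = map(np.asarray, (obs, pred, sigma))
          return np.sum(((obs - pred) / sigma) ** 2)

      def pick_max_degree(obs, sigma, predictions_by_degree, tol=0.10):
          """Choose the lowest spherical-harmonic truncation whose misfit lies within
          `tol` (10%) of the best misfit among the tested degrees."""
          misfits = {lmax: chi_square(obs, pred, sigma)
                     for lmax, pred in predictions_by_degree.items()}
          best = min(misfits.values())
          acceptable = [l for l, m in sorted(misfits.items()) if m <= best * (1.0 + tol)]
          return acceptable[0], misfits

      # Toy example with made-up uplift rates (mm/yr) for three truncation degrees
      obs = np.array([9.1, 7.4, 3.2, 1.0])
      sigma = np.full(4, 0.5)
      preds = {48: obs + 0.6, 72: obs + 0.205, 128: obs + 0.20}
      print(pick_max_degree(obs, sigma, preds))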

  3. Impacts of Parameters Adjustment of Relativistic Mean Field Model on Neutron Star Properties

    NASA Astrophysics Data System (ADS)

    Kasmudin; Sulaksono, A.

    An analysis of the effects of parameter adjustment in the isovector and isoscalar sectors of the effective-field-based relativistic mean field (E-RMF) model on the properties of symmetric nuclear matter and neutron-rich matter has been performed. The impacts of the adjustment on slowly rotating neutron stars are systematically investigated. It is found that the mass-radius relation obtained from the adjusted parameter set G2** is compatible not only with the neutron star masses of 4U 0614+09 and 4U 1636-536, but also with those from thermal radiation measurements of RX J1856 and with the radius range of the canonical neutron star X7 in 47 Tuc. It is also found that the moment of inertia of PSR J0737-3039A and the gravitational-wave strain amplitude of PSR J0437-4715 in the Earth's vicinity, as predicted by the E-RMF parameter sets used, are in reasonable agreement with the constraints extracted from these observations and from isospin diffusion data.

  4. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    PubMed

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  5. A self-adjusted Monte Carlo simulation as a model for financial markets with central regulation

    NASA Astrophysics Data System (ADS)

    Horváth, Denis; Gmitra, Martin; Kuscsik, Zoltán

    2006-03-01

    Properties of the self-adjusted Monte Carlo algorithm applied to the 2d Ising ferromagnet are studied numerically. An endogenous feedback form expressed in terms of instant running averages is suggested in order to generate a biased random walk of the temperature that converges to criticality without external tuning. The robustness of the stationary regime with respect to partial accessibility of the information is demonstrated. Several statistical and scaling aspects have been identified which allow us to establish an alternative spin lattice model of the financial market. It turns out that our model, like the model suggested by Bornholdt [Int. J. Mod. Phys. C 12 (2001) 667], may be described by a Lévy-type stationary distribution of feedback variations with a unique exponent α1 ∼ 3.3. However, the differences in Hurst exponents suggest that the resemblance between the studied models is non-trivial.
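
    A minimal sketch of a self-adjusted Monte Carlo run, assuming a feedback rule in which a running average of the absolute magnetization nudges the temperature up in the ordered phase and down in the disordered phase; the specific rule, target and rates are illustrative guesses, not the published feedback form.

      import numpy as np

      rng = np.random.default_rng(0)
      L = 32
      spins = rng.choice([-1, 1], size=(L, L))
      T = 3.0                                   # starting temperature
      m_run = 0.0                               # running average of |magnetization|
      alpha, eta, m_target = 0.01, 0.005, 0.4   # feedback parameters (assumptions)

      def metropolis_sweep(spins, T):
          """One Metropolis sweep of the 2-D Ising model with periodic boundaries."""
          n = spins.shape[0]
          for _ in range(n * n):
              i, j = rng.integers(n, size=2)
              nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                    + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
              dE = 2.0 * spins[i, j] * nb
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  spins[i, j] *= -1
          return spins

      for sweep in range(500):
          spins = metropolis_sweep(spins, T)
          m = abs(spins.mean())
          m_run = (1 - alpha) * m_run + alpha * m      # instant running average
          # Endogenous feedback (illustrative rule): an ordered lattice (large |m|)
          # pushes T up, a disordered one pushes it down, so T random-walks toward
          # the critical region without external tuning.
          T = max(T + eta * (m_run - m_target) + 0.001 * rng.normal(), 0.1)

      print("final temperature:", round(T, 3), "(T_c of the 2-D Ising model is about 2.269)")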

  6. Including Microbial Acclimation in Carbon Cycle Models: Letting Data Guide Model Development (Invited)

    NASA Astrophysics Data System (ADS)

    Mayes, M. A.; Wang, G.; Tang, G.; Xu, X.; Jagadamma, S.

    2013-12-01

    Carbon cycle models are traditionally parameterized with ad hoc soil pools, empirical decay constants and first-order decomposition as a function of substrate supply. Decomposition of vegetative and faunal inputs, however, involves enzymatically-facilitated depolymerization by the microbial community. Traditional soil models are calibrated to match existing distribution of soil carbon, but they are not parameterized to predict the response of soil carbon to climate change due to microbial community shifts or physiological changes, i.e., acclimation. As an example, we will show how the temperature sensitivity of carbon use efficiency can influence the decomposition of different substrates and affect the release of CO2 from soil organic matter. Acclimation to warmer conditions could also involve shifts in microbial community composition or function, e.g., fungi: bacteria ratio shift. Experimental data is needed to decide how to parameterize models to accommodate functional or compositional changes. We will explore documented cases of microbial acclimation to warming, discuss methods to include microbial acclimation in carbon cycle models, and explore the need for additional experimental data to validate the next generation of microbially-facilitated carbon cycle models.
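
    As an illustration of the carbon-use-efficiency argument above, the sketch below compares CO2 release from a substrate pool when CUE declines with temperature versus when it is held fixed, a crude stand-in for an acclimated community; all parameter values are illustrative assumptions, not a calibrated parameterization.

      import numpy as np

      def decomposition_rate(T_c, k_ref=0.02, q10=2.0, T_ref=15.0):
          """First-order decomposition rate (per day) with a Q10 temperature response."""
          return k_ref * q10 ** ((T_c - T_ref) / 10.0)

      def carbon_use_efficiency(T_c, cue_ref=0.31, slope=-0.016, T_ref=15.0):
          """Microbial carbon use efficiency declining with warming (slope per deg C)."""
          return np.clip(cue_ref + slope * (T_c - T_ref), 0.05, 0.6)

      def co2_flux(substrate_c, T_c, acclimated=False):
          """CO2 released per day from a substrate pool; with acclimated=True, CUE is
          held at its reference value, a crude stand-in for an acclimated community."""
          cue = carbon_use_efficiency(15.0 if acclimated else T_c)
          uptake = decomposition_rate(T_c) * substrate_c
          return (1.0 - cue) * uptake

      pool = 1000.0  # g C per m2
      for T in (15.0, 20.0, 25.0):
          print(T, "C:", round(co2_flux(pool, T), 2), "vs acclimated", round(co2_flux(pool, T, True), 2))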

  7. Principal Component Analysis of breast DCE-MRI Adjusted with a Model Based Method

    PubMed Central

    Eyal, Erez; Badikhi, Daria; Furman-Haran, Edna; Kelcz, Fredrick; Kirshenbaum, Kevin J.; Degani, Hadassa

    2010-01-01

    Purpose To investigate a fast, objective and standardized method for analyzing breast DCE-MRI, applying principal component analysis (PCA) adjusted with a model-based method. Materials and Methods 3D gradient-echo dynamic contrast-enhanced breast images of 31 malignant and 38 benign lesions, recorded on a 1.5 Tesla scanner, were retrospectively analyzed by PCA and by the model-based three-time-point (3TP) method. Results Intensity scaled (IS) and enhancement scaled (ES) datasets were reduced by PCA, yielding a 1st IS-eigenvector that captured the signal variation between fat and fibroglandular tissue, two IS-eigenvectors and the first two ES-eigenvectors that captured contrast-enhancement changes, whereas the remaining eigenvectors captured predominantly noise. Rotation of the two contrast-related eigenvectors led to a high congruence between the projection coefficients and the 3TP parameters. The ES-eigenvectors and the rotation angle were highly reproducible across malignant lesions, enabling calculation of a general rotated eigenvector base. ROC curve analysis of the projection coefficients of the two eigenvectors indicated high sensitivity of the 1st rotated eigenvector to detect lesions (AUC>0.97) and of the 2nd rotated eigenvector to differentiate malignant from benign lesions (AUC=0.87). Conclusion PCA adjusted with a model-based method provided a fast and objective computer-aided diagnostic tool for breast DCE-MRI. PMID:19856419
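
    A compact sketch of the two PCA steps described above (projection of dynamic curves onto leading eigenvectors, followed by a fixed rotation of the first two); the synthetic curves, component count and rotation angle are placeholders rather than values from the study.

      import numpy as np

      def pca_projection(curves, n_components=2):
          """PCA of enhancement-scaled dynamic curves (rows = voxels, columns = time
          points); returns the leading eigenvectors and the projection coefficients."""
          centered = curves - curves.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          eigvecs = vt[:n_components]              # principal directions in time
          return eigvecs, centered @ eigvecs.T

      def rotate_pair(eigvecs, angle_rad):
          """Rotate the first two eigenvectors by a fixed angle, mimicking the alignment
          of the PCA basis with model-based parameters (the angle here is arbitrary)."""
          c, s = np.cos(angle_rad), np.sin(angle_rad)
          return np.array([[c, -s], [s, c]]) @ eigvecs

      # Synthetic example: 200 voxels, 5 dynamic time points (wash-in plus wash-out slope)
      rng = np.random.default_rng(4)
      t = np.linspace(0, 1, 5)
      curves = (rng.uniform(0.5, 1.5, (200, 1)) * (1 - np.exp(-5 * t))
                + rng.uniform(-0.3, 0.3, (200, 1)) * t
                + 0.02 * rng.normal(size=(200, 5)))
      eigvecs, coeffs = pca_projection(curves)
      rotated = rotate_pair(eigvecs, np.deg2rad(30.0))
      print(coeffs.shape, rotated.shape)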

  8. Multivariate Risk Adjustment of Primary Care Patient Panels in a Public Health Setting: A Comparison of Statistical Models.

    PubMed

    Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani

    2016-01-01

    We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss to follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R = 0.20). This model, designed specifically for safety net patients, may prove useful for panel adjustment in other public health settings.
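
    A sketch of the two-part idea on synthetic data, assuming a logistic model for any visit combined with a Poisson model on users as a stand-in for the positive part, with weights defined as predicted visits relative to the mean; variable names and the exact specification are illustrative, not the published model.

      import numpy as np
      from sklearn.linear_model import LogisticRegression, PoissonRegressor

      rng = np.random.default_rng(1)
      n = 5000

      # Synthetic panel: age/gender category, chronic-condition count, homelessness,
      # and a loss-to-follow-up flag
      X = np.column_stack([
          rng.integers(0, 5, n),
          rng.poisson(1.0, n),
          rng.binomial(1, 0.15, n),
          rng.binomial(1, 0.10, n),
      ])
      visits = rng.poisson(np.exp(-0.5 + 0.4 * X[:, 1] + 0.3 * X[:, 2] - 1.5 * X[:, 3]))

      # Part 1: probability of any primary-care visit
      part1 = LogisticRegression(max_iter=1000).fit(X, (visits > 0).astype(int))

      # Part 2: expected visits among users (Poisson stand-in for the positive part)
      pos = visits > 0
      part2 = PoissonRegressor(max_iter=1000).fit(X[pos], visits[pos])

      expected_visits = part1.predict_proba(X)[:, 1] * part2.predict(X)

      # Patient weights and the adjusted size of one hypothetical provider panel
      weights = expected_visits / expected_visits.mean()
      panel = rng.choice(n, 1200, replace=False)
      print("raw panel size:", len(panel), " adjusted panel size:", round(weights[panel].sum(), 1))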

  10. Validation, Replication, and Sensitivity Testing of Heckman-Type Selection Models to Adjust Estimates of HIV Prevalence

    PubMed Central

    Clark, Samuel J.; Houle, Brian

    2014-01-01

    A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs, and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate change and age-structure change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys. PMID:25402333
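
    A minimal two-step Heckman sketch on synthetic data, with a probit consent equation (including an instrument) and an outcome equation augmented by the inverse Mills ratio; a linear probability model is used for the outcome purely for simplicity, whereas the cited work treats the binary HIV outcome with a probit-type selection specification.

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm

      rng = np.random.default_rng(2)
      n = 4000

      # Synthetic data: x = respondent covariate, z = interviewer identity (the usual
      # exclusion restriction); correlated errors create non-random refusal to test.
      x = rng.normal(size=n)
      z = rng.normal(size=n)
      u, e = rng.multivariate_normal([0, 0], [[1, -0.5], [-0.5, 1]], size=n).T
      tested = (0.3 + 0.8 * z + 0.2 * x + u > 0).astype(int)
      hiv = (0.1 * x + e > 1.0).astype(int)

      # Step 1: probit for consenting to test, including the instrument z
      exog_sel = sm.add_constant(np.column_stack([x, z]))
      sel = sm.Probit(tested, exog_sel).fit(disp=0)
      xb = exog_sel @ sel.params
      imr = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio

      # Step 2: outcome equation on the tested subsample, augmented with the IMR
      mask = tested == 1
      out = sm.OLS(hiv[mask], sm.add_constant(np.column_stack([x[mask], imr[mask]]))).fit()

      naive = hiv[mask].mean()
      # Selection-corrected prevalence: predict at the mean covariate with the IMR term
      # removed, i.e. as if everyone had been tested
      corrected = out.params[0] + out.params[1] * x.mean()
      print(f"naive prevalence {naive:.3f} vs selection-adjusted {corrected:.3f}")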

  11. Parental Depressive Symptoms and Adolescent Adjustment: A Prospective Test of an Explanatory Model for the Role of Marital Conflict

    PubMed Central

    Cummings, E. Mark; Cheung, Rebecca Y. M.; Koss, Kalsea; Davies, Patrick T.

    2014-01-01

    Despite calls for process-oriented models for child maladjustment due to heightened marital conflict in the context of parental depressive symptoms, few longitudinal tests of the mechanisms underlying these relations have been conducted. Addressing this gap, the present study examined multiple factors longitudinally that link parental depressive symptoms to adolescent adjustment problems, building on a conceptual model informed by emotional security theory (EST). Participants were 320 families (158 boys, 162 girls), including mothers and fathers, who took part when their children were in kindergarten (T1), second (T2), seventh (T3), eighth (T4) and ninth (T5) grades. Parental depressive symptoms (T1) were related to changes in adolescents’ externalizing and internalizing symptoms (T5), as mediated by parents’ negative emotional expressiveness (T2), marital conflict (T3), and emotional insecurity (T4). Evidence was thus advanced for emotional insecurity as an explanatory process in the context of parental depressive symptoms. PMID:24652484

  12. First-Year Village: Experimenting with an African Model for First-Year Adjustment and Support in South Africa

    ERIC Educational Resources Information Center

    Speckman, McGlory

    2016-01-01

    Predicated on the principles of success and contextuality, this chapter shares an African perspective on a first-year adjustment programme, known as First-Year Village, including its potential and the challenges of establishing it.

  13. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    NASA Astrophysics Data System (ADS)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multivariant physical and mathematical models of a control system is presented, together with its application to the adjustment of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  14. Stress and personal resource as predictors of the adjustment of parents to autistic children: a multivariate model.

    PubMed

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-07-01

    The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. A total of 176 parents of children aged 6 to 16 diagnosed with PDD answered several questionnaires measuring parental stress, personal resources (sense of coherence, locus of control, social support), adjustment (mental health and marriage quality) and the child's autism symptoms. Path analysis showed that sense of coherence, internal locus of control, social support and quality of marriage increase the ability to cope with the stress of parenting an autistic child. Directions for further research are suggested.

  15. Size-selection initiation model extended to include shape and random factors

    SciTech Connect

    Trenholme, J B; Feit, M D; Rubenchik, A M

    2005-11-02

    The Feit-Rubenchik size-selection damage model has been extended in a number of ways. More realistic thermal deposition profiles have been added. Non-spherical shapes (rods and plates) have been considered, with allowance for their orientation dependence. Random variations have been taken into account. An explicit form for the change of absorptivity with precursor size has been added. A simulation tool called GIDGET has been built to allow adjustment of the many possible parameters in order to fit experimental data of initiation density as a function of fluence and pulse duration. The result is a set of constraints on the possible properties of initiation precursors.

  16. Risk-adjusted capitation funding models for chronic disease in Australia: alternatives to casemix funding.

    PubMed

    Antioch, K M; Walsh, M K

    2002-01-01

    Under Australian casemix funding arrangements that use Diagnosis-Related Groups (DRGs) the average price is policy based, not benchmarked. Cost weights are too low for State-wide chronic disease services. Risk-adjusted Capitation Funding Models (RACFM) are feasible alternatives. A RACFM was developed for public patients with cystic fibrosis treated by an Australian Health Maintenance Organization (AHMO). Adverse selection is of limited concern since patients pay solidarity contributions via the Medicare levy with no premium contributions to the AHMO. Sponsors paying premium subsidies are the State of Victoria and the Federal Government. Cost per patient is the dependent variable in the multiple regression. Data on DRG 173 (cystic fibrosis) patients were assessed for heteroskedasticity, multicollinearity, structural stability and functional form. Stepwise linear regression excluded non-significant variables. Significant variables were 'emergency' (1276.9), 'outlier' (6377.1), 'complexity' (3043.5), 'procedures' (317.4) and the constant (4492.7) (R(2)=0.21, SE=3598.3, F=14.39, Prob<0.0001). Regression coefficients represent the additional per patient costs summed to the base payment (constant). The model explained 21% of the variance in cost per patient. The payment rate is adjusted by a best practice annual admission rate per patient. The model is a blended RACFM for in-patient, out-patient, Hospital In The Home, and Fee-For-Service Federal payments for drugs and medical services; lump sum lung transplant payments; and risk sharing through cost (loss) outlier payments. State and Federally funded home and palliative services are 'carved out'. The model, which has national application via Coordinated Care Trials and by Australian States for RACFMs, may be instructive for Germany, which plans to use Australian DRGs for casemix funding. The capitation alternative for chronic disease can improve equity, allocative efficiency and distributional justice. The use of Diagnostic Cost

  17. Comparative Flow Dynamics in Two In Vitro Models of an Adjustable Systemic-Pulmonary Artery Shunt

    NASA Astrophysics Data System (ADS)

    Brown, Tim; Bates, Nathan; Douglas, William; Knapp, Charles; Jacob, Jamey

    2002-11-01

    Systemic-pulmonary artery (SPA) shunts are connections that exist to augment pulmonary blood flow in neonates born with single ventricle physiology. An appropriate balance between the systemic and pulmonary circulations is crucial to their survival. To achieve this, an adjustable SPA shunt is being developed at our institution that consists of a 4 mm PTFE tube with a screw plunger mechanism to achieve the desired change in flow rate by increasing pulmonary resistance. To determine the effect this mechanism has on flow patterns, two in vitro models were created; an idealized model with an axisymmetric constriction and a model developed from flow phantoms of the actual shunt under various actuations. These models were used to measure the instantaneous velocity and vorticity fields using PIV. Recirculation regions downstream of the constriction were observed for both models. For the idealized model, a separation region persisted for approximately 2-5 diameters downstream with a flow range between 600-850 cc/min, corresponding to in vivo conditions and a Re of approximately 1000-1500. In the realistic test sections, shedding vortices were visible 2.5 diameters downstream on the opposing side of the imposed constriction. The flow field structure and wall skin friction of the two cases under various conditions will be discussed.

  18. Measuring and modeling the lifetime of nitrous oxide including its variability

    NASA Astrophysics Data System (ADS)

    Prather, Michael J.; Hsu, Juno; DeLuca, Nicole M.; Jackman, Charles H.; Oman, Luke D.; Douglass, Anne R.; Fleming, Eric L.; Strahan, Susan E.; Steenrod, Stephen D.; Søvde, O. Amund; Isaksen, Ivar S. A.; Froidevaux, Lucien; Funke, Bernd

    2015-06-01

    The lifetime of nitrous oxide, the third-most-important human-emitted greenhouse gas, is based to date primarily on model studies or scaling to other gases. This work calculates a semiempirical lifetime based on Microwave Limb Sounder satellite measurements of stratospheric profiles of nitrous oxide, ozone, and temperature; laboratory cross-section data for ozone and molecular oxygen plus kinetics for O(1D); the observed solar spectrum; and a simple radiative transfer model. The result is 116 ± 9 years. The observed monthly-to-biennial variations in lifetime and tropical abundance are well matched by four independent chemistry-transport models driven by reanalysis meteorological fields for the period of observation (2005-2010), but all these models overestimate the lifetime due to lower abundances in the critical loss region near 32 km in the tropics. These models plus a chemistry-climate model agree on the nitrous oxide feedback factor on its own lifetime of 0.94 ± 0.01, giving N2O perturbations an effective residence time of 109 years. Combining this new empirical lifetime with model estimates of residence time and preindustrial lifetime (123 years) adjusts our best estimates of the human-natural balance of emissions today and improves the accuracy of projected nitrous oxide increases over this century.

  19. The use of satellites in gravity field determination and model adjustment

    NASA Astrophysics Data System (ADS)

    Visser, Petrus Nicolaas Anna Maria

    1992-06-01

    Methods to improve gravity field models of the Earth with available data from satellite observations are proposed and discussed. In principle, all of the satellite observation types mentioned give information on satellite orbit perturbations and, through them, on the Earth's gravity field, since satellite orbits are affected most strongly by the Earth's gravity field. Therefore, two subjects are addressed: representation forms of the gravity field of the Earth and the theory of satellite orbit perturbations. An analytical orbit perturbation theory is presented and shown to be sufficiently accurate for describing satellite orbit perturbations if certain conditions are fulfilled. Gravity field adjustment experiments using the analytical orbit perturbation theory are discussed using real satellite observations. These observations consisted of Seasat laser range measurements and crossover differences, and of Geosat altimeter measurements and crossover differences. A look into the future, particularly relating to the ARISTOTELES (Applications and Research Involving Space Techniques for the Observation of the Earth's field from Low Earth Orbit Spacecraft) mission, is given.

  20. The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A new class of turbulence model is described for wall-bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models; the model does not require specification of wall distance and has similar or reduced computational effort compared with these models.

  1. UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA

    SciTech Connect

    Davis, S.C.

    2000-11-16

    The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.

  2. The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment

    ERIC Educational Resources Information Center

    Borja, Susan E.; Callahan, Jennifer L.

    2009-01-01

    This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…

  3. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

    Travelers' route adjustment behaviors in a congested road traffic network are acknowledged to constitute a dynamic game process between them. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors; PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most existing models have limitations, e.g., the flow over-adjustment problem of the discrete PSAP model and the reliance on absolute cost differences for route adjustment. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to alternatives with lower cost at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among the user equilibrium, the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached user equilibrium by detecting the stationary or non-stationary state of the link flow pattern. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
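
    A toy sketch of a relative-proportion route adjustment on two parallel routes with BPR costs; the update rule shifts flow toward cheaper routes in proportion to the relative (not absolute) cost gap. The exact rePRAP functional form may differ, and the step size and network are invented.

      import numpy as np

      def bpr_cost(flow, free_time, capacity):
          """Standard BPR link travel time used as the route cost."""
          return free_time * (1.0 + 0.15 * (flow / capacity) ** 4)

      def route_adjustment_step(flows, costs, step=0.5):
          """One day-to-day update: each route sheds flow toward every cheaper
          alternative at a rate driven by the relative cost difference (c_i - c_j) / c_i."""
          n = len(flows)
          new = flows.copy()
          for i in range(n):
              for j in range(n):
                  if costs[i] > costs[j]:
                      shift = step * flows[i] * (costs[i] - costs[j]) / costs[i] / n
                      new[i] -= shift
                      new[j] += shift
          return new

      # Two parallel routes serving a fixed demand of 100 vehicles
      free_time = np.array([10.0, 12.0])
      capacity = np.array([60.0, 80.0])
      flows = np.array([100.0, 0.0])
      for day in range(200):
          flows = route_adjustment_step(flows, bpr_cost(flows, free_time, capacity))
      print("equilibrium flows:", np.round(flows, 1),
            "costs:", np.round(bpr_cost(flows, free_time, capacity), 2))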

  4. A structural model of the relationships among self-efficacy, psychological adjustment, and physical condition in Japanese advanced cancer patients.

    PubMed

    Hirai, Kei; Suzuki, Yoko; Tsuneto, Satoru; Ikenaga, Masayuki; Hosaka, Takashi; Kashiwagi, Tetsuo

    2002-01-01

    We investigated in detail the relationships among physical condition, self-efficacy and psychological adjustment in patients with advanced cancer in Japan. The sample consisted of 85 (42 male and 43 female) advanced cancer patients. Interviews were conducted using several measurement scales, including the Self-efficacy Scale for Advanced Cancer (SEAC) and the Hospital Anxiety and Depression Scale (HADS). Karnofsky Performance Status (KPS) and medication status were also recorded from evaluations by physicians. We used structural equation modeling (SEM) for the statistical analysis. The analysis revealed that the model, including three self-efficacy subscales, depression, anxiety, KPS, meal and liquid intake, prognosis, and three latent variables ('Self-efficacy', 'Emotional Distress', and 'Physical Condition'), fit the data (chi-square(24)=28.67, p=0.23; GFI=0.93; CFI=0.98; RMSEA=0.05). In this model, self-efficacy accounted for 71% of the variance in emotional distress, and physical condition accounted for 8% of the variance in self-efficacy. Overall, our findings clearly suggest that close relationships exist among physical condition, self-efficacy and emotional distress. That is, patients in good physical condition had high self-efficacy, and patients with high self-efficacy were less emotionally distressed. These results imply that psychological interventions that emphasize self-efficacy would be effective for advanced cancer patients.

  5. Toward a Transactional Model of Parent-Adolescent Relationship Quality and Adolescent Psychological Adjustment

    ERIC Educational Resources Information Center

    Fanti, Kostas A.; Henrich, Christopher C.; Brookmeyer, Kathryn A.; Kuperminc, Gabriel P.

    2008-01-01

    The present study includes externalizing problems, internalizing problems, mother-adolescent relationship quality, and father-adolescent relationship quality in the same structural equation model and tests the longitudinal reciprocal association among all four variables over a 1-year period. A transactional model in which adolescents'…

  6. Including nonequilibrium interface kinetics in a continuum model for melting nanoscaled particles

    PubMed Central

    Back, Julian M.; McCue, Scott W.; Moroney, Timothy J.

    2014-01-01

    The melting temperature of a nanoscaled particle is known to decrease as the curvature of the solid-melt interface increases. This relationship is most often modelled by a Gibbs–Thomson law, with the decrease in melting temperature proposed to be a product of the curvature of the solid-melt interface and the surface tension. Such a law must break down for sufficiently small particles, since the curvature becomes singular in the limit that the particle radius vanishes. Furthermore, the use of this law as a boundary condition for a Stefan-type continuum model is problematic because it leads to a physically unrealistic form of mathematical blow-up at a finite particle radius. By numerical simulation, we show that the inclusion of nonequilibrium interface kinetics in the Gibbs–Thomson law regularises the continuum model, so that the mathematical blow up is suppressed. As a result, the solution continues until complete melting, and the corresponding melting temperature remains finite for all time. The results of the adjusted model are consistent with experimental findings of abrupt melting of nanoscaled particles. This small-particle regime appears to be closely related to the problem of melting a superheated particle. PMID:25399918

  7. A finite element model of the face including an orthotropic skin model under in vivo tension.

    PubMed

    Flynn, Cormac; Stavness, Ian; Lloyd, John; Fels, Sidney

    2015-01-01

    Computer models of the human face have the potential to be used as powerful tools in surgery simulation and animation development applications. While existing models accurately represent various anatomical features of the face, the representation of the skin and soft tissues is very simplified. A computer model of the face is proposed in which the skin is represented by an orthotropic hyperelastic constitutive model. The in vivo tension inherent in skin is also represented in the model. The model was tested by simulating several facial expressions by activating appropriate orofacial and jaw muscles. Previous experiments calculated the change in orientation of the long axis of elliptical wounds on patients' faces for wide opening of the mouth and an open-mouth smile (both 30°). These results were compared with the average change of maximum principal stress direction in the skin calculated in the face model for wide opening of the mouth (18°) and an open-mouth smile (25°). The displacements of landmarks on the face for four facial expressions were compared with experimental measurements in the literature. The corner of the mouth in the model experienced the largest displacement for each facial expression (∼11-14 mm). The simulated landmark displacements were within a standard deviation of the measured displacements. Increasing the skin stiffness and skin tension generally resulted in a reduction in landmark displacements upon facial expression. PMID:23919890

  8. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
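
    A back-of-the-envelope version of the correction described above, assuming that true breeders are classified as such at least once per period with a known probability and that nonbreeders are never misclassified as breeders; the likelihood-based multistate estimator used in the paper is more general.

      def adjusted_breeding_probability(n_classified_breeders, n_females, p_calf_detect):
          """Correct the naive breeding proportion for true breeders whose first-year
          calf was never detected and who were therefore recorded as nonbreeders.
          p_calf_detect is the probability that a true breeder is correctly classified
          at least once per period, estimable from repeated within-period sessions."""
          naive = n_classified_breeders / n_females
          return naive / p_calf_detect

      # With roughly half of true breeders identifiable as such, a naive estimate of
      # 0.31 roughly doubles, in line with the reported shift from 0.31 to 0.61.
      print(adjusted_breeding_probability(31, 100, p_calf_detect=0.51))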

  9. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to pattern's specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on the graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.

  10. Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint

    SciTech Connect

    Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.

    2015-04-06

    Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.

  11. Joint Alignment of Underwater and Above-The Photogrammetric 3d Models by Independent Models Adjustment

    NASA Astrophysics Data System (ADS)

    Menna, F.; Nocerino, E.; Troisi, S.; Remondino, F.

    2015-04-01

    The surveying and 3D modelling of objects that extend both below and above the water level, such as ships, harbour structures and offshore platforms, is still an open issue. Commonly, a combined and simultaneous survey is the adopted solution, with acoustic/optical sensors used respectively underwater and in air (most common), or optical/optical sensors both below and above the water level. In both cases the system must be calibrated, and a ship must be used and properly equipped, including a navigation system for the alignment of sequential 3D point clouds. Such a system is usually highly expensive and has been proved to work with still structures; for free-floating objects it does not provide a very practical solution. In this contribution, a flexible, low-cost alternative for surveying floating objects is presented. The method is essentially based on photogrammetry, employed for surveying and modelling both the emerged and submerged parts of the object. Special targets, named Orientation Devices, are specifically designed and adopted for the subsequent alignment of the two photogrammetric models (underwater and in air). A typical scenario where the proposed procedure can be particularly suitable and effective is that of a ship whose damaged part lies underwater after an accident and needs to be measured (Figure 1). The details of the mathematical procedure are provided in the paper, together with a critical discussion of the results obtained by applying the method to the survey of a small pleasure boat in floating condition.

  12. A Verilog-A large signal model for InP DHBT including thermal effects

    NASA Astrophysics Data System (ADS)

    Yuxia, Shi; Zhi, Jin; Zhijian, Pan; Yongbo, Su; Yuxiong, Cao; Yan, Wang

    2013-06-01

    A large-signal model for InP/InGaAs double heterojunction bipolar transistors including thermal effects is reported, which shows good agreement between simulations and measurements. Building on the previous model, in which the double heterojunction effect, the current blocking effect and the high-current effect are considered in the current expression, the effect of bandgap narrowing with temperature has been included in the transport current, and formulas for the model parameters as functions of temperature have been developed. The model is implemented in Verilog-A and embedded in ADS. The proposed model is verified with DC and large-signal measurements.

  13. Modelling Mediterranean agro-ecosystems by including agricultural trees in the LPJmL model

    NASA Astrophysics Data System (ADS)

    Fader, M.; von Bloh, W.; Shi, S.; Bondeau, A.; Cramer, W.

    2015-11-01

    In the Mediterranean region, climate and land use change are expected to impact on natural and agricultural ecosystems by warming, reduced rainfall, direct degradation of ecosystems and biodiversity loss. Human population growth and socioeconomic changes, notably on the eastern and southern shores, will require increases in food production and put additional pressure on agro-ecosystems and water resources. Coping with these challenges requires informed decisions that, in turn, require assessments by means of a comprehensive agro-ecosystem and hydrological model. This study presents the inclusion of 10 Mediterranean agricultural plants, mainly perennial crops, in an agro-ecosystem model (Lund-Potsdam-Jena managed Land - LPJmL): nut trees, date palms, citrus trees, orchards, olive trees, grapes, cotton, potatoes, vegetables and fodder grasses. The model was successfully tested in three model outputs: agricultural yields, irrigation requirements and soil carbon density. With the development presented in this study, LPJmL is now able to simulate in good detail and mechanistically the functioning of Mediterranean agriculture with a comprehensive representation of ecophysiological processes for all vegetation types (natural and agricultural) and in a consistent framework that produces estimates of carbon, agricultural and hydrological variables for the entire Mediterranean basin. This development paves the way for further model extensions aiming at the representation of alternative agro-ecosystems (e.g. agroforestry), and opens the door for a large number of applications in the Mediterranean region, for example assessments of the consequences of land use transitions, the influence of management practices and climate change impacts.

  14. Modelling Mediterranean agro-ecosystems by including agricultural trees in the LPJmL model

    NASA Astrophysics Data System (ADS)

    Fader, M.; von Bloh, W.; Shi, S.; Bondeau, A.; Cramer, W.

    2015-06-01

    Climate and land use change in the Mediterranean region are expected to affect natural and agricultural ecosystems through decreases in precipitation, increases in temperature, biodiversity loss and anthropogenic degradation of natural resources. Demographic growth on the Eastern and Southern shores will require increases in food production and put additional pressure on agro-ecosystems and water resources. Coping with these challenges requires informed decisions that, in turn, require assessments by means of a comprehensive agro-ecosystem and hydrological model. This study presents the inclusion of 10 Mediterranean agricultural plants, mainly perennial crops, in an agro-ecosystem model (LPJmL): nut trees, date palms, citrus trees, orchards, olive trees, grapes, cotton, potatoes, vegetables and fodder grasses. The model was successfully tested in three model outputs: agricultural yields, irrigation requirements and soil carbon density. With the development presented in this study, LPJmL is now able to simulate in good detail and mechanistically the functioning of Mediterranean agriculture with a comprehensive representation of ecophysiological processes for all vegetation types (natural and agricultural) and in a consistent framework that produces estimates of carbon, agricultural and hydrological variables for the entire Mediterranean basin. This development paves the way for further model extensions aiming at the representation of alternative agro-ecosystems (e.g. agroforestry), and opens the door for a large number of applications in the Mediterranean region, for example assessments of the consequences of land use transitions, the influence of management practices and climate change impacts.

  15. Extension of the ADC Charge-Collection Model to Include Multiple Junctions

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    2011-01-01

    The ADC model is a charge-collection model derived for simple p-n junction silicon diodes having a single reverse-biased p-n junction at one end and an ideal substrate contact at the other end. The present paper extends the model to include multiple junctions, and the goal is to estimate how collected charge is shared by the different junctions.

  16. [Structural adjustment, cultural adjustment?].

    PubMed

    Dujardin, B; Dujardin, M; Hermans, I

    2003-12-01

    Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques which have been made. The cultural consequences of SAPs are introduced and are described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority, not only to better understand the situation and its determining factors, but also to intervene and act with strategies that support and reinvest in the social and cultural sectors, which is vital in order to allow individuals and communities in the South to strengthen their autonomy and identity.

  17. A constitutive model for the forces of a magnetic bearing including eddy currents

    NASA Technical Reports Server (NTRS)

    Taylor, D. L.; Hebbale, K. V.

    1993-01-01

    A multiple-magnet bearing can be developed from N individual electromagnets. The constitutive relationships for a single magnet in such a bearing are presented. Analytical expressions are developed for a magnet with poles arranged circumferentially. Maxwell's field equations are used, so the model easily includes the effects of eddy currents induced by the rotation of the journal. Eddy currents must be included in any dynamic model because they are the only speed-dependent parameter and may lead to a critical speed for the bearing. The model is applicable to bearings using attraction or repulsion.

  18. Isospin mixing within relativistic mean-field models including the delta meson

    NASA Astrophysics Data System (ADS)

    Graeff, C. A.; Marinelli, J. R.

    2011-09-01

    We investigate isospin mixing effects in the asymmetry obtained in parity-violating electron scattering from 4He, 12C, 16O, 40Ca and 56Ni. The scattering analysis is developed within the plane-wave (PWBA) and distorted-wave (DWBA) Born approximations, accounting for nucleon form factors given by the Galster parametrization. We use Walecka's model (QHD), including the σ, ω, ρ and δ mesons as well as the electromagnetic interaction. The δ meson effects are especially interesting, since this meson should add a contribution to isospin mixing together with the electromagnetic and ρ meson fields. Our model includes Lagrangians with nonlinear terms as well as Lagrangians with density-dependent couplings. The model is solved in a Hartree approximation with spherical symmetry, using a self-consistent calculation by means of an expansion of the nuclear wave functions and potentials in a harmonic oscillator basis. Results using four different parametrizations are obtained and compared with calculations using non-relativistic models.

  19. Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model

    SciTech Connect

    Konya, Andras

    2006-12-15

    The purpose of the study was to compare two similar foreign body retrieval devices, the Texan™ (TX) and the Texan LONGhorn™ (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG) with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall. The better torque control and maneuverability of the TX-LG resulted in better performance in large anatomic spaces.

  20. Adjusting for Health Status in Non-Linear Models of Health Care Disparities

    PubMed Central

    Cook, Benjamin L.; McGuire, Thomas G.; Meara, Ellen; Zaslavsky, Alan M.

    2009-01-01

    This article compared conceptual and empirical strengths of alternative methods for estimating racial disparities using non-linear models of health care access. Three methods were presented (propensity score, rank and replace, and a combined method) that adjust for health status while allowing SES variables to mediate the relationship between race and access to care. Applying these methods to a nationally representative sample of blacks and non-Hispanic whites surveyed in the 2003 and 2004 Medical Expenditure Panel Surveys (MEPS), we assessed the concordance of each of these methods with the Institute of Medicine (IOM) definition of racial disparities, and empirically compared the methods' predicted disparity estimates, the variance of the estimates, and the sensitivity of the estimates to limitations of available data. The rank and replace and combined methods (but not the propensity score method) are concordant with the IOM definition of racial disparities in that each creates a comparison group with the appropriate marginal distributions of health status and SES variables. Predicted disparities and prediction variances were similar for the rank and replace and combined methods, but the rank and replace method was sensitive to limitations on SES information. For all methods, limiting health status information significantly reduced estimates of disparities compared to a more comprehensive dataset. We conclude that the two IOM-concordant methods were similar enough that either could be considered in disparity predictions. In datasets with limited SES information, the combined method is the better choice. PMID:20352070
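
    A sketch of the rank-and-replace idea on synthetic data: each minority member's health-status index is replaced by the white-distribution value at the same within-group rank (a quantile transform), equalizing the marginal health distribution while leaving SES variables untouched; the index and sample sizes are invented, and the published procedure contains further steps.

      import numpy as np

      def rank_and_replace(health_minority, health_white):
          """Map each minority member's health-status index onto the white distribution
          at the same within-group rank (a quantile transform), so that the two groups
          share the marginal health distribution while SES variables are left alone."""
          ranks = health_minority.argsort().argsort()      # 0..n-1 within-group ranks
          quantiles = (ranks + 0.5) / len(health_minority)
          return np.quantile(health_white, quantiles)

      rng = np.random.default_rng(3)
      health_black = rng.normal(-0.3, 1.0, 500)   # synthetic index, worse on average
      health_white = rng.normal(0.0, 1.0, 800)

      replaced = rank_and_replace(health_black, health_white)
      print("minority mean before/after replacement:",
            round(health_black.mean(), 2), round(replaced.mean(), 2),
            "white mean:", round(health_white.mean(), 2))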

  2. Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Barnhoorn, Auke; Stocchi, Paolo; Gradmann, Sofie; Wu, Patrick; Drury, Martyn; Vermeersen, Bert

    2013-07-01

    Models for glacial isostatic adjustment (GIA) can provide constraints on rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes a single deformation mechanism for mantle rocks. Such a simplified viscosity profile makes it hard to compare the inferred mantle rheology to inferences from seismology and laboratory experiments. It is unknown what constraints GIA observations can provide on more realistic mantle rheology with an ice history that is not based on an a priori mantle viscosity profile. This paper investigates a model for GIA with a new ice history for Fennoscandia that is constrained by palaeoclimate proxies and glacial sediments. Diffusion and dislocation creep flow law data are taken from a compilation of laboratory measurements on olivine. Upper-mantle temperature data sets down to 400 km depth are derived from surface heatflow measurements, a petrochemical model for Fennoscandia and seismic velocity anomalies. Creep parameters below 400 km are taken from an earlier study and are only varying with depth. The olivine grain size and water content (a wet state, or a dry state) are used as free parameters. The solid Earth response is computed with a global spherical 3-D finite-element model for an incompressible, self-gravitating Earth. We compare predictions to sea level data and GPS uplift rates in Fennoscandia. The objective is to see if the mantle rheology and the ice model is consistent with GIA observations. We also test if the inclusion of dislocation creep gives any improvements over predictions with diffusion creep only, and whether the laterally varying temperatures result in an improved fit compared to a widely used 1-D viscosity profile (VM2). We find that sea level data can be explained with our ice model and with information on mantle rheology from laboratory experiments

  3. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schrenkenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  4. The timing of the Black Sea flood event: Insights from modeling of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Goldberg, Samuel L.; Lau, Harriet C. P.; Mitrovica, Jerry X.; Latychev, Konstantin

    2016-10-01

    We present a suite of gravitationally self-consistent predictions of sea-level change since Last Glacial Maximum (LGM) in the vicinity of the Bosphorus and Dardanelles straits that combine signals associated with glacial isostatic adjustment (GIA) and the flooding of the Black Sea. Our predictions are tuned to fit a relative sea level (RSL) record at the island of Samothrace in the north Aegean Sea and they include realistic 3-D variations in viscoelastic structure, including lateral variations in mantle viscosity and the elastic thickness of the lithosphere, as well as weak plate boundary zones. We demonstrate that 3-D Earth structure and the magnitude of the flood event (which depends on the pre-flood level of the lake) both have significant impact on the predicted RSL change at the location of the Bosphorus sill, and therefore on the inferred timing of the marine incursion. We summarize our results in a plot showing the predicted RSL change at the Bosphorus sill as a function of the timing of the flood event for different flood magnitudes up to 100 m. These results suggest, for example, that a flood event at 9 ka implies that the elevation of the sill was lowered through erosion by ∼14-21 m during, and after, the flood. In contrast, a flood event at 7 ka suggests erosion of ∼24-31 m at the sill since the flood. More generally, our results will be useful for future research aimed at constraining the details of this controversial, and widely debated geological event.

  5. Risk adjustment models for interhospital comparison of CS rates using Robson’s ten group classification system and other socio-demographic and clinical variables

    PubMed Central

    2012-01-01

    Background Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson’s Ten Group Classification System (TGCS), and clinical and socio-demographic variables of the mother and the fetus is necessary for inter-hospital comparisons of CS rates. Methods The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V–X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models; the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. Results The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson’s classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour) and III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour) and to a minor extent in groups II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour). Conclusions The TGCS classification is useful for
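
    A minimal sketch of the two risk-adjustment models described above (M1: TGCS group only; M2: TGCS group plus demographic/clinical covariates), assuming Python with pandas/statsmodels; the file name and column names (cs, hospital, tgcs_group, maternal_age, parity) are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("deliveries.csv")  # one row per delivery; 'cs' is 1 for caesarean

    # M1: adjust for the Robson (TGCS) group only; M2: add further confounders.
    m1 = smf.poisson("cs ~ C(hospital) + C(tgcs_group)", data=df).fit()
    m2 = smf.poisson("cs ~ C(hospital) + C(tgcs_group) + maternal_age + C(parity)",
                     data=df).fit()

    # Adjusted hospital relative risks vs. the reference hospital, side by side,
    # from which percentage variations between the two models can be computed.
    rr_m1 = np.exp(m1.params.filter(like="C(hospital)"))
    rr_m2 = np.exp(m2.params.filter(like="C(hospital)"))
    print(pd.concat([rr_m1, rr_m2], axis=1, keys=["M1", "M2"]))
    ```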

  6. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    NASA Astrophysics Data System (ADS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  7. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    SciTech Connect

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-15

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  8. Rejection, Feeling Bad, and Being Hurt: Using Multilevel Modeling to Clarify the Link between Peer Group Aggression and Adjustment

    ERIC Educational Resources Information Center

    Rulison, Kelly L.; Gest, Scott D.; Loken, Eric; Welsh, Janet A.

    2010-01-01

    The association between affiliating with aggressive peers and behavioral, social and psychological adjustment was examined. Students initially in 3rd, 4th, and 5th grade (N = 427) were followed biannually through 7th grade. Students' peer-nominated groups were identified. Multilevel modeling was used to examine the independent contributions of…

  9. Internal Working Models and Adjustment of Physically Abused Children: The Mediating Role of Self-Regulatory Abilities

    ERIC Educational Resources Information Center

    Hawkins, Amy L.; Haskett, Mary E.

    2014-01-01

    Background: Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and…

  10. Patterns of Children's Adrenocortical Reactivity to Interparental Conflict and Associations with Child Adjustment: A Growth Mixture Modeling Approach

    ERIC Educational Resources Information Center

    Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.

    2013-01-01

    Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…

  11. The Effectiveness of the Strength-Centered Career Adjustment Model for Dual-Career Women in Taiwan

    ERIC Educational Resources Information Center

    Wang, Yu-Chen; Tien, Hsiu-Lan Shelley

    2011-01-01

    The authors investigated the effectiveness of a Strength-Centered Career Adjustment Model for dual-career women (N = 28). Fourteen women in the experimental group received strength-centered career counseling for 6 to 8 sessions; the 14 women in the control group received test services in 1 to 2 sessions. All participants completed the Personal…

  12. Modeling the performance of direct-detection Doppler lidar systems including cloud and solar background variability.

    PubMed

    McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D

    1999-10-20

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design. PMID:18324169

  13. A self-adjusting flow dependent formulation for the classical Smagorinsky model coefficient

    NASA Astrophysics Data System (ADS)

    Ghorbaniasl, G.; Agnihotri, V.; Lacor, C.

    2013-05-01

    In this paper, we propose an efficient formula for estimating the model coefficient of a Smagorinsky-model-based subgrid scale eddy viscosity. The method allows vanishing eddy viscosity through a vanishing model coefficient in regions where the eddy viscosity should be zero. The advantage of this method is that the coefficient of the subgrid scale model is a function of the flow solution, including the translational and the rotational velocity field contributions. Furthermore, the value of the model coefficient is optimized without using the dynamic procedure, thereby saving significantly on computational cost. In addition, the method guarantees the model coefficient to be always positive with low fluctuation in space and time. For validation purposes, three test cases are chosen: (i) a fully developed channel flow at Re_τ = 180 and 395, (ii) a fully developed flow through a rectangular duct of square cross section at Re_τ = 300, and (iii) a smooth subcritical flow past a stationary circular cylinder, at a Reynolds number of Re = 3900, where the wake is fully turbulent but the cylinder boundary layers remain laminar. A main outcome is the good behavior of the proposed model as compared to reference data. We have also applied the proposed method to a CT-based simplified human upper airway model, where the flow is transient.
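
    For reference, the classical Smagorinsky eddy viscosity that the proposed coefficient formula modifies is ν_t = (C_s Δ)² |S| with |S| = √(2 S_ij S_ij). The sketch below evaluates it at a single grid point with a constant C_s; it is an illustration only and does not implement the paper's flow-dependent coefficient.

    ```python
    import numpy as np

    def smagorinsky_nu_t(grad_u, delta, cs=0.17):
        """Classical Smagorinsky subgrid eddy viscosity at one grid point.

        grad_u : (3, 3) velocity-gradient tensor du_i/dx_j
        delta  : filter width
        cs     : model coefficient (constant here; flow-dependent in the paper)
        """
        s = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor S_ij
        s_mag = np.sqrt(2.0 * np.sum(s * s))   # |S| = sqrt(2 S_ij S_ij)
        return (cs * delta) ** 2 * s_mag

    grad_u = np.array([[0.0, 1.0, 0.0],        # simple shear du/dy = 1 s^-1
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])
    print(smagorinsky_nu_t(grad_u, delta=0.01))
    ```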

  14. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
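
    A minimal sketch of the kind of model-adjustment step described above: regress local observations against the regional-model predictions (here in log space, a common choice for loads and concentrations) and use the fitted relation to correct new predictions. This illustrates the general idea only, not the published USGS procedure; the data values are made up.

    ```python
    import numpy as np

    def fit_adjustment(obs, pred):
        """Return (intercept, slope) of log10(obs) regressed on log10(pred)."""
        slope, intercept = np.polyfit(np.log10(pred), np.log10(obs), 1)
        return intercept, slope

    def adjust(pred_new, intercept, slope):
        """Apply the fitted correction to new regional-model predictions."""
        return 10.0 ** (intercept + slope * np.log10(pred_new))

    obs = np.array([1.2, 3.4, 0.8, 5.1])    # locally observed storm loads
    pred = np.array([2.0, 6.0, 1.5, 12.0])  # regional-model predictions for the same storms
    b0, b1 = fit_adjustment(obs, pred)
    print(adjust(np.array([4.0, 8.0]), b0, b1))
    ```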

  15. Rabbit-Specific Ventricular Model of Cardiac Electrophysiological Function including Specialized Conduction System

    PubMed Central

    Bordas, R.; Gillow, K.; Lou, Q.; Efimov, I. R.; Gavaghan, D.; Kohl, P.; Grau, V.; Rodriguez, B.

    2011-01-01

    The function of the ventricular specialized conduction system in the heart is to ensure the coordinated electrical activation of the ventricles. It is therefore critical to the overall function of the heart, and has also been implicated as an important player in various diseases, including lethal ventricular arrhythmias such as ventricular fibrillation and drug-induced torsades de pointes. However, current ventricular models of electrophysiology usually ignore the specialized conduction system or include only highly simplified representations of it. Here, we describe the development of an image-based, species-consistent, anatomically-detailed model of rabbit ventricular electrophysiology that incorporates a detailed description of the free-running part of the specialized conduction system. Techniques used for the construction of the geometrical model of the specialized conduction system from a magnetic resonance dataset and integration of the system model into a ventricular anatomical model, developed from the same dataset, are described. Computer simulations of rabbit ventricular electrophysiology are conducted using the novel anatomical model and rabbit-specific membrane kinetics to investigate the importance of the components and properties of the conduction system in determining ventricular function under physiological conditions. Simulation results are compared to panoramic optical mapping experiments for model validation and results interpretation. Full access is provided to the anatomical models developed in this study. PMID:21672547

  16. SAMI2-PE: A model of the ionosphere including multistream interhemispheric photoelectron transport

    NASA Astrophysics Data System (ADS)

    Varney, R. H.; Swartz, W. E.; Hysell, D. L.; Huba, J. D.

    2012-06-01

    In order to improve model comparisons with recently improved incoherent scatter radar measurements at the Jicamarca Radio Observatory we have added photoelectron transport and energy redistribution to the two dimensional SAMI2 ionospheric model. The photoelectron model uses multiple pitch angle bins, includes effects associated with curved magnetic field lines, and uses an energy degradation procedure which conserves energy on coarse, non-uniformly spaced energy grids. The photoelectron model generates secondary electron production rates and thermal electron heating rates which are then passed to the fluid equations in SAMI2. We then compare electron and ion temperatures and electron densities of this modified SAMI2 model with measurements of these parameters over a range of altitudes from 90 km to 1650 km (L = 1.26) over a 24 hour period. The new electron heating model is a significant improvement over the semi-empirical model used in SAMI2. The electron temperatures above the F-peak from the modified model qualitatively reproduce the shape of the measurements as functions of time and altitude and quantitatively agree with the measurements to within ˜30% or better during the entire day, including during the rapid temperature increase at dawn.

  17. Land surface hydrology parameterization for atmospheric general circulation models including subgrid scale spatial variability

    NASA Technical Reports Server (NTRS)

    Entekhabi, D.; Eagleson, P. S.

    1989-01-01

    Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
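
    The sketch below illustrates the subgrid idea in its simplest form: a grid-average saturation-excess runoff obtained from an assumed probability density function of subgrid soil saturation. The beta distribution, saturation threshold and rain rate are illustrative choices, not the parameterization of the paper.

    ```python
    from scipy import stats

    precip = 5.0                      # mm/h of rainfall over the grid cell
    s_pdf = stats.beta(2.0, 3.0)      # assumed pdf of relative soil saturation s in [0, 1]
    s_crit = 0.9                      # saturation above which rain runs off

    # Grid-average runoff = rain rate times the probability mass of the saturated fraction.
    grid_runoff = precip * s_pdf.sf(s_crit)
    grid_infiltration = precip - grid_runoff
    print(grid_runoff, grid_infiltration)
    ```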

  18. Analysis of a generalized model for influenza including differential susceptibility due to immunosuppression

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2014-06-01

    Recently, a mathematical model of pandemic influenza was proposed that includes typical control strategies such as antivirals, vaccination and school closure, and explicitly considers the effects of immunity acquired from the early outbreaks on later outbreaks of the disease. In such a model the algebraic expressions for the basic reproduction number (without control strategies) and the effective reproduction number (with control strategies) were derived and numerically estimated. A drawback of this model of pandemic influenza is that it ignores the effects of differential susceptibility due to immunosuppression and the effects of the complexity of the actual contact networks between individuals. We have developed a generalized model which includes such effects of heterogeneity. Specifically, we consider the influence of air network connectivity on the spread of pandemic influenza and the influence of immunosuppression when the population is divided into two immune classes. We use an algebraic expression, namely the Tutte polynomial, to characterize the complexity of the contact network. Until now, the influence of air network connectivity on the spread of pandemic influenza has been studied numerically, but no algebraic expressions have been used to summarize the level of network complexity. The generalized model proposed here includes the typical control strategies previously mentioned (antivirals, vaccination and school closure) combined with restrictions on travel. For the generalized model the corresponding reproduction numbers will be algebraically computed and the effect of the contact network will be established in terms of the Tutte polynomial of the network.
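
    As a toy illustration of a reproduction number for a population split into two immune classes with different susceptibility, the sketch below builds a 2x2 next-generation matrix under proportionate mixing and takes its spectral radius. The parameter values and the mixing assumption are placeholders; the paper derives its expressions algebraically and additionally encodes network structure through the Tutte polynomial, which is not reproduced here.

    ```python
    import numpy as np

    beta = 0.4                      # transmission rate
    gamma = 0.2                     # recovery rate
    sigma = np.array([1.0, 1.8])    # relative susceptibility (normal, immunosuppressed)
    frac = np.array([0.9, 0.1])     # population fraction in each class

    # Next-generation matrix under proportionate mixing: K[i, j] is the expected
    # number of class-i infections caused by one class-j infective.
    K = np.outer(sigma * frac, np.ones(2)) * beta / gamma
    R0 = max(abs(np.linalg.eigvals(K)))
    print(R0)
    ```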

  19. Magnetofluid Simulations of the Global Solar Wind Including Pickup Ions and Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Goldstein, Melvyn L.; Usmanov, Arcadi V.; Matthaeus, William H.

    2011-01-01

    I will describe a three-dimensional magnetohydrodynamic model of the solar wind that takes into account turbulent heating of the wind by velocity and magnetic fluctuations as well as a variety of effects produced by interstellar pickup protons. In this report, the interstellar pickup protons are treated as one fluid and the protons and electrons are treated together as a second fluid. The model equations include a Reynolds decomposition of the plasma velocity and magnetic field into mean and fluctuating quantities, as well as energy transfer from interstellar pickup protons to solar wind protons that results in the deceleration of the solar wind. The model is used to simulate the global steady-state structure of the solar wind in the region from 0.3 to 100 AU. Where possible, the model is compared with Voyager data. Initial results from a generalization to a three-fluid model are described elsewhere in this session.

  20. A statistical model including age to predict passenger postures in the rear seats of automobiles.

    PubMed

    Park, Jangwoon; Ebert, Sheila M; Reed, Matthew P; Hallman, Jason J

    2016-06-01

    Few statistical models of rear seat passenger posture have been published, and none has taken into account the effects of occupant age. This study developed new statistical models for predicting passenger postures in the rear seats of automobiles. Postures of 89 adults with a wide range of age and body size were measured in a laboratory mock-up in seven seat configurations. Posture-prediction models for female and male passengers were separately developed by stepwise regression using age, body dimensions, seat configurations and two-way interactions as potential predictors. Passenger posture was significantly associated with age and the effects of other two-way interaction variables depended on age. A set of posture-prediction models are presented for women and men, and the prediction results are compared with previously published models. This study is the first study of passenger posture to include a large cohort of older passengers and the first to report a significant effect of age for adults. The presented models can be used to position computational and physical human models for vehicle design and assessment. Practitioner Summary: The significant effects of age, body dimensions and seat configuration on rear seat passenger posture were identified. The models can be used to accurately position computational human models or crash test dummies for older passengers in known rear seat configurations.
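
    A minimal sketch of the kind of regression described above: sex-specific linear models of a posture variable with age, a body dimension, a seat variable and a two-way interaction. The file and column names (hip_x, age, stature, cushion_angle, sex) are hypothetical, the stepwise selection step is omitted, and Python with pandas/statsmodels is assumed.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("rear_seat_postures.csv")   # one row per subject x seat configuration
    formula = "hip_x ~ age + stature + cushion_angle + age:cushion_angle"

    # Separate models for female and male passengers, as in the study design.
    models = {sex: smf.ols(formula, data=grp).fit() for sex, grp in df.groupby("sex")}
    for sex, model in models.items():
        print(sex, round(model.params["age"], 3), round(model.rsquared, 3))
    ```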

  1. A statistical model including age to predict passenger postures in the rear seats of automobiles.

    PubMed

    Park, Jangwoon; Ebert, Sheila M; Reed, Matthew P; Hallman, Jason J

    2016-06-01

    Few statistical models of rear seat passenger posture have been published, and none has taken into account the effects of occupant age. This study developed new statistical models for predicting passenger postures in the rear seats of automobiles. Postures of 89 adults with a wide range of age and body size were measured in a laboratory mock-up in seven seat configurations. Posture-prediction models for female and male passengers were separately developed by stepwise regression using age, body dimensions, seat configurations and two-way interactions as potential predictors. Passenger posture was significantly associated with age and the effects of other two-way interaction variables depended on age. A set of posture-prediction models are presented for women and men, and the prediction results are compared with previously published models. This study is the first study of passenger posture to include a large cohort of older passengers and the first to report a significant effect of age for adults. The presented models can be used to position computational and physical human models for vehicle design and assessment. Practitioner Summary: The significant effects of age, body dimensions and seat configuration on rear seat passenger posture were identified. The models can be used to accurately position computational human models or crash test dummies for older passengers in known rear seat configurations. PMID:26328769

  2. The effects of coping on adjustment: Re-examining the goodness of fit model of coping effectiveness.

    PubMed

    Masel, C N; Terry, D J; Gribble, M

    1996-01-01

    The primary aim of the present study was to examine the extent to which the effects of coping on adjustment are moderated by levels of event controllability. Specifically, the research tested two revisions to the goodness of fit model of coping effectiveness. First, it was hypothesized that the effects of problem management coping (but not problem appraisal coping) would be moderated by levels of event controllability. Second, it was hypothesized that the effects of emotion-focused coping would be moderated by event controllability, but only in the acute phase of a stressful encounter. To test these predictions, a longitudinal study was undertaken (185 undergraduate students participated in all three stages of the research). Measures of initial adjustment (low depression and coping efficacy) were obtained at Time 1. Four weeks later (Time 2), coping responses to a current or a recent stressor were assessed. Based on subjects' descriptions of the event, objective and subjective measures of event controllability were also obtained. Measures of concurrent and subsequent adjustment were obtained at Times 2 and 3 (two weeks later), respectively. There was only weak support for the goodness of fit model of coping effectiveness. The beneficial effects of a high proportion of problem management coping (relative to total coping efforts) on Time 3 perceptions of coping efficacy were more evident in high control than in low control situations. Other results of the research revealed that, irrespective of the controllability of the event, problem appraisal coping strategies and emotion-focused strategies (escapism and self-denigration) were associated with high and low levels of concurrent adjustment, respectively. The effects of these coping responses on subsequent adjustment were mediated through concurrent levels of adjustment.

  3. New Models of CKD Care Including Pharmacists: Improving Medication Reconciliation and Medication Management

    PubMed Central

    St Peter, Wendy L.; Wazny, Lori D.; Patel, Uptal D.

    2014-01-01

    Purpose of review Chronic kidney disease patients are complex, have many medication-related problems (MRPs) and high rates of medication nonadherence, and are less adherent to some medications than patients with higher levels of kidney function. Nonadherence in CKD patients increases the odds of uncontrolled hypertension, which can increase the risk of CKD progression. This review discusses reasons for gaps in medication-related care for CKD patients, pharmacy services to reduce these gaps, and successful models that incorporate pharmacist care. Recent findings Pharmacists are currently being trained to deliver patient-centered care, including identification and management of MRPs and helping patients overcome barriers to improve medication adherence. A growing body of evidence indicates that pharmacist services for CKD patients, including medication reconciliation and medication therapy management, positively affect clinical and cost outcomes including lower rates of decline in glomerular filtration rates, reduced mortality, and fewer hospitalizations and hospital days, but more robust research is needed. Team-based models including pharmacists exist today and are being studied in a wide range of innovative care and reimbursement models. Summary Opportunities are growing to include pharmacists as integral members of CKD and dialysis healthcare teams to reduce MRPs, increase medication adherence, and improve patient outcomes. PMID:24076556

  4. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    NASA Astrophysics Data System (ADS)

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

    The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to the ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, the regional map of the observed total electron content (TEC) can show the characteristic and irregularities of the ionosphere. In this work, we develop the two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and the data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by the dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (quiet day) at 12 stations around Thailand: 0° to 25°N and 95° to 110°E. These stations are managed by Department of Public Works and Town & Country Planning (DPT), Thailand, and the South East Asia Low-latitude ionospheric Network (SEALION) project operated by National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in the grids with the spatial resolution of 2.5°x5° in latitude and longitude and time resolution of 2 hours. We assimilate the OBS-VTEC with the estimated VTEC from the International Reference Ionosphere model (IRI-VTEC) as well as the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both of IRI-VTEC and IGS-VTEC are weighted by the latitude-dependent factors before assimilating with the OBS-VTEC. However, the IRI-VTEC assimilation can improve the ASHM estimation more than the IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the
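
    The sketch below shows the basic shape of such a fit: blend observed VTEC with a background model using a simple weight, then solve a least-squares problem for spherical-harmonic coefficients. It is a toy with a handful of points, a low maximum degree and a constant weight, assuming Python with NumPy/SciPy; the 15-degree ASHM, the latitude-dependent weighting and the real station grid of the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    def design_matrix(lat_deg, lon_deg, lmax=1):
        """Real spherical-harmonic basis evaluated at the given points."""
        theta = np.radians(lon_deg)          # azimuthal angle (scipy convention)
        phi = np.radians(90.0 - lat_deg)     # polar angle (colatitude)
        cols = []
        for l in range(lmax + 1):
            for m in range(l + 1):
                y = sph_harm(m, l, theta, phi)
                cols.append(y.real)
                if m > 0:
                    cols.append(y.imag)      # sine term of the real-valued basis
        return np.column_stack(cols)

    lat = np.array([5.0, 8.0, 12.0, 15.0, 18.0, 20.0])
    lon = np.array([98.0, 100.0, 101.0, 103.0, 105.0, 99.0])
    vtec_obs = np.array([25.0, 30.0, 36.0, 34.0, 28.0, 26.0])   # TECU, made up
    vtec_bg = np.array([23.0, 28.0, 33.0, 32.0, 27.0, 25.0])    # background values, made up

    w = 0.7                                   # assimilation weight on the observations
    vtec = w * vtec_obs + (1.0 - w) * vtec_bg

    A = design_matrix(lat, lon)
    coef, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    print(coef)
    ```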

  5. Innovative Liner Concepts: Experiments and Impedance Modeling of Liners Including the Effect of Bias Flow

    NASA Technical Reports Server (NTRS)

    Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris

    2000-01-01

    The normal impedance of perforated plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open area, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. A fit was done on the incompressible model to the experimental database. The fit was performed using an optimization routine that found the optimal set of multiplication coefficients to the non-dimensional groups that minimized the least squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with the bias flow. This model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted model and the unfitted model performed equally well for the higher percent open area (10% and 15%).
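
    The coefficient fit described above can be sketched as a small least-squares problem: find multipliers on a set of non-dimensional groups so that model predictions best match measurements. The model form and data below are synthetic placeholders, not the actual perforate impedance equations or experimental database.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    groups = rng.uniform(0.5, 2.0, size=(40, 3))          # non-dimensional groups per sample
    true_c = np.array([1.2, 0.8, 0.5])
    measured = groups @ true_c + 0.05 * rng.standard_normal(40)   # synthetic "experiments"

    def residuals(c):
        return groups @ c - measured        # prediction error to be minimized

    fit = least_squares(residuals, x0=np.ones(3))
    print(fit.x)   # fitted multiplication coefficients
    ```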

  6. Evaluating Modeled Variables Included in the NOAA Water Vapor Flux Tool

    NASA Astrophysics Data System (ADS)

    Darby, L. S.; White, A. B.; Coleman, T.

    2015-12-01

    The NOAA/ESRL/Physical Sciences Division has a Water Vapor Flux Tool showing observed and forecast meteorological variables related to heavy precipitation. Details about this tool will be presented in a companion paper by White et al. (2015, this conference). We evaluate 3-hr precipitation forecasts from four models (the HRRR, HRRRexp, RAP, and RAPexp) that were added to the tool in Dec. 2014. The Rapid Refresh (RAP) and the High-Resolution Rapid Refresh (HRRR) models are run operationally by NOAA, are initialized hourly, and produce forecasts out to 15 hours. The RAP and HRRR have experimental versions (RAPexp and HRRRexp, respectively) that are run near-real time at the NOAA/ESRL/Global Systems Division. Our analysis of eight rain days includes atmospheric river events in Dec. 2014 and Feb. 2015. We evaluate the forecasts using observations at two sites near the California coast - Bodega Bay (BBY, 15 m ASL) and Cazadero (CZC, 478 m ASL), and an inland site near Colfax, CA (CFC, 643 m ASL). Various criteria were used to evaluate the forecasts. (1) The Pielke criteria: we compare the RMSE and unbiased RMSE of the model output to the standard deviation of the observations, and we compare the standard deviation of the model output to the standard deviation of the observations; (2) we compare the modeled 24-hr precipitation to the observed 24-hr precipitation; and (3) we assess the correlation coefficient between the modeled and observed precipitation. Based on these criteria, the RAP slightly outperformed the other models. Only the RAP and the HRRRexp had forecasts that met the Pielke criteria. All of the models were able to predict the observed 24-hour precipitation, within 10%, in only 8-16% of their forecasts. All models achieved a correlation coefficient value above the 90th percentile in 12.5% of their forecasts. The station most likely to have a forecast that met any of the criteria was the inland mountain station CFC; the least likely was the coastal mountain
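
    A minimal sketch of the evaluation quantities mentioned above (bias, RMSE, bias-removed RMSE, standard deviations and correlation) for one forecast/observation pair of 3-hr precipitation series; the numbers are made up and the exact form of the Pielke criteria is not reproduced.

    ```python
    import numpy as np

    def scores(obs, fcst):
        bias = fcst.mean() - obs.mean()
        rmse = np.sqrt(np.mean((fcst - obs) ** 2))
        unbiased_rmse = np.sqrt(np.mean(((fcst - fcst.mean()) - (obs - obs.mean())) ** 2))
        corr = np.corrcoef(obs, fcst)[0, 1]
        return {"bias": bias, "rmse": rmse, "unbiased_rmse": unbiased_rmse,
                "sd_obs": obs.std(), "sd_fcst": fcst.std(), "corr": corr}

    obs = np.array([0.0, 1.2, 3.5, 0.4, 2.2, 0.0, 5.1, 1.0])    # observed 3-hr totals, mm
    fcst = np.array([0.1, 0.8, 4.2, 0.0, 1.9, 0.3, 4.0, 1.5])   # forecast 3-hr totals, mm
    print(scores(obs, fcst))
    ```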

  7. MEMLS3&a: Microwave Emission Model of Layered Snowpacks adapted to include backscattering

    NASA Astrophysics Data System (ADS)

    Proksch, M.; Mätzler, C.; Wiesmann, A.; Lemmetyinen, J.; Schwank, M.; Löwe, H.; Schneebeli, M.

    2015-08-01

    The Microwave Emission Model of Layered Snowpacks (MEMLS) was originally developed for microwave emissions of snowpacks in the frequency range 5-100 GHz. It is based on six-flux theory to describe radiative transfer in snow including absorption, multiple volume scattering, radiation trapping due to internal reflection and a combination of coherent and incoherent superposition of reflections between horizontal layer interfaces. Here we introduce MEMLS3&a, an extension of MEMLS, which includes a backscatter model for active microwave remote sensing of snow. The reflectivity is decomposed into diffuse and specular components. Slight undulations of the snow surface are taken into account. The treatment of like- and cross-polarization is accomplished by an empirical splitting parameter q. MEMLS3&a (as well as MEMLS) is set up in a way that snow input parameters can be derived by objective measurement methods which avoid fitting procedures of the scattering efficiency of snow, required by several other models. For the validation of the model we have used a combination of active and passive measurements from the NoSREx (Nordic Snow Radar Experiment) campaign in Sodankylä, Finland. We find a reasonable agreement between the measurements and simulations, subject to uncertainties in hitherto unmeasured input parameters of the backscatter model. The model is written in Matlab and the code is publicly available for download through the following website: http://www.iapmw.unibe.ch/research/projects/snowtools/memls.html.

  8. Constraints of GRACE on the Ice Model and Mantle Rheology in Glacial Isostatic Adjustment Modeling in North-America

    NASA Astrophysics Data System (ADS)

    van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.

    2009-05-01

    GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and non-linear rheology are added. This is useful because all the ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that nonlinear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008 with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth, with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is shown for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent (A) derived from a uniaxial stress experiment is varied between 3.3 x 10^-34, 10^-35 and 10^-36 Pa^-3 s^-1, the Newtonian viscosity η is varied between 1 and 3 x 10^21 Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared to GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation. It is found that a delay in glaciation is a better way to adjust ice

  9. An exact stochastic hybrid model of excitable membranes including spatio-temporal evolution.

    PubMed

    Buckwar, Evelyn; Riedler, Martin G

    2011-12-01

    In this paper, we present a mathematical description for excitable biological membranes, in particular neuronal membranes. We aim to model the (spatio-) temporal dynamics, e.g., the travelling of an action potential along the axon, subject to noise, such as ion channel noise. Using the framework of Piecewise Deterministic Processes (PDPs) we provide an exact mathematical description (in contrast to pseudo-exact algorithms considered in the literature) of the stochastic process one obtains by coupling a continuous time Markov chain model with a deterministic dynamic model of a macroscopic variable, that is, coupling Markovian channel dynamics to the time-evolution of the transmembrane potential. We extend the existing framework of PDPs in finite dimensional state space to include infinite-dimensional evolution equations and thus obtain a stochastic hybrid model suitable for modelling spatio-temporal dynamics. We derive analytic results for the infinite-dimensional process, such as existence, the strong Markov property and its extended generator. Further, we exemplify modelling of spatially extended excitable membranes with PDPs by a stochastic hybrid version of the Hodgkin-Huxley model of the squid giant axon. Finally, we discuss the advantages of the PDP formulation in view of analytical and numerical investigations as well as the application of PDPs to structurally more complex models of excitable membranes. PMID:21243359

  10. A cerebrovascular response model for functional neuroimaging including dynamic cerebral autoregulation

    PubMed Central

    Diamond, Solomon Gilbert; Perdue, Katherine L.; Boas, David A.

    2009-01-01

    Functional neuroimaging techniques such as functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS) can be used to isolate an evoked response to a stimulus from significant background physiological fluctuations. Data analysis approaches typically use averaging or linear regression to remove this physiological baseline with varying degrees of success. Biophysical model-based analysis of the functional hemodynamic response has also been advanced previously with the Balloon and Windkessel models. In the present work, a biophysical model of systemic and cerebral circulation and gas exchange is applied to resting state NIRS neuroimaging data from 10 human subjects. The model further includes dynamic cerebral autoregulation, which modulates the cerebral arteriole compliance to control cerebral blood flow. This biophysical model allows for prediction, from noninvasive blood pressure measurements, of the background hemodynamic fluctuations in the systemic and cerebral circulations. Significantly higher correlations with the NIRS data were found using the biophysical model predictions compared to blood pressure regression and compared to transfer function analysis (multifactor ANOVA, p<0.0001). This finding supports the further development and use of biophysical models for removing baseline activity in functional neuroimaging analysis. Future extensions of this work could model changes in cerebrovascular physiology that occur during development, aging and disease. PMID:19442671

  11. Multifluid Simulations of the Global Solar Wind Including Pickup Ions and Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Goldstein, Melvyn L.; Usmanov, A. V.

    2011-01-01

    I will describe a three-dimensional magnetohydrodynamic model of the solar wind that takes into account turbulent heating of the wind by velocity and magnetic fluctuations as well as a variety of effects produced by interstellar pickup protons. The interstellar pickup protons are treated in the model as one fluid and the protons and electrons are treated together as a second fluid. The model equations include a Reynolds decomposition of the plasma velocity and magnetic field into mean and fluctuating quantities, as well as energy transfer from interstellar pickup protons to solar wind protons that results in the deceleration of the solar wind. The model is used to simulate the global steady-state structure of the solar wind in the region from 0.3 to 100 AU. The simulation assumes that the background magnetic field on the Sun is either a dipole (aligned or tilted with respect to the solar rotation axis) or one that is deduced from solar magnetograms.

  12. Global Reference Atmospheric Models, Including Thermospheres, for Mars, Venus and Earth

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Keller, Vernon W.

    2006-01-01

    This document is the viewgraph slides of the presentation. Marshall Space Flight Center's Natural Environments Branch has developed Global Reference Atmospheric Models (GRAMs) for Mars, Venus, Earth, and other solar system destinations. Mars-GRAM has been widely used for engineering applications including systems design, performance analysis, and operations planning for aerobraking, entry descent and landing, and aerocapture. Preliminary results are presented, comparing Mars-GRAM with measurements from Mars Reconnaissance Orbiter (MRO) during its aerobraking in Mars thermosphere. Venus-GRAM is based on the Committee on Space Research (COSPAR) Venus International Reference Atmosphere (VIRA), and is suitable for similar engineering applications in the thermosphere or other altitude regions of the atmosphere of Venus. Until recently, the thermosphere in Earth-GRAM has been represented by the Marshall Engineering Thermosphere (MET) model. Earth-GRAM has recently been revised. In addition to including an updated version of MET, it now includes an option to use the Naval Research Laboratory Mass Spectrometer Incoherent Scatter Radar Extended Model (NRLMSISE-00) as an alternate thermospheric model. Some characteristics and results from Venus-GRAM and Earth-GRAM thermospheres are also presented.

  13. Improving weather predictability by including land-surface model parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters, using uncoupled simulations and coupled seasonal forecasts. Additionally, we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system. Orth, R., E. Dutra, and F. Pappenberger, 2016: Improving weather predictability by

  14. Finite element modeling of contaminant transport in soils including the effect of chemical reactions.

    PubMed

    Javadi, A A; Al-Najjar, M M

    2007-05-17

    The movement of chemicals through soils to the groundwater is a major cause of degradation of water resources. In many cases, serious human and stock health implications are associated with this form of pollution. Recent studies have shown that the current models and methods are not able to adequately describe the leaching of nutrients through soils, often underestimating the risk of groundwater contamination by surface-applied chemicals, and overestimating the concentration of resident solutes. Furthermore, the effect of chemical reactions on the fate and transport of contaminants is not included in many of the existing numerical models for contaminant transport. In this paper a numerical model is presented for simulation of the flow of water and air and contaminant transport through unsaturated soils with the main focus being on the effects of chemical reactions. The governing equations of miscible contaminant transport including advection, dispersion-diffusion and adsorption effects together with the effect of chemical reactions are presented. The mathematical framework and the numerical implementation of the model are described in detail. The model is validated by application to a number of test cases from the literature and is then applied to the simulation of a physical model test involving transport of contaminants in a block of soil with particular reference to the effects of chemical reactions. Comparison of the results of the numerical model with the experimental results shows that the model is capable of predicting the effects of chemical reactions with very high accuracy. The importance of consideration of the effects of chemical reactions is highlighted.
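
    For orientation, the governing balance described above (advection, dispersion-diffusion, adsorption and reaction) can be illustrated in one dimension with an explicit finite-difference scheme, using a linear retardation factor and first-order decay. This is a didactic sketch, not the paper's coupled unsaturated-flow finite-element model; all parameters and the grid are illustrative.

    ```python
    import numpy as np

    L, nx, T = 1.0, 101, 0.5
    dx = L / (nx - 1)
    v, D, R, lam = 0.5, 1e-3, 2.0, 0.1      # velocity, dispersion, retardation, decay rate
    dt = 0.2 * min(dx / v, dx**2 / (2 * D)) # conservative explicit time step
    c = np.zeros(nx)
    c[0] = 1.0                              # constant-concentration inlet

    t = 0.0
    while t < T:
        adv = -v * (c[1:-1] - c[:-2]) / dx                  # upwind advection
        disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # dispersion-diffusion
        c[1:-1] += dt * ((adv + disp) / R - lam * c[1:-1])  # retarded transport + decay
        c[0], c[-1] = 1.0, c[-2]                            # boundary conditions
        t += dt
    print(c[::20])
    ```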

  15. Filling Gaps in the Acculturation Gap-Distress Model: Heritage Cultural Maintenance and Adjustment in Mexican-American Families.

    PubMed

    Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J

    2016-07-01

    The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment.

  16. Verification and adjustment of regional regression models for urban storm-runoff quality using data collected in Little Rock, Arkansas

    USGS Publications Warehouse

    Barks, C.S.

    1995-01-01

    Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent, and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor the MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of

  17. Divorce Stress and Adjustment Model: Locus of Control and Demographic Predictors.

    ERIC Educational Resources Information Center

    Barnet, Helen Smith

    This study depicts the divorce process over three time periods: predivorce decision phase, divorce proper, and postdivorce. Research has suggested that persons with a more internal locus of control experience less intense and shorter intervals of stress during the divorce proper and better postdivorce adjustment than do persons with a more…

  18. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  19. A Structural Equation Modeling Approach to the Study of Stress and Psychological Adjustment in Emerging Adults

    ERIC Educational Resources Information Center

    Asberg, Kia K.; Bowers, Clint; Renk, Kimberly; McKinney, Cliff

    2008-01-01

    Today's society puts constant demands on the time and resources of all individuals, with the resulting stress promoting a decline in psychological adjustment. Emerging adults are not exempt from this experience, with an alarming number reporting excessive levels of stress and stress-related problems. As a result, the present study addresses the…

  20. Transmission line model for strained quantum well lasers including carrier transport and carrier heating effects.

    PubMed

    Xia, Mingjun; Ghafouri-Shiraz, H

    2016-03-01

    This paper reports a new model for strained quantum well lasers based on the quantum well transmission line modeling method, in which the effects of both carrier transport and carrier heating are included. We have applied this new model and studied the effect of carrier transport on the output waveform of a strained quantum well laser in both the time and frequency domains. It has been found that the carrier transport increases the turn-on and turn-off delay times and the damping of the quantum well laser transient response. Also, analysis in the frequency domain indicates that the carrier transport causes the output spectrum of the quantum well laser in steady state to exhibit a redshift with a narrower bandwidth and lower magnitude. The simulation results of turn-on transients obtained by the proposed model are compared with those obtained by the rate equation laser model. The new model has also been used to study the effects of pump current spikes on the laser output waveform properties, and it was found that the presence of current spikes causes (i) a wavelength blueshift, (ii) a larger bandwidth, and (iii) a reduction in the magnitude and the side-lobe suppression ratio of the laser output spectrum. Analysis in both frequency and time domains confirms that the new proposed model can accurately predict the temporal and spectral behaviors of strained quantum well lasers. PMID:26974607

  1. Does including physiology improve species distribution model predictions of responses to recent climate change?

    PubMed

    Buckley, Lauren B; Waaser, Stephanie A; MacLean, Heidi J; Fox, Richard

    2011-12-01

    Thermal constraints on development are often invoked to predict insect distributions. These constraints tend to be characterized in species distribution models (SDMs) by calculating development time based on a constant lower development temperature (LDT). Here, we assessed whether species-specific estimates of LDT based on laboratory experiments can improve the ability of SDMs to predict the distribution shifts of six U.K. butterflies in response to recent climate warming. We find that species-specific and constant (5 degrees C) LDT degree-day models perform similarly at predicting distributions during the period of 1970-1982. However, when the models for the 1970-1982 period are projected to predict distributions in 1995-1999 and 2000-2004, species-specific LDT degree-day models modestly outperform constant LDT degree-day models. Our results suggest that, while including species-specific physiology in correlative models may enhance predictions of species' distribution responses to climate change, more detailed models may be needed to adequately account for interspecific physiological differences. PMID:22352161
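
    The degree-day calculation at the heart of these SDMs can be sketched in a few lines: development accumulates as the daily mean temperature exceeds a lower development temperature (LDT), either a constant 5 C or a species-specific laboratory estimate. The temperatures and the species-specific LDT below are made-up values for illustration.

    ```python
    import numpy as np

    def degree_days(daily_mean_temp, ldt):
        """Sum of daily exceedances of the lower development temperature."""
        return np.sum(np.maximum(daily_mean_temp - ldt, 0.0))

    temps = np.array([3.0, 6.5, 9.0, 12.0, 8.0, 4.0])   # daily mean temperatures, deg C
    print(degree_days(temps, ldt=5.0))    # constant LDT of 5 C
    print(degree_days(temps, ldt=7.3))    # hypothetical species-specific LDT
    ```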

  2. A structural model for the in vivo human cornea including collagen-swelling interaction.

    PubMed

    Cheng, Xi; Petsche, Steven J; Pinsky, Peter M

    2015-08-01

    A structural model of the in vivo cornea, which accounts for tissue swelling behaviour, for the three-dimensional organization of stromal fibres and for collagen-swelling interaction, is proposed. Modelled as a binary electrolyte gel in thermodynamic equilibrium, the stromal electrostatic free energy is based on the mean-field approximation. To account for active endothelial ionic transport in the in vivo cornea, which modulates osmotic pressure and hydration, stromal mobile ions are shown to satisfy a modified Boltzmann distribution. The elasticity of the stromal collagen network is modelled based on three-dimensional collagen orientation probability distributions for every point in the stroma obtained by synthesizing X-ray diffraction data for azimuthal angle distributions and second harmonic-generated image processing for inclination angle distributions. The model is implemented in a finite-element framework and employed to predict free and confined swelling of stroma in an ionic bath. For the in vivo cornea, the model is used to predict corneal swelling due to increasing intraocular pressure (IOP) and is adapted to model swelling in Fuchs' corneal dystrophy. The biomechanical response of the in vivo cornea to a typical LASIK surgery for myopia is analysed, including tissue fluid pressure and swelling responses. The model provides a new interpretation of the corneal active hydration control (pump-leak) mechanism based on osmotic pressure modulation. The results also illustrate the structural necessity of fibre inclination in stabilizing the corneal refractive surface with respect to changes in tissue hydration and IOP. PMID:26156299

  3. A structural model for the in vivo human cornea including collagen-swelling interaction

    PubMed Central

    Cheng, Xi; Petsche, Steven J.; Pinsky, Peter M.

    2015-01-01

    A structural model of the in vivo cornea, which accounts for tissue swelling behaviour, for the three-dimensional organization of stromal fibres and for collagen-swelling interaction, is proposed. Modelled as a binary electrolyte gel in thermodynamic equilibrium, the stromal electrostatic free energy is based on the mean-field approximation. To account for active endothelial ionic transport in the in vivo cornea, which modulates osmotic pressure and hydration, stromal mobile ions are shown to satisfy a modified Boltzmann distribution. The elasticity of the stromal collagen network is modelled based on three-dimensional collagen orientation probability distributions for every point in the stroma obtained by synthesizing X-ray diffraction data for azimuthal angle distributions and second harmonic-generated image processing for inclination angle distributions. The model is implemented in a finite-element framework and employed to predict free and confined swelling of stroma in an ionic bath. For the in vivo cornea, the model is used to predict corneal swelling due to increasing intraocular pressure (IOP) and is adapted to model swelling in Fuchs' corneal dystrophy. The biomechanical response of the in vivo cornea to a typical LASIK surgery for myopia is analysed, including tissue fluid pressure and swelling responses. The model provides a new interpretation of the corneal active hydration control (pump-leak) mechanism based on osmotic pressure modulation. The results also illustrate the structural necessity of fibre inclination in stabilizing the corneal refractive surface with respect to changes in tissue hydration and IOP. PMID:26156299

  4. Modeling of single char combustion, including CO oxidation in its boundary layer

    SciTech Connect

    Lee, C.H.; Longwell, J.P.; Sarofim, A.F.

    1994-10-25

    The combustion of a char particle can be divided into a transient phase, in which its temperature increases as it is heated by oxidation and by heat transfer from the surrounding gas; an approximately constant-temperature stage, in which gas-phase reaction is important and most of the carbon is consumed; and an extinction stage caused by carbon burnout. In this work, separate models were developed for the transient heating, where gas-phase reactions are unimportant, and for the steady-temperature stage, where gas-phase reactions are treated in detail. The transient char combustion model incorporates intrinsic char surface production of CO and CO{sub 2}, internal pore diffusion, and external mass and heat transfer. The model provides useful information on particle ignition, the burning temperature profile, combustion time, and carbon consumption rate. A gas phase reaction model incorporating the full set of 28 elementary C/H/O reactions was developed. This model calculated the gas phase CO oxidation reaction in the boundary layer at particle temperatures of 1250 K and 2500 K, using the carbon consumption rate and the burning temperature at the pseudo-steady state obtained from the temperature profile model; transient heating was not included. This gas phase model can predict the gas species and temperature distributions in the boundary layer, the CO{sub 2}/CO ratio, and the location of CO oxidation. A mechanistic heat and mass transfer model was added to the temperature profile model to predict combustion behavior in a fluidized bed. These models were applied to data from the fluidized combustion of Newlands coal char particles. 52 refs., 60 figs.

  5. Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2016-01-01

    The effect of finite span on the jet-surface interaction noise source and on the shielding and reflection of jet mixing noise is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented, with particular attention to the role of surface span on far-field noise. These effects are then included in existing empirical models that have previously assumed all surfaces to be semi-infinite. This extended abstract briefly describes the experimental setup and data, leaving the empirical modeling aspects for the final paper.

  6. Producing High-Accuracy Lattice Models from Protein Atomic Coordinates Including Side Chains

    PubMed Central

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M.

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models. PMID:22934109

  7. Producing high-accuracy lattice models from protein atomic coordinates including side chains.

    PubMed

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models. PMID:22934109

  8. Producing high-accuracy lattice models from protein atomic coordinates including side chains.

    PubMed

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models.

  9. A reduced Iwan model that includes pinning for bolted joint mechanics

    DOE PAGES

    Brake, Matthew Robert

    2016-05-12

    Bolted joints are prevalent in most assembled structures; however, predictive models for the behavior of these joints do not yet exist. Many calibrated models have been proposed to represent the stiffness and energy dissipation characteristics of a bolted joint. In particular, the Iwan model put forth by Segalman and later extended by Mignolet has been shown to be able to predict the response of a jointed structure over a range of excitations once calibrated at a nominal load. The Iwan model, however, is not widely adopted due to the high computational expense of implementing it in a numerical simulation. To address this, an analytical, closed-form representation of the Iwan model is derived under the hypothesis that upon a load reversal, the distribution of friction elements within the interface resembles a scaled version of the original distribution of friction elements. Finally, the Iwan model is extended to include the pinning behavior inherent in a bolted joint.
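
    The closed-form model derived in the paper is not reproduced here; the following is a minimal discrete Iwan (parallel Jenkins elements) sketch with hypothetical stiffness and slip-force values, intended only to illustrate the hysteretic joint force that such models represent (pinning is omitted).

    ```python
    import numpy as np

    N = 200
    k = 1.0e6                              # element stiffness [N/m], illustrative
    f_slip = np.linspace(1.0, 200.0, N)    # assumed slip-force distribution [N]

    def iwan_force(u_history):
        """Quasi-static joint force for a prescribed displacement history."""
        x = np.zeros(N)                    # slider positions
        forces = []
        for u in u_history:
            trial = k * (u - x)            # elastic trial force in each element
            stuck = np.abs(trial) <= f_slip
            sliding = ~stuck
            # sliding elements saturate at +/- f_slip; update their slider positions
            x[sliding] = u - np.sign(trial[sliding]) * f_slip[sliding] / k
            elem_force = np.where(stuck, trial, np.sign(trial) * f_slip)
            forces.append(elem_force.sum() / N)
        return np.array(forces)

    # one loading-unloading cycle traces a hysteresis loop
    u = 1e-4 * np.sin(np.linspace(0, 2 * np.pi, 400))
    F = iwan_force(u)
    print("peak joint force:", F.max())
    ```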

  10. A full model for simulation of electrochemical cells including complex behavior

    NASA Astrophysics Data System (ADS)

    Esperilla, J. J.; Félez, J.; Romero, G.; Carretero, A.

    This communication presents a model of electrochemical cells developed to simulate their electrical, chemical and thermal behavior, showing the differences when thermal effects are or are not considered in the charge-discharge process. The work presented here has been applied to the particular case of the Pb,PbSO4|H2SO4(aq)|PbO2,Pb cell, which forms the basis of the lead-acid batteries so widely used in the automotive industry and as traction batteries in electric or hybrid vehicles. Each half-cell is considered independently in the model. For each half-cell, in addition to the main electrode reaction, a secondary reaction is considered: the hydrogen evolution reaction at the negative electrode and the oxygen evolution reaction at the positive. The equilibrium potential is calculated with the Nernst equation, in which the activity coefficients are fitted to an exponential function using experimental data. On the other hand, the two main mechanisms that produce the overpotential are considered, namely activation (charge transfer) and diffusion. First, an isothermal model was studied in order to show the behavior of the main phenomena. A more complex model was also studied, including thermal behavior. This model is very useful in the case of traction batteries in electric and hybrid vehicles, where high current intensities appear. Some simulation results are also presented in order to show the accuracy of the proposed models.
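
    A minimal sketch of the equilibrium-potential step described above (Nernst equation with an exponential activity-coefficient fit); the standard potential, fit coefficients and molalities below are placeholders rather than the paper's fitted values.

    ```python
    import numpy as np

    R, F_const, T = 8.314, 96485.0, 298.15   # J/(mol K), C/mol, K
    n = 2                                    # electrons transferred

    def activity_coeff(m, a=0.5, b=0.3):
        """Illustrative exponential fit gamma(m) = a*exp(b*m); a and b are
        placeholders for coefficients fitted to experimental data."""
        return a * np.exp(b * m)

    def nernst_potential(E0, m_ox, m_red):
        a_ox = activity_coeff(m_ox) * m_ox
        a_red = activity_coeff(m_red) * m_red
        return E0 + (R * T) / (n * F_const) * np.log(a_ox / a_red)

    # illustrative positive half-cell evaluation
    print(nernst_potential(E0=1.69, m_ox=4.5, m_red=1.0))
    ```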

  11. RELAP5-3D Code Includes ATHENA Features and Models

    SciTech Connect

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF{sub 6}, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper. (authors)

  12. RELAP5-3D Code Includes Athena Features and Models

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  13. A Modelling Framework for Gene Regulatory Networks Including Transcription and Translation.

    PubMed

    Edwards, R; Machina, A; McGregor, G; van den Driessche, P

    2015-06-01

    Qualitative models of gene regulatory networks have generally considered transcription factors to regulate directly the expression of other transcription factors, without any intermediate variables. In fact, gene expression always involves transcription, which produces mRNA molecules, followed by translation, which produces protein molecules, which can then act as transcription factors for other genes (in some cases after post-transcriptional modifications). Suppressing these multiple steps implicitly assumes that the qualitative behaviour does not depend on them. Here we explore a class of expanded models that explicitly includes both transcription and translation, keeping track of both mRNA and protein concentrations. We mainly deal with regulation functions that are steep sigmoids or step functions, as is often done in protein-only models. We find that flow cannot be constrained to switching domains, though there can still be asymptotic approach to singular stationary points (fixed points in the vicinity of switching thresholds). This avoids the thorny issue of singular flow, but leads to somewhat more complicated possibilities for flow between threshold crossings. In the infinitely fast limit of either mRNA or protein rates, we find that solutions converge uniformly to solutions of the corresponding protein-only model on arbitrary finite time intervals. This leaves open the possibility that the limit system (with one type of variable infinitely fast) may have different asymptotic behaviour, and indeed, we find an example in which stability of a fixed point in the protein-only model is lost in the expanded model. Our results thus show that including mRNA as a variable may change the behaviour of solutions. PMID:25758753
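
    A minimal Euler-integration sketch contrasting an expanded transcription-translation model with its protein-only (quasi-steady-state mRNA) reduction, using a steep Hill function in place of a strict step function; all rate constants are illustrative and not taken from the paper.

    ```python
    import numpy as np

    def hill(p, theta=1.0, m=20):
        """Steep repression of transcription by the protein product."""
        return theta**m / (theta**m + p**m)

    def simulate(expanded=True, T=50.0, dt=1e-3,
                 k_tx=2.0, d_m=5.0, k_tl=5.0, d_p=1.0):
        m, p = 0.0, 0.0
        out = []
        for _ in range(int(T / dt)):
            if expanded:
                # explicit transcription (mRNA) and translation (protein) steps
                m += dt * (k_tx * hill(p) - d_m * m)
                p += dt * (k_tl * m - d_p * p)
            else:
                # protein-only reduction: mRNA assumed at quasi-steady state
                p += dt * (k_tl * k_tx / d_m * hill(p) - d_p * p)
            out.append(p)
        return np.array(out)

    # for these illustrative rates both versions settle near the same level
    print(simulate(True)[-1], simulate(False)[-1])
    ```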

  14. A systematic review of the impact of including both waist and hip circumference in risk models for cardiovascular diseases, diabetes and mortality.

    PubMed

    Cameron, A J; Magliano, D J; Söderberg, S

    2013-01-01

    Both a larger waist and narrow hips are associated with heightened risk of diabetes, cardiovascular diseases and premature mortality. We review the risk of these outcomes for levels of waist and hip circumferences when terms for both anthropometric measures were included in regression models. MEDLINE and EMBASE were searched (last updated July 2012) for studies reporting the association with the outcomes mentioned earlier for both waist and hip circumferences (unadjusted and with both terms included in the model). Ten studies reported the association between hip circumference and death and/or disease outcomes both unadjusted and adjusted for waist circumference. Five studies reported the risk associated with waist circumference both unadjusted and adjusted for hip circumference. With the exception of one study of venous thromboembolism, the full strength of the association between either waist circumference or hip circumference with morbidity and/or mortality was only apparent when terms for both anthropometric measures were included in regression models. Without accounting for the protective effect of hip circumference, the effect of obesity on risk of death and disease may be seriously underestimated. Considered together (but not as a ratio measure), waist and hip circumference may improve risk prediction models for cardiovascular disease and other outcomes.
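
    A hedged synthetic-data sketch, not based on any of the reviewed studies, of why the protective hip association only emerges when waist and hip enter the regression together; it assumes scikit-learn is available and all effect sizes are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic illustration: waist raises risk, hip lowers it, and the two are
    # positively correlated, so single-measure models mask the hip effect.
    rng = np.random.default_rng(1)
    n = 20000
    hip = rng.normal(100, 8, n)
    waist = 0.8 * hip + rng.normal(10, 6, n)
    lin = 0.08 * (waist - waist.mean()) - 0.06 * (hip - hip.mean()) - 2.0
    y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

    def fit(X):
        return LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()

    print("hip alone     :", fit(hip.reshape(-1, 1)))
    print("waist alone   :", fit(waist.reshape(-1, 1)))
    print("both included :", fit(np.column_stack([waist, hip])))
    ```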

  15. SPheno 3.1: extensions including flavour, CP-phases and models beyond the MSSM

    NASA Astrophysics Data System (ADS)

    Porod, W.; Staub, F.

    2012-11-01

    We describe recent extensions of the program SPheno, including flavour aspects, CP-phases, R-parity violation and low energy observables. In case of flavour mixing all masses of supersymmetric particles are calculated including the complete flavour structure and all possible CP-phases at the 1-loop level. We give details on implemented seesaw models, low energy observables and the corresponding extension of the SUSY Les Houches Accord. Moreover, we comment on the possibilities to include MSSM extensions in SPheno. Catalogue identifier: ADRV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRV_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154062 No. of bytes in distributed program, including test data, etc.: 1336037 Distribution format: tar.gz Programming language: Fortran95. Computer: PC running under Linux, should run in every Unix environment. Operating system: Linux, Unix. Classification: 11.6. Catalogue identifier of previous version: ADRV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 153(2003)275 Does the new version supersede the previous version?: Yes Nature of problem: The first issue is the determination of the masses and couplings of supersymmetric particles in various supersymmetric models, the R-parity conserved MSSM with generation mixing and including CP-violating phases, various seesaw extensions of the MSSM and the MSSM with bilinear R-parity breaking. Low energy data on Standard Model fermion masses, gauge couplings and electroweak gauge boson masses serve as constraints. Radiative corrections from supersymmetric particles to these inputs must be calculated. Theoretical constraints on the soft SUSY breaking parameters from a high scale theory are imposed and the parameters at the electroweak scale are obtained from the

  16. Include dispersion in quantum chemical modeling of enzymatic reactions: the case of isoaspartyl dipeptidase.

    PubMed

    Zhang, Hai-Mei; Chen, Shi-Lu

    2015-06-01

    The lack of dispersion in the B3LYP functional has been proposed to be the main origin of big errors in quantum chemical modeling of a few enzymes and transition metal complexes. In this work, the essential dispersion effects that affect quantum chemical modeling are investigated. With binuclear zinc isoaspartyl dipeptidase (IAD) as an example, dispersion is included in the modeling of enzymatic reactions by two different procedures, i.e., (i) geometry optimizations followed by single-point calculations of dispersion (approach I) and (ii) the inclusion of dispersion throughout geometry optimization and energy evaluation (approach II). Based on a 169-atom chemical model, the calculations show a qualitative consistency between approaches I and II in energetics and most key geometries, demonstrating that both approaches are available with the latter preferential since both geometry and energy are dispersion-corrected in approach II. When a smaller model without Arg233 (147 atoms) was used, an inconsistency was observed, indicating that the missing dispersion interactions are essentially responsible for determining equilibrium geometries. Other technical issues and mechanistic characteristics of IAD are also discussed, in particular with respect to the effects of Arg233.

  17. Sensitivity of an atmospheric photochemistry model to chlorine perturbations including consideration of uncertainty propagation

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.; Douglass, A. R.

    1986-01-01

    Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations ranging from the present influx of chlorine-containing compounds to several times that influx is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but only allowing the cases in which the calculated present atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed a nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
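
    A toy sketch of the Monte Carlo uncertainty-propagation procedure: inputs are perturbed lognormally, the model is run repeatedly, and the runs are optionally constrained by simulated present-day values; toy_ozone_model is a hypothetical stand-in, not the photochemistry model used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_ozone_model(k):
        """Hypothetical stand-in mapping rate coefficients to
        (present-day ClO proxy, percent ozone change)."""
        clo = k[0] * k[1] / k[2]
        depletion = -10.0 * clo / (1.0 + clo)
        return clo, depletion

    nominal = np.array([1.0, 2.0, 3.0])
    sigma = 0.3                                # assumed 1-sigma lognormal uncertainty
    runs = [toy_ozone_model(nominal * rng.lognormal(0.0, sigma, 3))
            for _ in range(300)]
    clo, do3 = np.array(runs).T

    print("all runs    : mean %.2f%%, 1-sigma %.2f%%" % (do3.mean(), do3.std()))
    # keep only runs whose present-day ClO proxy falls in an "observed" range
    ok = (clo > 0.4) & (clo < 0.9)
    print("constrained : mean %.2f%%, 1-sigma %.2f%%" % (do3[ok].mean(), do3[ok].std()))
    ```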

  18. Multiple tail models including inverse measures for structural design under uncertainties

    NASA Astrophysics Data System (ADS)

    Ramu, Palaniappan

    Sampling-based reliability estimation with expensive computer models may be computationally prohibitive due to the large number of required simulations. One way to alleviate the computational expense is to extrapolate reliability estimates from observed levels to unobserved levels. Classical tail modeling techniques provide a class of models to enable this extrapolation using asymptotic theory by approximating the tail region of the cumulative distribution function (CDF). This work proposes three alternate tail extrapolation techniques including inverse measures that can complement classical tail modeling. The proposed approach, multiple tail models, applies the two classical and three alternate extrapolation techniques simultaneously to estimate inverse measures at the extrapolation regions and uses the median as the best estimate. It is observed that the range of the five estimates can be used as a good approximation of the error associated with the median estimate. Accuracy and computational efficiency are competing factors in selecting sample size. Yet, as our numerical studies reveal, the accuracy lost to the reduction of computational power is very small in the proposed method. The method is demonstrated on standard statistical distributions and complex engineering examples.
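
    A sketch of the multiple-tail-model idea with three simple tail fits standing in for the five techniques used in the work: each extrapolates a small-probability quantile, the median serves as the point estimate and the spread as a rough error measure; the estimators, threshold fraction and sample are illustrative.

    ```python
    import numpy as np
    from scipy.stats import genpareto, norm

    rng = np.random.default_rng(2)
    g = rng.normal(0.0, 1.0, 500)            # limit-state samples (cheap surrogate here)
    p_target = 1e-4                          # unobserved exceedance level to extrapolate to

    def q_exponential(x, p, frac=0.1):
        u = np.quantile(x, 1 - frac)         # threshold at the observed level
        lam = 1.0 / (x[x > u] - u).mean()    # exponential tail fit above the threshold
        return u + np.log(frac / p) / lam

    def q_normal(x, p):
        return x.mean() + x.std() * norm.ppf(1 - p)

    def q_gpd(x, p, frac=0.1):
        u = np.quantile(x, 1 - frac)
        c, _, scale = genpareto.fit(x[x > u] - u, floc=0.0)
        return u + genpareto.ppf(1 - p / frac, c, loc=0.0, scale=scale)

    estimates = np.array([q_exponential(g, p_target),
                          q_normal(g, p_target),
                          q_gpd(g, p_target)])
    print("median estimate     :", np.median(estimates))
    print("range (error proxy) :", estimates.max() - estimates.min())
    ```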

  19. Model for the catalytic oxidation of CO, including gas-phase impurities and CO desorption

    NASA Astrophysics Data System (ADS)

    Buendía, G. M.; Rikvold, P. A.

    2013-07-01

    We present results of kinetic Monte Carlo simulations of a modified Ziff-Gulari-Barshad model for the reaction CO+O → CO2 on a catalytic surface. Our model includes impurities in the gas phase, CO desorption, and a modification known to eliminate the unphysical O poisoned phase. The impurities can adsorb and desorb on the surface, but otherwise remain inert. In a previous work that did not include CO desorption [Buendía and Rikvold, Phys. Rev. E 85, 031143 (2012)], we found that the impurities have very distinctive effects on the phase diagram and greatly diminish the reactivity of the system. If the impurities do not desorb, once the system reaches a stationary state the CO2 production disappears. When the impurities are allowed to desorb, there are regions where the CO2 reaction window reappears, although greatly reduced. Following experimental evidence indicating that temperature effects are crucial in many catalytic processes, here we further analyze these effects by including a CO desorption rate. We find that CO desorption smooths the transition between the reactive and the CO-rich phase, and, most importantly, it can counteract the negative effects of the presence of impurities by widening the reactive window, such that the system remains catalytically active over the whole range of CO pressures.
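
    A heavily simplified kinetic Monte Carlo sketch in the spirit of a ZGB surface with CO desorption and inert, desorbing impurities; the lattice size, gas-phase fractions, desorption probabilities and update rules are illustrative and do not reproduce the modified model of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 64
    EMPTY, CO, O, IMP = 0, 1, 2, 3
    lat = np.zeros((L, L), dtype=int)
    y_co, y_imp = 0.52, 0.05          # CO and impurity fractions in the gas phase
    p_des_co, p_des_imp = 0.01, 0.05  # desorption probabilities per visit
    co2 = 0

    def neighbors(i, j):
        return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

    def react(i, j, partner):
        """React site (i, j) with a randomly chosen neighbor holding `partner`."""
        nb = neighbors(i, j)
        for k in rng.permutation(len(nb)):
            a, b = nb[k]
            if lat[a, b] == partner:
                lat[i, j] = lat[a, b] = EMPTY
                return True
        return False

    for _ in range(100 * L * L):
        i, j = rng.integers(0, L, size=2)
        if lat[i, j] == CO and rng.random() < p_des_co:
            lat[i, j] = EMPTY                      # CO desorption
        elif lat[i, j] == IMP and rng.random() < p_des_imp:
            lat[i, j] = EMPTY                      # impurity desorption
        elif lat[i, j] == EMPTY:
            r = rng.random()
            if r < y_co:                           # CO adsorption, then CO + O -> CO2
                lat[i, j] = CO
                co2 += react(i, j, O)
            elif r < y_co + y_imp:                 # inert impurity adsorption
                lat[i, j] = IMP
            else:                                  # O2 needs two adjacent empty sites
                nb = neighbors(i, j)
                a, b = nb[rng.integers(len(nb))]
                if lat[a, b] == EMPTY:
                    lat[i, j] = lat[a, b] = O
                    co2 += react(i, j, CO)
                    if lat[a, b] == O:
                        co2 += react(a, b, CO)

    print("CO2 produced per site:", co2 / L**2)
    ```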

  20. General hypothesis and shell model for the synthesis of semiconductor nanotubes, including carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Mohammad, S. Noor

    2010-09-01

    Semiconductor nanotubes, including carbon nanotubes, have vast potential for new technology development. The fundamental physics and growth kinetics of these nanotubes are still poorly understood. Various models developed to elucidate the growth suffer from limited applicability. An in-depth investigation of the fundamentals of nanotube growth has, therefore, been carried out. For this investigation, various features of nanotube growth, and the role of the foreign element catalytic agent (FECA) in this growth, have been considered. Observed growth anomalies have been analyzed. Based on this analysis, a new shell model and a general hypothesis have been proposed for the growth. The essential element of the shell model is the seed generated from segregation during growth. The seed structure has been defined, and the formation of the droplet from this seed has been described. A modified definition of the droplet exhibiting adhesive properties has also been presented. Various characteristics of the droplet, required for alignment and organization of atoms into tubular forms, have been discussed. Employing the shell model, plausible scenarios for the formation of carbon nanotubes and for the variation in their characteristics have been articulated. Experimental evidence, for example for the formation of a shell around a core, the dipole characteristics of the seed, and the existence of nanopores in the seed, has been presented; it appears to support the validity of the proposed model. The diversities of nanotube characteristics, the fundamentals underlying the creation of bamboo-shaped carbon nanotubes, and the impurity generation on the surface of carbon nanotubes have been elucidated. The catalytic action of FECA on growth has been quantified. The applicability of the proposed model to nanotube growth by a variety of mechanisms has been elaborated. These mechanisms include the vapor-liquid-solid mechanism, the oxide-assisted growth mechanism, the self

  1. Standardized Competencies for Parenteral Nutrition Order Review and Parenteral Nutrition Preparation, Including Compounding: The ASPEN Model.

    PubMed

    Boullata, Joseph I; Holcombe, Beverly; Sacks, Gordon; Gervasio, Jane; Adams, Stephen C; Christensen, Michael; Durfee, Sharon; Ayers, Phil; Marshall, Neil; Guenter, Peggi

    2016-08-01

    Parenteral nutrition (PN) is a high-alert medication with a complex drug use process. Key steps in the process include the review of each PN prescription followed by the preparation of the formulation. The preparation step includes compounding the PN or activating a standardized commercially available PN product. The verification and review, as well as preparation of this complex therapy, require competency that may be determined by using a standardized process for pharmacists and for pharmacy technicians involved with PN. An American Society for Parenteral and Enteral Nutrition (ASPEN) standardized model for PN order review and PN preparation competencies is proposed based on a competency framework, the ASPEN-published interdisciplinary core competencies, safe practice recommendations, and clinical guidelines, and is intended for institutions and agencies to use with their staff.

  2. Standardized Competencies for Parenteral Nutrition Order Review and Parenteral Nutrition Preparation, Including Compounding: The ASPEN Model.

    PubMed

    Boullata, Joseph I; Holcombe, Beverly; Sacks, Gordon; Gervasio, Jane; Adams, Stephen C; Christensen, Michael; Durfee, Sharon; Ayers, Phil; Marshall, Neil; Guenter, Peggi

    2016-08-01

    Parenteral nutrition (PN) is a high-alert medication with a complex drug use process. Key steps in the process include the review of each PN prescription followed by the preparation of the formulation. The preparation step includes compounding the PN or activating a standardized commercially available PN product. The verification and review, as well as preparation of this complex therapy, require competency that may be determined by using a standardized process for pharmacists and for pharmacy technicians involved with PN. An American Society for Parenteral and Enteral Nutrition (ASPEN) standardized model for PN order review and PN preparation competencies is proposed based on a competency framework, the ASPEN-published interdisciplinary core competencies, safe practice recommendations, and clinical guidelines, and is intended for institutions and agencies to use with their staff. PMID:27317615

  3. Dynamic modelling and response characteristics of a magnetic bearing rotor system including auxiliary bearings

    NASA Technical Reports Server (NTRS)

    Free, April M.; Flowers, George T.; Trent, Victor S.

    1993-01-01

    Auxiliary bearings are a critical feature of any magnetic bearing system. They protect the soft iron core of the magnetic bearing during an overload or failure. An auxiliary bearing typically consists of a rolling element bearing or bushing with a clearance gap between the rotor and the inner race of the support. The dynamics of such systems can be quite complex. It is desired to develop a rotor-dynamic model and assess the dynamic behavior of a magnetic bearing rotor system which includes the effects of auxiliary bearings. Of particular interest are the effects of introducing sideloading into such a system during failure of the magnetic bearing. A model is developed from an experimental test facility and a number of simulation studies are performed. These results are presented and discussed.

  4. A model for Huanglongbing spread between citrus plants including delay times and human intervention

    NASA Astrophysics Data System (ADS)

    Vilamiu, Raphael G. d'A.; Ternes, Sonia; Braga, Guilherme A.; Laranjeira, Francisco F.

    2012-09-01

    The objective of this work was to present a compartmental deterministic mathematical model for representing the dynamics of HLB disease in a citrus orchard, including delay in the disease's incubation phase in the plants, and a delay period on the nymphal stage of Diaphorina citri, the most important HLB insect vector in Brazil. Numerical simulations were performed to assess the possible impacts of human detection efficiency of symptomatic plants, as well as the influence of a long incubation period of HLB in the plant.
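
    A minimal discrete-delay compartmental sketch (susceptible-exposed-infectious plants with a fixed incubation delay and a removal rate standing in for human detection of symptomatic plants); vector dynamics and the nymphal-stage delay are omitted, and all parameter values are hypothetical.

    ```python
    import numpy as np

    beta = 0.004      # transmission rate per infectious plant per day (illustrative)
    tau = 180         # incubation (latent) delay in days (illustrative)
    gamma = 1 / 120.0 # detection-and-removal rate of symptomatic plants
    N = 1000          # plants in the orchard
    dt = 1.0
    steps = int(10 * 365 / dt)

    S, E, I = N - 1.0, 0.0, 1.0
    new_exposed = np.zeros(steps)            # history buffer for the delayed term
    for t in range(steps):
        inc = beta * S * I / N               # new infections entering incubation
        new_exposed[t] = inc
        # plants exposed exactly tau days ago become infectious (symptomatic) now
        matured = new_exposed[t - int(tau / dt)] if t >= tau / dt else 0.0
        S += dt * (-inc)
        E += dt * (inc - matured)
        I += dt * (matured - gamma * I)

    print("plants still healthy after 10 years:", round(S))
    ```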

  5. Development of a product model for cut-and-cover tunnels including degradations

    NASA Astrophysics Data System (ADS)

    Aruga, Takashi; Yabuki, Nobuyoshi; Arai, Yasushi

    Cut-and-cover tunnels are constructed on site. The various environmental conditions and construction techniques have a significant influence on the quality of the tunnel. It is extremely difficult to rebuild the tunnel if a structural problem is found once construction is completed. Thus, suitable maintenance is needed to ensure the tunnel remains in a healthy condition. To execute better maintenance, information on the design and construction of the tunnel is vital for inspection of degradation, estimation of occurrence factors and planning of repair or refurbishing works. In this paper, we developed a product model for representing cut-and-cover tunnels, including degradations, for effective information use in maintenance work. As a first step, we investigated the characteristics of cut-and-cover tunnels and of degradations of reinforced concrete members and developed a conceptual model. Then, we implemented the conceptual product model by expanding the Industry Foundation Classes (IFC). Finally, we verified the product model by applying it to a simple tunnel.

  6. A Mercury orientation model including non-zero obliquity and librations

    NASA Astrophysics Data System (ADS)

    Margot, Jean-Luc

    2009-12-01

    Planetary orientation models describe the orientation of the spin axis and prime meridian of planets in inertial space as a function of time. The models are required for the planning and execution of Earth-based or space-based observational work, e.g. to compute viewing geometries and to tie observations to planetary coordinate systems. The current orientation model for Mercury is inadequate because it uses an obsolete spin orientation, neglects oscillations in the spin rate called longitude librations, and relies on a prime meridian that no longer reflects its intended dynamical significance. These effects result in positional errors on the surface of ~1.5 km in latitude and up to several km in longitude, about two orders of magnitude larger than the finest image resolution currently attainable. Here we present an updated orientation model which incorporates modern values of the spin orientation, includes a formulation for longitude librations, and restores the dynamical significance to the prime meridian. We also use modern values of the orbit normal, spin axis orientation, and precession rates to quantify an important relationship between the obliquity and moment of inertia differences.
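
    A sketch of the general form of such an orientation model: a spin-pole direction plus a prime-meridian angle W(t) that includes an 88-day longitude-libration term; the numerical constants below are placeholders, not the published Mercury values.

    ```python
    import numpy as np

    def mercury_orientation(days_since_j2000):
        """Spin-pole RA/Dec and prime-meridian angle W, all in degrees.
        Constants are illustrative placeholders only."""
        d = days_since_j2000
        alpha0, delta0 = 281.0, 61.4            # spin-pole right ascension, declination
        W0, Wdot = 329.5, 6.1385                # prime meridian at epoch, spin rate [deg/day]
        M = np.radians(174.8 + 4.0923 * d)      # mean anomaly of the ~88-day orbit
        libration = 0.01 * np.sin(M)            # longitude libration term [deg]
        W = (W0 + Wdot * d + libration) % 360.0
        return alpha0, delta0, W

    print(mercury_orientation(1000.0))
    ```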

  7. Development and Application of a Nonbonded Cu2+ Model That Includes the Jahn–Teller Effect

    PubMed Central

    2015-01-01

    Metal ions are both ubiquitous to and crucial in biology. In classical simulations, they are typically described as simple van der Waals spheres, making it difficult to provide reliable force field descriptions for them. An alternative is given by nonbonded dummy models, in which the central metal atom is surrounded by dummy particles that each carry a partial charge. While such dummy models already exist for other metal ions, none is available yet for Cu2+ because of the challenge to reproduce the Jahn–Teller distortion. This challenge is addressed in the current study, where, for the first time, a dummy model including a Jahn–Teller effect is developed for Cu2+. We successfully validate its usefulness by studying metal binding in two biological systems: the amyloid-β peptide and the mixed-metal enzyme superoxide dismutase. We believe that our parameters will be of significant value for the computational study of Cu2+-dependent biological systems using classical models. PMID:26167255

  8. Habitability of super-Earth planets around other suns: models including Red Giant Branch evolution.

    PubMed

    von Bloh, W; Cuntz, M; Schröder, K-P; Bounama, C; Franck, S

    2009-01-01

    The unexpected diversity of exoplanets includes a growing number of super-Earth planets, i.e., exoplanets with masses of up to several Earth masses and a similar chemical and mineralogical composition as Earth. We present a thermal evolution model for a 10 Earth-mass planet orbiting a star like the Sun. Our model is based on the integrated system approach, which describes the photosynthetic biomass production and takes into account a variety of climatological, biogeochemical, and geodynamical processes. This allows us to identify a so-called photosynthesis-sustaining habitable zone (pHZ), as determined by the limits of biological productivity on the planetary surface. Our model considers solar evolution during the main-sequence stage and along the Red Giant Branch as described by the most recent solar model. We obtain a large set of solutions consistent with the principal possibility of life. The highest likelihood of habitability is found for "water worlds." Only mass-rich water worlds are able to realize pHZ-type habitability beyond the stellar main sequence on the Red Giant Branch.

  9. Habitability of super-Earth planets around other suns: models including Red Giant Branch evolution.

    PubMed

    von Bloh, W; Cuntz, M; Schröder, K-P; Bounama, C; Franck, S

    2009-01-01

    The unexpected diversity of exoplanets includes a growing number of super-Earth planets, i.e., exoplanets with masses of up to several Earth masses and a similar chemical and mineralogical composition as Earth. We present a thermal evolution model for a 10 Earth-mass planet orbiting a star like the Sun. Our model is based on the integrated system approach, which describes the photosynthetic biomass production and takes into account a variety of climatological, biogeochemical, and geodynamical processes. This allows us to identify a so-called photosynthesis-sustaining habitable zone (pHZ), as determined by the limits of biological productivity on the planetary surface. Our model considers solar evolution during the main-sequence stage and along the Red Giant Branch as described by the most recent solar model. We obtain a large set of solutions consistent with the principal possibility of life. The highest likelihood of habitability is found for "water worlds." Only mass-rich water worlds are able to realize pHZ-type habitability beyond the stellar main sequence on the Red Giant Branch. PMID:19630504

  10. A laboratory model of the aortic root flow including the coronary arteries

    NASA Astrophysics Data System (ADS)

    Querzoli, Giorgio; Fortini, Stefania; Espa, Stefania; Melchionna, Simone

    2016-08-01

    Cardiovascular flows have been extensively investigated by means of in vitro models to assess prosthetic valve performance and to provide insight into the fluid dynamics of the heart and proximal aorta. In particular, models for the study of the flow past the aortic valve have been continuously improved by including, among other things, the compliance of the vessel and more realistic geometries. The flow within the sinuses of Valsalva is known to play a fundamental role in the dynamics of the aortic valve since they host a recirculation region that interacts with the leaflets. The coronary arteries originate from the ostia located within two of the three sinuses, and their presence may significantly affect the fluid dynamics of the aortic root. In spite of their importance, to the best of the authors' knowledge, coronary arteries have so far not been included in in vitro models of the transvalvular aortic flow. We present a pulse duplicator consisting of a passively pulsing ventricle, a compliant proximal aorta, and coronary arteries connected to the sinuses of Valsalva. The coronary flow is modulated by a self-regulating device mimicking the physiological mechanism, which is based on the contraction and relaxation of the heart muscle during the cardiac cycle. Results show that the model reproduces the coronary flow satisfactorily. The analysis of the time evolution of the velocity and vorticity fields within the aortic root reveals the main characteristics of the backflow generated through the aorta in order to feed the coronaries during diastole. Experiments without coronary flow have been run for comparison. Interestingly, the lifetime of the vortex forming in the sinus of Valsalva during systole is reduced by the presence of the coronaries. As a matter of fact, at the end of the systole, that vortex is washed out because of the suction generated by the coronary flow. Correspondingly, the valve closure is delayed and faster compared to the case with

  11. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308
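
    As a much simpler illustration of the dynamic-programming machinery shared by these methods (it is neither TORNADO nor a nearest-neighbor model), here is a minimal Nussinov-style base-pair maximization for a single sequence.

    ```python
    def nussinov(seq, min_loop=3):
        """Maximum number of nested base pairs with a minimum hairpin-loop size."""
        pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = dp[i][j - 1]                      # j left unpaired
                for k in range(i, j - min_loop):         # j paired with k
                    if (seq[k], seq[j]) in pairs:
                        left = dp[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + dp[k + 1][j - 1])
                dp[i][j] = best
        return dp[0][n - 1]

    print(nussinov("GGGAAAUCC"))   # maximum number of base pairs for a toy hairpin
    ```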

  12. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  13. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS

    NASA Astrophysics Data System (ADS)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2010-12-01

    With 4 million ha currently grown for ethanol in Brazil alone, approximately half the global bioethanol production in 2005 (Smeets 2008), and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Indeed, ethanol made from biomass is currently the most widespread option for alternative transportation fuels. It was originally promoted as a carbon-neutral energy resource that could bring energy independence to countries and local opportunities to farmers, until attention was drawn to its environmental and socio-economic drawbacks. It is still not clear to what extent it is a solution or a contributor to climate change mitigation. Dynamic Global Vegetation Models can help address these issues and quantify the potential impacts of biofuels on ecosystems at scales ranging from on-site to global. The global agro-ecosystem model ORCHIDEE describes water, carbon and energy exchanges at the soil-atmosphere interface for a limited number of natural and agricultural vegetation types. In order to integrate agricultural management into the simulations and to capture more accurately the specificity of crop phenology, ORCHIDEE has been coupled with the agronomical model STICS. The resulting crop-oriented vegetation model ORCHIDEE-STICS has been used so far to simulate temperate crops such as wheat, corn and soybean. As a generic ecosystem model, each grid cell can include several vegetation types with their own phenology and management practices, making it suitable for spatial simulations. Here, ORCHIDEE-STICS is altered to include sugar cane as a new agricultural plant functional type, implemented and parametrized using the STICS approach. An on-site calibration and validation is then performed based on biomass and flux chamber measurements at several sites in Australia, and variables such as LAI, dry weight, heat fluxes and respiration are used to evaluate the ability of the model to simulate the specific

  14. Increasing shape modelling accuracy by adjusting for subject positioning: an application to the analysis of radiographic proximal femur symmetry using data from the Osteoarthritis Initiative.

    PubMed

    Lindner, C; Wallis, G A; Cootes, T F

    2014-04-01

    In total hip arthroplasty, the shape of the contra-lateral femur frequently serves as a template for preoperative planning. Previous research on contra-lateral femoral symmetry has been based on conventional hip geometric measurements (which reduce shape to a series of linear measurements) and did not take the effect of subject positioning on radiographic femur shape into account. The aim of this study was to analyse proximal femur symmetry based on statistical shape models (SSMs) which quantify global femoral shape while also adjusting for differences in subject positioning during image acquisition. We applied our recently developed fully automatic shape model matching (FASMM) system to automatically segment the proximal femur from AP pelvic radiographs to generate SSMs of the proximal femurs of 1258 Caucasian females (mean age: 61.3 SD=9.0). We used a combined SSM (capturing the left and right femurs) to identify and adjust for shape variation attributable to subject positioning as well as a single SSM (including all femurs as left femurs) to analyse proximal femur symmetry. We also calculated conventional hip geometric measurements (head diameter, neck width, shaft width and neck-shaft angle) using the output of the FASMM system. The combined SSM revealed two modes that were clearly attributable to subject positioning. The average difference (mean point-to-curve distance) between left and right femur shape was 1.0mm before and 0.8mm after adjusting for these two modes. The automatic calculation of conventional hip geometric measurements after adjustment gave an average absolute percent asymmetry of within 3.1% and an average absolute difference of within 1.1mm or 2.9° for all measurements. We conclude that (i) for Caucasian females the global shape of the right and left proximal femurs is symmetric without isolated locations of asymmetry; (ii) a combined left-right SSM can be used to adjust for radiographic shape variation due to subject positioning; and (iii

  15. An ecosystem model of the global ocean including Fe, Si, P colimitations

    NASA Astrophysics Data System (ADS)

    Aumont, Olivier; Maier-Reimer, Ernst; Blain, StéPhane; Monfray, P.

    2003-06-01

    Observations have shown that large areas of the world ocean are characterized by lower than expected chlorophyll concentrations given the ambient phosphate and nitrate levels. In these High Nutrient-Low Chlorophyll regions, limitations of phytoplankton growth by other nutrients like silicate or iron have been hypothesized and further evidenced by in situ experiments. To explore these limitations, a nine-component ecosystem model has been embedded in the Hamburg model of the oceanic carbon cycle (HAMOCC5). This model includes phosphate, silicate, dissolved iron, two phytoplankton size fractions (nanophytoplankton and diatoms), two zooplankton size fractions (microzooplankton and mesozooplankton), one detritus and semilabile dissolved organic matter. The model is able to reproduce the main characteristics of two of the three main HNLC areas, i.e., the Southern Ocean and the equatorial Pacific. In the subarctic Pacific, silicate and phosphate surface concentrations are largely underestimated because of deficiencies in ocean dynamics. The low chlorophyll concentrations in HNLC areas are explained by the traditional hypothesis of a simultaneous iron-grazing limitation: Diatoms are limited by iron whereas nanophytoplankton is controlled by very efficient grazing by microzooplankton. Phytoplankton assimilates 18 × 109 mol Fe yr-1 of which 73% is supplied by regeneration within the euphotic zone. The model predicts that the ocean carries with it about 75% of the phytoplankton demand for new iron, assuming a 1% solubility for atmospheric iron. Finally, it is shown that a higher supply of iron to surface water leads to a higher export production but paradoxically to a lower primary productivity.
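
    A minimal sketch of Liebig-style Fe/Si/P colimitation built from Monod terms, the usual construction in such ecosystem models; the half-saturation constants and concentrations are illustrative rather than the values used in HAMOCC5.

    ```python
    import numpy as np

    def growth_rate(fe, si, po4, mu_max=0.6,
                    k_fe=0.1e-9, k_si=4e-6, k_po4=0.1e-6):
        """Diatom growth rate (1/day): the most limiting Monod term sets growth.
        Half-saturation constants are illustrative, in mol/L."""
        lim = np.minimum.reduce([fe / (k_fe + fe),
                                 si / (k_si + si),
                                 po4 / (k_po4 + po4)])
        return mu_max * lim

    # HNLC-like surface water: ample phosphate and silicate, very little iron
    print(growth_rate(fe=0.05e-9, si=20e-6, po4=1.5e-6))
    ```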

  16. Distinguishing sediment waves from slope failure deposits: Field examples, including the 'humboldt slide', and modelling results

    USGS Publications Warehouse

    Lee, H.J.; Syvitski, J.P.M.; Parker, G.; Orange, Daniel L.; Locat, J.; Hutton, E.W.H.; Imran, J.

    2002-01-01

    Migrating sediment waves have been reported in a variety of marine settings, including submarine levee-fan systems, floors of fjords, and other basin or continental slope environments. Examination of such wave fields reveals nine diagnostic characteristics. When these characteristics are applied to several features previously attributed to submarine landslide deformation, they suggest that the features should most likely be reinterpreted as migrating sediment-wave fields. Sites that have been reinterpreted include the 'Humboldt slide' on the Eel River margin in northern California, the continental slope in the Gulf of Cadiz, the continental shelf off the Malaspina Glacier in the Gulf of Alaska, and the Adriatic shelf. A reassessment of all four features strongly suggests that numerous turbidity currents, separated by intervals of ambient hemipelagic sedimentation, deposited the wave fields over thousands of years. A numerical model of hyperpycnal discharge from the Eel River, for example, shows that under certain alongshore-current conditions, such events can produce turbidity currents that flow across the 'Humboldt slide', serving as the mechanism for the development of migrating sediment waves. Numerical experiments also demonstrate that where a series of turbidity currents flows across a rough seafloor (i.e. numerical steps), sediment waves can form and migrate upslope. Hemipelagic sedimentation between turbidity current events further facilitates the upslope migration of the sediment waves. Physical modelling of turbidity currents also confirms the formation and migration of seafloor bedforms. The morphologies of sediment waves generated both numerically and physically in the laboratory bear a strong resemblance to those observed in the field, including those that were previously described as submarine landslides.

  17. A coupled general circulation model for the Late Jurassic including fully interactive carbon cycling

    NASA Astrophysics Data System (ADS)

    Williams, J.; Valdes, P. J.; Leith, T. L.; Sagoo, N.

    2011-12-01

    The climatology of a coupled atmosphere - ocean (including sea ice) general circulation model for the Late Jurassic epoch (Kimmeridgian stage) is presented. The simulation framework used is the FAMOUS climate model [Jones et al, Climate Dynamics 25, 189-204 (2005)], which is a reduced resolution configuration of the UK Met Office model HadCM3 [Pope et al, Climate Dynamics 16, 123-46 (2000)]. In order to enable computation of carbon fluxes through the Earth System, fully interactive terrestrial and oceanic carbon cycle modules are added to FAMOUS. These include temporally evolving vegetation on land and populations of zooplankton, phytoplankton and nitrogenous nutrients in the ocean. The Kimmeridgian was a time of significantly enhanced carbon dioxide concentrations in the atmosphere (roughly four times preindustrial) and as such is a useful test bed for "paleocalibration" of a future climate perturbed by anthropogenic emissions of greenhouse gases [Barron et al, Paleoceanography 10 (5) 953-962 (1995) for example]. From a geological perspective, the Kimmeridgian was also a time of significant laying down of hydrocarbon reserves (particularly in the North Sea) and thus the inclusion of a fully interactive carbon cycle in FAMOUS enables the study of the dysoxic (low oxygen) and circulatory conditions relevant to their formation and preservation. The parameter space of both the terrestrial and oceanic carbon cycles was explored using the Latin Hypercube method [Mckay, Proceedings of the 24th conference on winter simulation, ACM Press, Arlington, Virginia, 57-564 (1992)], which enables efficient yet rigorous sampling of multiple covarying parameters. These parameters were validated using present day observations of meteorological, vegetative and biological parameters since the data available for the Jurassic itself is relatively scarce. To remove subjective bias in the validation process, the "Arcsine Mielke" skill score was used [Watterson, Int. J. Climatology, 16, 379
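
    A small sketch of Latin Hypercube sampling of the kind used to explore covarying carbon-cycle parameters; the parameter names and ranges are invented for illustration.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=None):
        """Each parameter range is split into n_samples strata; every stratum is
        sampled exactly once, with strata randomly paired across parameters."""
        rng = rng or np.random.default_rng()
        bounds = np.asarray(bounds, dtype=float)       # shape (n_params, 2)
        n_params = bounds.shape[0]
        u = np.empty((n_samples, n_params))
        for j in range(n_params):
            u[:, j] = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
        return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

    # e.g. three covarying carbon-cycle parameters (hypothetical ranges)
    samples = latin_hypercube(10, [[0.1, 1.0],     # phytoplankton growth scaling
                                   [1.5, 3.0],     # Q10-style temperature factor
                                   [0.01, 0.2]])   # remineralization rate
    print(samples.shape)
    ```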

  18. Jet Noise Modeling for Coannular Nozzles Including the Effects of Chevrons

    NASA Technical Reports Server (NTRS)

    Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.

    2003-01-01

    Development of good predictive models for jet noise has always been plagued by the difficulty in obtaining good quality data over a wide range of conditions in different facilities. We consider such issues very carefully in selecting data to be used in developing our model. Flight effects are of critical importance, and none of the means of determining them are without significant problems. Free-jet flight simulation facilities are very useful, and can provide meaningful data so long as they can be analytically transformed to the flight frame of reference. In this report we show that different methodologies used by NASA and industry to perform this transformation produce very different results, especially in the rear quadrant; this compels us to rely largely on static data to develop our model, but we show reasonable agreement with simulated flight data when these transformation issues are considered. A persistent problem in obtaining good quality data is noise generated in the experimental facility upstream of the test nozzle: valves, elbows, obstructions, and especially the combustor can contribute significant noise, and much of this noise is of a broadband nature, easily confused with jet noise. Muffling of these sources is costly in terms of size as well as expense, and it is particularly difficult in flight simulation facilities, where compactness of hardware is very important, as discussed by Viswanathan (Ref. 13). We feel that the effects of jet density on jet mixing noise may have been somewhat obscured by these problems, leading to the variable density exponent used in most jet noise prediction procedures including our own. We investigate this issue, applying Occam's razor (e.g., Ref. 14), in a search for the simplest physically meaningful model that adequately describes the observed phenomena. In a similar vein, we see no reason to reject the Lighthill approach; it provides a very solid basis upon which to build a predictive procedure, as we believe we


  19. Analytical model for radiative transfer including the effects of a rough material interface.

    PubMed

    Giddings, Thomas E; Kellems, Anthony R

    2016-08-20

    The reflected and transmitted radiance due to a source located above a water surface is computed based on models for radiative transfer in continuous optical media separated by a discontinuous air-water interface with random surface roughness. The air-water interface is described as the superposition of random, unresolved roughness on a deterministic realization of a stochastic wave surface at resolved scales. Under the geometric optics assumption, the bidirectional reflection and transmission functions for the air-water interface are approximated by applying regular perturbation methods to Snell's law and including the effects of a random surface roughness component. Formal analytical solutions to the radiative transfer problem under the small-angle scattering approximation account for the effects of scattering and absorption as light propagates through the atmosphere and water and also capture the diffusive effects due to the interaction of light with the rough material interface that separates the two optical media. Results of the analytical models are validated against Monte Carlo simulations, and the approximation to the bidirectional reflection function is also compared to another well-known analytical model.
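
    As a toy illustration of the geometric-optics ingredient described above, the sketch below refracts a single ray through an air-water interface whose normal carries a small random roughness tilt. This is not the authors' regular-perturbation treatment of Snell's law; the refractive indices and the tilt magnitude are assumed values used only for demonstration.

    ```python
    # Vector form of Snell's law for a ray crossing a (slightly tilted) interface.
    import numpy as np

    def refract(incident, normal, n1, n2):
        """Return the refracted unit vector, or None on total internal reflection."""
        incident = incident / np.linalg.norm(incident)
        normal = normal / np.linalg.norm(normal)
        cos_i = -np.dot(incident, normal)
        eta = n1 / n2
        k = 1.0 - eta**2 * (1.0 - cos_i**2)
        if k < 0.0:
            return None  # total internal reflection
        return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

    rng = np.random.default_rng(0)
    flat_normal = np.array([0.0, 0.0, 1.0])
    tilt = rng.normal(scale=0.05, size=3) * np.array([1.0, 1.0, 0.0])  # small roughness tilt
    rough_normal = flat_normal + tilt
    ray = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])  # 30 deg incidence
    print(refract(ray, rough_normal, 1.0, 1.34))  # assumed n_air = 1.0, n_water = 1.34
    ```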

  1. A phase-field model for incoherent martensitic transformations including plastic accommodation processes in the austenite

    NASA Astrophysics Data System (ADS)

    Kundin, J.; Raabe, D.; Emmerich, H.

    2011-10-01

    If alloys undergo an incoherent martensitic transformation, then plastic accommodation and relaxation accompany the transformation. To capture these mechanisms we develop an improved 3D microelastic-plastic phase-field model. It is based on the classical concepts of phase-field modeling of microelastic problems (Chen, L.Q., Wang Y., Khachaturyan, A.G., 1992. Philos. Mag. Lett. 65, 15-23). In addition to these it takes into account the incoherent formation of accommodation dislocations in the austenitic matrix, as well as their inheritance into the martensitic plates based on the crystallography of the martensitic transformation. We apply this new phase-field approach to the butterfly-type martensitic transformation in a Fe-30 wt%Ni alloy in direct comparison to recent experimental data (Sato, H., Zaefferer, S., 2009. Acta Mater. 57, 1931-1937). It is shown that the therein proposed mechanisms of plastic accommodation during the transformation can indeed explain the experimentally observed morphology of the martensitic plates as well as the orientation between martensitic plates and the austenitic matrix. The developed phase-field model constitutes a general simulation approach for different kinds of phase transformation phenomena that inherently include dislocation-based accommodation processes. The approach not only predicts the final equilibrium topology, misfit, size, crystallography, and aspect ratio of martensite-austenite ensembles resulting from a transformation, but it also resolves the associated dislocation dynamics and the distribution and size of the crystals themselves.

  2. A limit-cycle model of leg movements in cross-country skiing and its adjustments with fatigue.

    PubMed

    Cignetti, F; Schena, F; Mottet, D; Rouard, A

    2010-08-01

    Using dynamical modeling tools, the aim of the study was to establish a minimal model reproducing leg movements in cross-country skiing, and to evaluate the eventual adjustments of this model with fatigue. The participants (N=8) skied on a treadmill at 90% of their maximal oxygen consumption, up to exhaustion, using the diagonal stride technique. Qualitative analysis of leg kinematics portrayed in phase planes, Hooke planes, and velocity profiles suggested the inclusion in the model of a linear stiffness and an asymmetric van der Pol-type nonlinear damping. Quantitative analysis revealed that this model reproduced the observed kinematics patterns of the leg with adequacy, accounting for 87% of the variance. A rising influence of the stiffness term and a dropping influence of the damping terms were also evidenced with fatigue. The meaning of these changes was discussed in the framework of motor control.
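
    The model class described here (a linear stiffness term plus a van der Pol-type nonlinear damping) is easy to reproduce numerically. The sketch below integrates the symmetric van der Pol form rather than the asymmetric variant fitted by the authors, and the coefficient values are arbitrary illustrations, not the values estimated from the skiing data.

    ```python
    # Limit-cycle oscillator: x'' + c*(x**2 - 1)*x' + k*x = 0
    # (van der Pol damping + linear stiffness), integrated with SciPy.
    import numpy as np
    from scipy.integrate import solve_ivp

    def leg_oscillator(t, state, k=1.0, c=0.5):
        x, v = state
        return [v, -c * (x**2 - 1.0) * v - k * x]

    sol = solve_ivp(leg_oscillator, (0.0, 50.0), [0.1, 0.0], max_step=0.01)
    x, v = sol.y
    # After the transient, the trajectory settles onto a limit cycle (amplitude ~2)
    print("approximate limit-cycle amplitude:", x[len(x) // 2:].max())
    ```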

  3. Modelling and control of a microgrid including photovoltaic and wind generation

    NASA Astrophysics Data System (ADS)

    Hussain, Mohammed Touseef

    Extensive increase of distributed generation (DG) penetration and the existence of multiple DG units at distribution level have introduced the notion of micro-grid. This thesis develops a detailed non-linear and small-signal dynamic model of a microgrid that includes PV, wind and conventional small scale generation along with their power electronics interfaces and the filters. The models developed evaluate the amount of generation mix from various DGs for satisfactory steady state operation of the microgrid. In order to understand the interaction of the DGs on microgrid system initially two simpler configurations were considered. The first one consists of microalternator, PV and their electronics, and the second system consists of microalternator and wind system each connected to the power system grid. Nonlinear and linear state space model of each microgrid are developed. Small signal analysis showed that the large participation of PV/wind can drive the microgrid to the brink of unstable region without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of generation mix of a composite microgrid consisting of wind, PV and conventional generation was investigated next. The findings of the smaller systems were verified through nonlinear and small signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance the microgrid operation. The potential of various control inputs to provide additional damping to the system has been evaluated through decomposition techniques. The signals identified to have damping contents were employed to design the supervisory control system. The controller gains were tuned through an optimal pole placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and PV inverter phase angle were the best inputs for enhanced stability boundaries.
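
    The small-signal analysis referred to in this record amounts to linearizing the system as dx/dt = A x about an operating point and inspecting the eigenvalues of the state matrix. The generic sketch below uses a made-up 3×3 matrix, not the thesis' microgrid model, purely to show the stability check.

    ```python
    # Small-signal stability check: all eigenvalues of A must have negative real parts.
    import numpy as np

    A = np.array([[-0.5,  2.0,  0.0],
                  [-2.0, -0.1,  1.0],
                  [ 0.0, -1.0, -0.05]])   # illustrative linearized state matrix

    eigvals = np.linalg.eigvals(A)
    for lam in eigvals:
        damping = -lam.real / abs(lam)     # damping ratio of the associated mode
        print(f"eigenvalue {lam:.3f}  damping ratio {damping:.3f}")

    print("small-signal stable:", bool(np.all(eigvals.real < 0)))
    ```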

  4. Three-layer model for the surface second-harmonic generation yield including multiple reflections

    NASA Astrophysics Data System (ADS)

    Anderson, Sean M.; Mendoza, Bernardo S.

    2016-09-01

    We present the three-layer model to calculate the surface second-harmonic generation (SSHG) yield. This model considers that the surface is represented by three regions or layers. The first layer is the vacuum region with a dielectric function ɛv(ω ) =1 from where the fundamental electric field impinges on the material. The second layer is a thin layer (ℓ ) of thickness d characterized by a dielectric function ɛℓ(ω ) , and it is in this layer where the SSHG takes place. The third layer is the bulk region denoted by b and characterized by ɛb(ω ) . Both the vacuum and bulk layers are semi-infinite. The model includes the multiple reflections of both the fundamental and the second-harmonic (SH) fields that take place at the thin layer ℓ . We obtain explicit expressions for the SSHG yield for the commonly used s and p polarizations of the incoming 1 ω and outgoing 2 ω electric fields, where no assumptions for the symmetry of the surface are made. These symmetry assumptions ultimately determine which components of the surface nonlinear second-order susceptibility tensor χ (-2 ω ;ω ,ω ) are different from zero, and thus contribute to the SSHG yield. Then, we particularize the results for the most commonly investigated surfaces, the (001), (110), and (111) crystallographic faces, taking their symmetries into account. We use the three-layer model and compare it against the experimental results of a Si(111)(1 ×1 ):H surface, as a test case, and use it to predict the SSHG yield of a Si(001)(2 ×1 ) surface.

  5. A model of force balance in Jupiter's magnetodisc including hot plasma pressure anisotropy

    NASA Astrophysics Data System (ADS)

    Nichols, J. D.; Achilleos, N.; Cowley, S. W. H.

    2015-12-01

    We present an iterative vector potential model of force balance in Jupiter's magnetodisc that includes the effects of hot plasma pressure anisotropy. The fiducial model produces results that are consistent with Galileo magnetic field and plasma data over the whole radial range of the model. The hot plasma pressure gradient and centrifugal forces dominate in the regions inward of ˜20 RJ and outward of ˜50 RJ, respectively, while for realistic values of the pressure anisotropy, the anisotropy current is either the dominant component or at least comparable with the hot plasma pressure gradient current in the region in between. With the inclusion of hot plasma pressure anisotropy, the ˜1.2° and ˜2.7° shifts in the latitudes of the main oval and Ganymede footprint, respectively, associated with variations over the observed range of the hot plasma parameter Kh, which is the product of hot pressure and unit flux tube volume, are comparable to the shifts observed in auroral images. However, the middle magnetosphere is susceptible to the firehose instability, with peak equatorial values of βh∥e - βh⊥e ≃ 1-2 for Kh = 2.0-2.5 × 10⁷ Pa m T⁻¹. For larger values of Kh, βh∥e - βh⊥e exceeds 2 near ˜25 RJ and the model does not converge. This suggests that small-scale plasmoid release or "drizzle" of iogenic plasma may often occur in the middle magnetosphere, thus forming a significant mode of plasma mass loss, alongside plasmoids, at Jupiter.
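
    The firehose threshold quoted here is straightforward to evaluate numerically. The snippet below checks the condition β∥ - β⊥ > 2 for illustrative (not observed) field and pressure values.

    ```python
    # Quick check of the firehose criterion beta_parallel - beta_perpendicular > 2.
    mu0 = 4e-7 * 3.141592653589793      # vacuum permeability (H/m)

    def beta(pressure_pa, b_tesla):
        """Plasma beta = thermal pressure / magnetic pressure."""
        return pressure_pa / (b_tesla**2 / (2.0 * mu0))

    B = 5e-9                             # ~5 nT field, illustrative magnetodisc value
    p_par, p_perp = 2.0e-11, 1.2e-11     # hypothetical hot-plasma pressures (Pa)

    beta_par, beta_perp = beta(p_par, B), beta(p_perp, B)
    print(f"beta_par - beta_perp = {beta_par - beta_perp:.2f}")
    print("firehose unstable:", beta_par - beta_perp > 2.0)
    ```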

  6. A surplus production model including environmental effects: Application to the Senegalese white shrimp stocks

    NASA Astrophysics Data System (ADS)

    Thiaw, Modou; Gascuel, Didier; Jouffre, Didier; Thiaw, Omar Thiom

    2009-12-01

    In Senegal, two stocks of white shrimp (Penaeus notialis) are intensively exploited, one in the north and another in the south. We used surplus production models including environmental effects to analyse their changes in abundance over the past 10 years and to estimate their Maximum Sustainable Yield (MSY) and the related fishing effort (EMSY). First, yearly abundance indices were estimated from commercial statistics using GLM techniques. Then, two environmental indices were alternatively tested in the model: the coastal upwelling intensity from wind speeds provided by the SeaWiFS database and the primary production derived from satellite infrared images of chlorophyll a. Models were fitted, with or without the environmental effect, to the 1996-2005 time series. They express stock abundance and catches as functions of the fishing effort and the environmental index (when considered). For the northern stock, fishing effort and abundance fluctuate over the period without any clear trends. The model based on the upwelling index explains 64.9% of the year-to-year variability. It shows that the stock was slightly overexploited in 2002-2003 and is now close to full exploitation. Stock abundance strongly depends on environmental conditions; consequently, the MSY estimate varies from 300 to 900 tons according to the upwelling intensity. For the southern stock, fishing effort has strongly increased over the past 10 years, while abundance has been reduced 4-fold. The environment has a significant effect on abundance but only explains a small part of the year-to-year variability. The best fit is obtained using the primary production index (R² = 0.75), and the stock is now significantly overfished regardless of environmental conditions. MSY varies from 1200 to 1800 tons according to environmental conditions. Finally, in northern Senegal, the upwelling is highly variable from year to year and constitutes the major factor determining productivity. In the south, hydrodynamic
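
    A minimal sketch of a surplus production model with an environmental effect is given below: a Schaefer-type biomass dynamic in which a yearly environmental index scales the carrying capacity. Parameter values are invented for demonstration and are not the fitted values for the Senegalese shrimp stocks.

    ```python
    # Schaefer surplus production with a multiplicative environmental modifier.
    import numpy as np

    def surplus_production(effort, env_index, r=0.8, K0=5000.0, q=2e-4):
        """Project biomass and catch; env_index scales carrying capacity each year."""
        years = len(effort)
        B = np.empty(years + 1)
        catch = np.empty(years)
        B[0] = K0
        for t in range(years):
            K = K0 * env_index[t]                    # environment modulates productivity
            catch[t] = q * effort[t] * B[t]          # harvest proportional to effort
            B[t + 1] = max(B[t] + r * B[t] * (1 - B[t] / K) - catch[t], 1e-6)
        return B, catch

    effort = np.full(10, 1500.0)                     # nominal fishing effort (trips/yr)
    env = 1.0 + 0.2 * np.sin(np.linspace(0, 3, 10))  # hypothetical upwelling index
    B, catch = surplus_production(effort, env)
    msy = 0.8 * 5000.0 / 4.0                         # Schaefer MSY = r*K/4 at mean environment
    print(f"final biomass {B[-1]:.0f} t, mean catch {catch.mean():.0f} t, MSY ~ {msy:.0f} t")
    ```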

  7. Environmental assessment of biofuel chains based on ecosystem modelling, including land-use change effects

    NASA Astrophysics Data System (ADS)

    Gabrielle, B.; Gagnaire, N.; Massad, R.; Prieur, V.; Python, Y.

    2012-04-01

    The potential greenhouse gas (GHG) savings resulting from the displacement of fossil energy sources by bioenergy mostly hinge on the uncertain magnitude of nitrous oxide (N2O) emissions from arable soils occurring during feedstock production. These emissions are broadly related to fertilizer nitrogen input rates, but largely controlled by soil and climate factors, which makes their estimation highly uncertain. Here, we set out to improve estimates of N2O emissions from bioenergy feedstocks by using ecosystem models and measurements and modeling of atmospheric N2O in the greater Paris (France) area. Ground fluxes were measured in two locations to assess the effect of soil type and management, crop type (including lignocellulosics such as triticale, switchgrass and miscanthus), and climate on N2O emission rates and dynamics. High-resolution maps of N2O emissions were generated over the Ile-de-France region (around Paris) with two ecosystem models using geographical databases on soils, weather data, land-use and crop management. The models were tested against ground flux measurements and the emission maps were fed into the atmospheric chemistry-transport model CHIMERE. The maps were tested by comparing the CHIMERE simulations with time series of N2O concentrations measured at various heights above the ground in two locations in 2007. The emissions of N2O, as integrated over the region, were used in a life-cycle assessment of representative biofuel pathways: bioethanol from wheat and sugar-beet (1st generation), and miscanthus (2nd generation chain); bio-diesel from oilseed rape. Effects related to direct and indirect land-use changes (in particular on soil carbon stocks) were also included in the assessment based on various land-use scenarios and literature references. The potential deployment of miscanthus was simulated by assuming it would be grown on the current sugar-beet growing area in Ile-de-France, or by converting land currently under permanent fallow

  8. Parameter Estimation of Binary Neutron Stars using an Effective One Body Model including Tidal Interaction

    NASA Astrophysics Data System (ADS)

    Rizzo, Monica; O'Shaughnessy, Richard; Bernuzzi, Sebastiano; Lackey, Benjamin

    2016-03-01

    Ground gravitational wave detectors, built to detect perturbations in spacetime, can pick up signals produced by inspiraling binary neutron stars, the remnants of the core collapse of massive stars. A new EOB model (Bernuzzi et al. 2015) simulates the inspiral and merger of binary neutron star systems, including how they are deformed due to tides. We used a Bayesian parameter estimation algorithm to infer how well a plausible gravitational wave detection would allow us to constrain this tidal deformability. We then compared our results to prior investigations (Wade et al. 2014) which employed a post-Newtonian-based approximation for the inspiral. I would like to thank the RIT Department of Physics and Astronomy, and the RIT Center for Computational Relativity and Gravitation.

  9. Adjustable stiffness, external fixator for the rat femur osteotomy and segmental bone defect models.

    PubMed

    Glatt, Vaida; Matthys, Romano

    2014-01-01

    The mechanical environment around the healing of broken bone is very important as it determines the way the fracture will heal. Over the past decade there has been great clinical interest in improving bone healing by altering the mechanical environment through the fixation stability around the lesion. One constraint of preclinical animal research in this area is the lack of experimental control over the local mechanical environment within a large segmental defect as well as osteotomies as they heal. In this paper we report on the design and use of an external fixator to study the healing of large segmental bone defects or osteotomies. This device not only allows for controlled axial stiffness on the bone lesion as it heals, but it also enables the change of stiffness during the healing process in vivo. The conducted experiments have shown that the fixators were able to maintain a 5 mm femoral defect gap in rats in vivo during unrestricted cage activity for at least 8 weeks. Likewise, we observed no distortion or infections, including pin infections during the entire healing period. These results demonstrate that our newly developed external fixator was able to achieve reproducible and standardized stabilization, and the alteration of the mechanical environment of in vivo rat large bone defects and various size osteotomies. This confirms that the external fixation device is well suited for preclinical research investigations using a rat model in the field of bone regeneration and repair. PMID:25350129

  10. Three-dimensional finite difference viscoelastic wave modelling including surface topography

    NASA Astrophysics Data System (ADS)

    Hestholm, Stig

    1999-12-01

    I have undertaken 3-D finite difference (FD) modelling of seismic scattering from free-surface topography. Exact free-surface boundary conditions for arbitrary 3-D topographies have been derived for the particle velocities. The boundary conditions are combined with a velocity-stress formulation of the full viscoelastic wave equations. A curved grid represents the physical medium and its upper boundary represents the free-surface topography. The wave equations are numerically discretized by an eighth-order FD method on a staggered grid in space, and a leap-frog technique and the Crank-Nicolson method in time. I simulate scattering from teleseismic P waves by using plane incident wave fronts and real topography from a 60 x 60 km area that includes the NORESS array of seismic receiver stations in southeastern Norway. Synthetic snapshots and seismograms of the wavefield show clear conversion from P to Rg (short-period fundamental-mode Rayleigh) waves in areas of rough topography, which is consistent with numerous observations. By parallelization on fast supercomputers, it is possible to model higher frequencies and/or larger areas than before.
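
    To illustrate the kind of staggered-grid velocity-stress update the abstract refers to, the sketch below implements a 1-D, second-order elastic (not viscoelastic) scheme with a flat, rigid boundary; the actual model is 3-D, eighth order, viscoelastic and topography-conforming, so this is only a conceptual reduction.

    ```python
    # Minimal 1-D velocity-stress staggered-grid finite-difference scheme.
    import numpy as np

    nx, dx, dt, nt = 400, 10.0, 1e-3, 800
    rho, vp = 2500.0, 3000.0            # density (kg/m^3) and P velocity (m/s); CFL = 0.3
    M = rho * vp**2                     # elastic modulus
    v = np.zeros(nx)                    # particle velocity at integer nodes
    s = np.zeros(nx - 1)                # stress at half nodes

    src = nx // 2
    for it in range(nt):
        # leap-frog updates on the staggered grid
        v[1:-1] += dt / rho * (s[1:] - s[:-1]) / dx
        v[src] += np.exp(-((it * dt - 0.1) / 0.02) ** 2)   # Gaussian source wavelet
        s += dt * M * (v[1:] - v[:-1]) / dx
        # v[0] and v[-1] stay zero: rigid boundaries (no absorbing layers here)

    print("peak |v| after propagation:", np.abs(v).max())
    ```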

  11. 3-D FEM Modeling of fiber/matrix interface debonding in UD composites including surface effects

    NASA Astrophysics Data System (ADS)

    Pupurs, A.; Varna, J.

    2012-02-01

    Fiber/matrix interface debond growth is one of the main mechanisms of damage evolution in unidirectional (UD) polymer composites. Because for polymer composites the fiber strain to failure is smaller than that of the matrix, multiple fiber breaks occur at random positions when high mechanical stress is applied to the composite. The energy released by each fiber break is usually larger than that necessary to create the break; therefore, partial debonding of the fiber/matrix interface is typically observed. Thus the stiffness reduction of a UD composite has contributions both from the fiber breaks and from the interface debonds. The aim of this paper is to analyze the debond growth in carbon fiber/epoxy and glass fiber/epoxy UD composites using fracture mechanics principles by calculation of the energy release rate GII. A 3-D FEM model is developed for calculation of the energy release rate for fiber/matrix interface debonds at different locations in the composite, including the composite surface region where the stress state differs from the one in the bulk composite. In the model an individual partially debonded fiber is surrounded by a matrix region and embedded in a homogenized composite.

  12. The Effects of Including Observed Means or Latent Means as Covariates in Multilevel Models for Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Aydin, Burak; Leite, Walter L.; Algina, James

    2016-01-01

    We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…

  13. Including oxygen enhancement ratio in ion beam treatment planning: model implementation and experimental verification.

    PubMed

    Scifoni, E; Tinganelli, W; Weyrather, W K; Durante, M; Maier, A; Krämer, M

    2013-06-01

    We present a method for adapting biologically optimized treatment planning for particle beams to a spatially inhomogeneous tumor sensitivity due to hypoxia, detected e.g. by PET functional imaging. The TRiP98 code, an established treatment planning system for particles, has been extended to include the oxygen enhancement ratio (OER) explicitly in the biological effect calculation, providing the first set-up of a dedicated ion beam treatment planning approach directed at hypoxic tumors, TRiP-OER, reported here together with experimental tests. A simple semi-empirical model for calculating the OER as a function of oxygen concentration and dose-averaged linear energy transfer, generating input tables for the program, is introduced. The code is then extended to import such tables, from the present or from alternative models, and to perform forward and inverse planning, i.e., predicting the survival response of differently oxygenated areas as well as optimizing the dose required to restore a uniform survival effect in the whole irradiated target. The multiple-field optimization results show how the program selects the best beam components for treating the hypoxic regions. The calculations performed for different ions provide indications of the possible clinical advantages of a multi-ion treatment. Finally, the predictivity of the code is tested through dedicated cell culture experiments on extended-target irradiation using specially designed hypoxic chambers, providing qualitative agreement, despite some limits in the full survival calculations arising from the RBE assessment. A comparison of the predictions obtained using different model tables is also reported. PMID:23681217
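
    As a hedged illustration of how an OER can enter a survival calculation, the sketch below treats hypoxia as a pure dose-modifying factor in a linear-quadratic model. This is a generic toy, not the TRiP-OER formulation; the OER-versus-pO2 curve and the alpha/beta values are invented for demonstration.

    ```python
    # LQ survival with the OER applied as a dose-modifying factor.
    import numpy as np

    def oer(p_o2_mmHg, oer_max=3.0, k=3.0):
        """Simple saturable curve: ~oer_max under anoxia, approaching 1 when well oxygenated."""
        return (oer_max * k + p_o2_mmHg) / (k + p_o2_mmHg)

    def survival(dose_gy, p_o2_mmHg, alpha=0.15, beta=0.05):
        d_eff = dose_gy / oer(p_o2_mmHg)      # hypoxia reduces the effective dose
        return np.exp(-(alpha * d_eff + beta * d_eff**2))

    for p in (0.5, 5.0, 40.0):
        print(f"pO2 = {p:4.1f} mmHg: OER = {oer(p):.2f}, survival at 6 Gy = {survival(6.0, p):.3f}")
    ```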

  14. INTERIOR MODELS OF SATURN: INCLUDING THE UNCERTAINTIES IN SHAPE AND ROTATION

    SciTech Connect

    Helled, Ravit; Guillot, Tristan

    2013-04-20

    The accurate determination of Saturn's gravitational coefficients by Cassini could provide tighter constraints on Saturn's internal structure. Also, occultation measurements provide important information on the planetary shape which is often not considered in structure models. In this paper we explore how wind velocities and internal rotation affect the planetary shape and the constraints on Saturn's interior. We show that within the geodetic approach the derived physical shape is insensitive to the assumed deep rotation. Saturn's re-derived equatorial and polar radii at 100 mbar are found to be 54,445 {+-} 10 km and 60,365 {+-} 10 km, respectively. To determine Saturn's interior, we use one-dimensional three-layer hydrostatic structure models and present two approaches to include the constraints on the shape. These approaches, however, result in only small differences in Saturn's derived composition. The uncertainty in Saturn's rotation period is more significant: with Voyager's 10{sup h}39{sup m} period, the derived mass of heavy elements in the envelope is 0-7 M{sub Circled-Plus }. With a rotation period of 10{sup h}32{sup m}, this value becomes <4 M{sub Circled-Plus }, below the minimum mass inferred from spectroscopic measurements. Saturn's core mass is found to depend strongly on the pressure at which helium phase separation occurs, and is estimated to be 5-20 M{sub Circled-Plus }. Lower core masses are possible if the separation occurs deeper than 4 Mbar. We suggest that the analysis of Cassini's radio occultation measurements is crucial to test shape models and could lead to constraints on Saturn's rotation profile and departures from hydrostatic equilibrium.

  15. Power Calculations for General Linear Multivariate Models Including Repeated Measures Applications.

    PubMed

    Muller, Keith E; Lavange, Lisa M; Ramey, Sharon Landesman; Ramey, Craig T

    1992-12-01

    Recently developed methods for power analysis expand the options available for study design. We demonstrate how easily the methods can be applied by (1) reviewing their formulation and (2) describing their application in the preparation of a particular grant proposal. The focus is a complex but ubiquitous setting: repeated measures in a longitudinal study. Describing the development of the research proposal allows demonstrating the steps needed to conduct an effective power analysis. Discussion of the example also highlights issues that typically must be considered in designing a study. First, we discuss the motivation for using detailed power calculations, focusing on multivariate methods in particular. Second, we survey available methods for the general linear multivariate model (GLMM) with Gaussian errors and recommend those based on F approximations. The treatment includes coverage of the multivariate and univariate approaches to repeated measures, MANOVA, ANOVA, multivariate regression, and univariate regression. Third, we describe the design of the power analysis for the example, a longitudinal study of a child's intellectual performance as a function of mother's estimated verbal intelligence. Fourth, we present the results of the power calculations. Fifth, we evaluate the tradeoffs in using reduced designs and tests to simplify power calculations. Finally, we discuss the benefits and costs of power analysis in the practice of statistics. We make three recommendations: (1) align the design and hypothesis of the power analysis with the planned data analysis, as best as practical; (2) embed any power analysis in a defensible sensitivity analysis; and (3) have the extent of the power analysis reflect the ethical, scientific, and monetary costs. We conclude that power analysis catalyzes the interaction of statisticians and subject matter specialists. Using the recent advances for power analysis in linear models can further invigorate the interaction. PMID:24790282
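
    A small worked example of the F-approximation style of power calculation recommended here is sketched below: power for a linear-model test computed from the noncentral F distribution. The degrees of freedom, sample size, and effect size are made-up values, and the noncentrality convention (lambda ≈ f² · N) is only one common choice.

    ```python
    # Power from the noncentral F distribution: power = P(F' > F_crit).
    from scipy import stats

    def power_f_test(ndf, ddf, noncentrality, alpha=0.05):
        f_crit = stats.f.ppf(1.0 - alpha, ndf, ddf)
        return stats.ncf.sf(f_crit, ndf, ddf, noncentrality)

    # Hypothetical contrast with 3 numerator df, N = 40 subjects, effect size f^2 = 0.15
    n, ndf = 40, 3
    ddf = n - ndf - 1
    lam = 0.15 * n
    print(f"power ~ {power_f_test(ndf, ddf, lam):.2f}")
    ```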

  16. A Thermal Evolution Model of the Earth Including the Biosphere, Continental Growth and Mantle Hydration

    NASA Astrophysics Data System (ADS)

    Höning, D.; Spohn, T.

    2014-12-01

    By harvesting solar energy and converting it to chemical energy, photosynthetic life plays an important role in the energy budget of Earth [2]. This leads to alterations of chemical reservoirs eventually affecting the Earth's interior [4]. It has further been speculated [3] that the formation of continents may be a consequence of the evolution of life. A steady state model [1] suggests that the Earth without its biosphere would evolve to a steady state with a smaller continental coverage and a drier mantle than is observed today. We present a model including (i) parameterized thermal evolution, (ii) continental growth and destruction, and (iii) mantle water regassing and outgassing. The biosphere enhances the production rate of sediments which eventually are subducted. These sediments are assumed to (i) carry water to depth bound in stable mineral phases and (ii) have the potential to suppress shallow dewatering of the underlying sediments and crust due to their low permeability. We run a Monte Carlo simulation for various initial conditions and treat as successful all those parameter combinations that result in the fraction of continental crust coverage observed for present-day Earth. Finally, we simulate the evolution of an abiotic Earth using the same set of parameters but a reduced rate of continental weathering and erosion. Our results suggest that the origin and evolution of life could have stabilized the large continental surface area of the Earth and its wet mantle, leading to the relatively low mantle viscosity we observe at present. Without photosynthetic life on our planet, the Earth would be geodynamically less active due to a drier mantle, and would have a smaller fraction of continental coverage than observed today. References: [1] Höning, D., Hansen-Goos, H., Airo, A., Spohn, T., 2014. Biotic vs. abiotic Earth: A model for mantle hydration and continental coverage. Planetary and Space Science 98, 5-13. [2] Kleidon, A., 2010. Life, hierarchy, and the

  17. Conversations with God: Prayer and Bargaining in Adjustment to Disability

    ERIC Educational Resources Information Center

    Rodriguez, Valerie J.; Glover-Graf, Noreen M.; Blanco, E. Lisette

    2013-01-01

    The role of religiosity and spirituality in the process of adjustment to disability is of increasing interest to rehabilitation professionals. Beginning with the Kubler-Ross models of grief and adjustment to disability and terminal illness, a number of stage models have included spiritual and religious interactions as a part of the adjustment…

  18. Numerical modelling of seawater intrusion in Shenzhen (China) using a 3D density-dependent model including tidal effects

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Yang, Qingchun; Martín, Jordi D.; Juncosa, Ricardo

    2013-04-01

    During the 1990s, groundwater overexploitation resulted in seawater intrusion in the coastal aquifer of Shenzhen city, China. Although water supply facilities have been improved and have alleviated seawater intrusion in recent years, groundwater overexploitation is still of great concern in some local areas. In this work we present a three-dimensional density-dependent numerical model developed with the FEFLOW code, which is aimed at simulating the extent of seawater intrusion while including tidal effects and different groundwater pumping scenarios. Model calibration, using water heads and reported chloride concentrations, has been performed based on the data from 14 boreholes, which were monitored from May 2008 to December 2009. A fairly good fit between the observed and computed values was obtained by a manual trial-and-error method. Model predictions have been carried out 3 years forward with the calibrated model, taking into account high, medium and low tide levels and different groundwater exploitation schemes. The model results show that tide-induced seawater intrusion significantly affects the groundwater levels and concentrations near the estuary of the Dasha river, which implies that an important hydraulic connection exists between this river and the groundwater, even considering that some anti-seepage measures were taken in the river bed. Two pumping scenarios were considered in the calibrated model in order to predict the future changes in the water levels and chloride concentration. The numerical results reveal a decreasing tendency of seawater intrusion if groundwater exploitation does not exceed an upper bound of about 1.32 × 10⁴ m³/d. The model results also provide insights for controlling seawater intrusion in such coastal aquifer systems.

  19. First Year Student Adjustment, Success, and Retention: Structural Models of Student Persistence Using Electronic Portfolios

    ERIC Educational Resources Information Center

    Sandler, Martin E.

    2010-01-01

    This study explores the deployment of electronic portfolios to a university-wide cohort of freshman undergraduates that included a subgroup of at-risk and lower academically prepared learners. Five evaluative dimensions based on persistence and engagement theory were included in the development of four assessment rubrics exploring goal clarity,…

  20. The Analysis of Repeated Measurements with Mixed-Model Adjusted "F" Tests

    ERIC Educational Resources Information Center

    Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D.

    2004-01-01

    One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…

  1. Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2011-01-01

    This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…

  2. Including Overweight or Obese Students in Physical Education: A Social Ecological Constraint Model

    ERIC Educational Resources Information Center

    Li, Weidong; Rukavina, Paul

    2012-01-01

    In this review, we propose a social ecological constraint model to study inclusion of overweight or obese students in physical education by integrating key concepts and assumptions from ecological constraint theory in motor development and social ecological models in health promotion and behavior. The social ecological constraint model proposes…

  3. Extending Galactic Habitable Zone Modelling to Include the Emergence of Intelligent Life

    NASA Astrophysics Data System (ADS)

    Morrison, I. S.; Gowanlock, M. G.

    2014-03-01

    Previous studies of the Galactic Habitable Zone (GHZ) have been concerned with identifying those regions of the Galaxy that may favour the emergence of "complex life" - typically defined to be land-based life. A planet is deemed "habitable" if it meets a set of assumed criteria for supporting the emergence of such complex life. The notion of the GHZ, and the premise that sufficient chemical evolution is required for planet formation, was quantified by Gonzalez et al. (2001). This work was later broadened to include dangers to the formation and habitability of terrestrial planets by Lineweaver et al. (2004) and then studied using a Monte Carlo simulation on the resolution of individual stars in the previous work of Gowanlock et al. (2011). The model developed in the latter work considers the stellar number density distribution and formation history of the Galaxy, planet formation mechanisms and the hazards to planetary biospheres as a result of supernova sterilization events that take place in the vicinity of the planets. Based on timescales taken from the origin and evolution of complex life on Earth, the model suggests large numbers of potentially habitable planets exist in our Galaxy, with the greatest concentration likely being towards the inner Galaxy. In this work we extend the assessment of habitability to consider the potential for life to further evolve on habitable planets to the point of intelligence - which we term the propensity for the emergence of intelligent life. We assume the propensity is strongly influenced by the time durations available for evolutionary processes to proceed undisturbed by the "resetting" effect of nearby supernovae. The model of Gowanlock et al. (2011) is used to produce a representative population of habitable planets by matching major observable properties of the Milky Way. Account is taken of the birth and death dates of each habitable planet and the timing of supernova events in each planet's vicinity. The times between

  4. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    NASA Astrophysics Data System (ADS)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling approach that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of the model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists act as the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. Waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, of 16.98 and 16.73 minutes. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters because, even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
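
    The analytical counterpart of the simulated M/M/c comparison is the Erlang-C formula for the probability of waiting and the mean time in queue. The sketch below evaluates it for c = 3, 4, 5 servers; the arrival and service rates are illustrative, not the hospital's measured values.

    ```python
    # Erlang-C quantities for an M/M/c queue.
    from math import factorial

    def erlang_c(arrival_rate, service_rate, c):
        """Return (probability of waiting, mean wait in queue) for an M/M/c queue."""
        a = arrival_rate / service_rate          # offered load (Erlangs)
        rho = a / c                              # server utilisation, must be < 1
        if rho >= 1:
            raise ValueError("unstable queue: utilisation >= 1")
        summation = sum(a**k / factorial(k) for k in range(c))
        p_wait = (a**c / factorial(c)) / ((1 - rho) * summation + a**c / factorial(c))
        wq = p_wait / (c * service_rate - arrival_rate)   # mean time in queue
        return p_wait, wq

    lam, mu = 40.0, 15.0                         # patients/hour and services/hour per pharmacist
    for servers in (3, 4, 5):
        p_wait, wq = erlang_c(lam, mu, servers)
        print(f"c={servers}: P(wait)={p_wait:.2f}, mean queue time={wq*60:.1f} min")
    ```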

  5. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    NASA Astrophysics Data System (ADS)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m⁻³. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ˜2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine (< 0.063 mm) ash (3-59%), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
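
    The aggregate-size parameters above are quoted in phi units, where diameter(mm) = 2^(-phi). The helper below reproduces the quoted conversion (2.3-2.7 phi ≈ 0.20-0.15 mm) and builds a discretised normal-in-phi (log-normal in diameter) size distribution; the example μagg/σagg values are placeholders, not recommended model inputs.

    ```python
    # Phi-to-millimetre conversion and a normal-in-phi aggregate size distribution.
    import numpy as np

    def phi_to_mm(phi):
        return 2.0 ** (-np.asarray(phi, dtype=float))

    print(phi_to_mm([2.3, 2.7]))              # -> approx. [0.203, 0.154] mm

    mu_agg, sigma_agg = 2.5, 0.5              # example aggregate median and spread in phi
    phi_bins = np.arange(0.5, 5.5, 0.5)
    weights = np.exp(-0.5 * ((phi_bins - mu_agg) / sigma_agg) ** 2)
    weights /= weights.sum()                  # normalised mass fraction per phi bin
    for p, w in zip(phi_bins, weights):
        print(f"phi = {p:.1f} ({2.0 ** -p:.3f} mm): mass fraction {w:.3f}")
    ```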

  6. Evaluation of an Impedance Model for Perforates Including the Effect of Bias Flow

    NASA Technical Reports Server (NTRS)

    Betts, J. F.; Follet, J. I.; Kelly, J. J.; Thomas, R. H.

    2000-01-01

    A new bias flow impedance model is developed for perforated plates from basic principles, using as little empiricism as possible. A quality experimental database was used to determine the predictive validity of the model. Results show that the model performs better for higher (15%) rather than lower (5%) percent open area (POA) samples. Based on the least squares ratio of numerical vs. experimental results, model predictions were on average within 20% and 30% for the higher and lower POA, respectively. It is hypothesized, based on the work of other investigators, that at lower POAs the higher fluid velocities in the perforate's orifices start forming unsteady vortices, which is not accounted for in our model. The numerical model, in general, also underpredicts the experiments. It is theorized that the actual acoustic C_D is lower than the measured raylometer C_D used in the model. Using a larger C_D makes the numerical model predict lower impedances. The frequency domain model derived in this paper shows very good agreement with another model derived using a time domain approach.

  7. Adjustment of carbon fluxes to light conditions regulates the daily turnover of starch in plants: a computational model.

    PubMed

    Pokhilko, Alexandra; Flis, Anna; Sulpice, Ronan; Stitt, Mark; Ebenhöh, Oliver

    2014-03-01

    In the light, photosynthesis provides carbon for metabolism and growth. In the dark, plant growth depends on carbon reserves that were accumulated during previous light periods. Many plants accumulate part of their newly-fixed carbon as starch in their leaves in the day and remobilise it to support metabolism and growth at night. The daily rhythms of starch accumulation and degradation are dynamically adjusted to the changing light conditions such that starch is almost but not totally exhausted at dawn. This requires the allocation of a larger proportion of the newly fixed carbon to starch under low carbon conditions, and the use of information about the carbon status at the end of the light period and the length of the night to pace the rate of starch degradation. This regulation occurs in a circadian clock-dependent manner, through unknown mechanisms. We use mathematical modelling to explore possible diurnal mechanisms regulating the starch level. Our model combines the main reactions of carbon fixation, starch and sucrose synthesis, starch degradation and consumption of carbon by sink tissues. To describe the dynamic adjustment of starch to daily conditions, we introduce diurnal regulators of carbon fluxes, which modulate the activities of the key steps of starch metabolism. The sensing of the diurnal conditions is mediated in our model by the timer α and the "dark sensor"β, which integrate daily information about the light conditions and time of the day through the circadian clock. Our data identify the β subunit of SnRK1 kinase as a good candidate for the role of the dark-accumulated component β of our model. The developed novel approach for understanding starch kinetics through diurnal metabolic and circadian sensors allowed us to explain starch time-courses in plants and predict the kinetics of the proposed diurnal regulators under various genetic and environmental perturbations.
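
    A deliberately simplified toy version of the diurnal starch behaviour described here is sketched below: a fixed fraction of newly fixed carbon is stored as starch in the light, and at dusk the degradation rate is set to (starch at dusk)/(length of night), so the reserve is almost exhausted at dawn. This is not the published model with its α timer and β dark sensor; all rates are arbitrary.

    ```python
    # Toy diurnal starch turnover with dusk-paced degradation.
    import numpy as np

    day_length, night_length, dt = 12.0, 12.0, 0.1   # hours
    fixation, to_starch = 3.0, 0.4                   # C fixation rate and stored fraction

    t_grid = np.arange(0.0, 48.0, dt)                # two day/night cycles
    starch = np.zeros_like(t_grid)
    deg_rate = 0.0
    for i, t in enumerate(t_grid[:-1]):
        hour = t % 24.0
        if hour < day_length:                        # light: accumulate starch
            d_starch = to_starch * fixation
        else:
            if abs(hour - day_length) < dt / 2:      # at dusk, pace degradation to dawn
                deg_rate = starch[i] / night_length
            d_starch = -deg_rate                     # night: degrade at the paced rate
        starch[i + 1] = max(starch[i] + d_starch * dt, 0.0)

    dusk, dawn = round(day_length / dt), round(24.0 / dt)
    print(f"starch at dusk ~{starch[dusk]:.1f}, at dawn ~{starch[dawn]:.2f} (arbitrary C units)")
    ```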

  8. Extension of an atmospheric dispersion model to include building wake effects

    SciTech Connect

    Weil, J.C.; Brower, R.P.; Corio, L.A.

    1999-07-01

    A modification to a dispersion model for the convective boundary layer (CBL) is proposed to deal with stack sources located on or near buildings and affected by the turbulent wake of the building. Wake effects are greatest within the near wake or cavity region close to the building. The approach is to combine an earlier wake model with the CBL model such that the appropriate concentration and dispersion limits are satisfied at short, intermediate, and large downwind distances.

  9. Threshold voltage model of junctionless cylindrical surrounding gate MOSFETs including fringing field effects

    NASA Astrophysics Data System (ADS)

    Gupta, Santosh Kumar

    2015-12-01

    2D Analytical model of the body center potential (BCP) in short channel junctionless Cylindrical Surrounding Gate (JLCSG) MOSFETs is developed using evanescent mode analysis (EMA). This model also incorporates the gate bias dependent inner and outer fringing capacitances due to the gate-source/drain fringing fields. The developed model provides results in good agreement with simulated results for variations of different physical parameters of JLCSG MOSFET viz. gate length, channel radius, doping concentration, and oxide thickness. Using the BCP, an analytical model for the threshold voltage has been derived and validated against results obtained from 3D device simulator.

  10. A New Finite-Conductivity Droplet Evaporation Model Including Liquid Turbulence Effect

    NASA Technical Reports Server (NTRS)

    Balasubramanyam, M. S.; Chen, C. P.; Trinh, H. P.

    2006-01-01

    A new approach to account for finite thermal conductivity and turbulence effects within atomizing droplets of an evaporating spray is presented in this paper. The model is an extension of the T-blob and T-TAB atomization/spray model of Trinh and Chen [9]. This finite conductivity model is based on the two-temperature film theory in which the turbulence characteristics of the droplet are used to estimate the effective thermal diffusivity for the liquid-side film thickness. Both one-way and two-way coupled calculations were performed to investigate the performance of this model against the published experimental data.

  11. Dietary reference intakes for zinc may require adjustment for phytate intake based upon model predictions.

    PubMed

    Hambidge, K Michael; Miller, Leland V; Westcott, Jamie E; Krebs, Nancy F

    2008-12-01

    The quantity of total dietary zinc (Zn) and phytate are the principal determinants of the quantity of absorbed Zn. Recent estimates of Dietary Reference Intakes (DRI) for Zn by the Institute of Medicine (IOM) were based on data from low-phytate or phytate-free diets. The objective of this project was to estimate the effects of increasing quantities of dietary phytate on these DRI. We used a trivariate model of the quantity of Zn absorbed as a function of dietary Zn and phytate with updated parameters to estimate the phytate effect on the Estimated Average Requirement (EAR) and Recommended Daily Allowance for Zn for both men and women. The EAR predicted from the model at 0 phytate was very close to the EAR of the IOM. The addition of 1000 mg phytate doubled the EAR and adding 2000 mg phytate tripled the EAR. The model also predicted that the EAR for men and women could not be attained with phytate:Zn molar ratios > 11:1 and 15:1, respectively. The phytate effect on upper limits (UL) was predicted by first estimating the quantity of absorbed Zn corresponding to the UL of 40 mg for phytate-free diets, which is 6.4 mg Zn/d. Extrapolation of the model suggested, for example, that with 900 mg/d phytate, 100 mg dietary Zn is required to attain 6.4 mg absorbed Zn/d. Experimental studies with higher Zn intakes are required to test these predictions.
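
    The phytate:zinc molar ratio referred to above is a simple calculation, reproduced in the helper below. The molar masses used (phytic acid ≈ 660 g/mol, zinc ≈ 65.4 g/mol) are standard values; the example intake figures are illustrative only and are not recommendations.

    ```python
    # Phytate:zinc molar ratio from dietary intakes in milligrams.
    def phytate_zinc_molar_ratio(phytate_mg, zinc_mg):
        return (phytate_mg / 660.0) / (zinc_mg / 65.4)

    for phytate, zinc in [(1000, 11), (2000, 11), (900, 100)]:
        r = phytate_zinc_molar_ratio(phytate, zinc)
        print(f"{phytate} mg phytate with {zinc} mg Zn -> molar ratio {r:.1f}:1")
    ```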

  12. Citizens' Perceptions of Flood Hazard Adjustments: An Application of the Protective Action Decision Model

    ERIC Educational Resources Information Center

    Terpstra, Teun; Lindell, Michael K.

    2013-01-01

    Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data ("N" = 1,115) showed that…

  13. Preserving Heterogeneity and Consistency in Hydrological Model Inversions by Adjusting Pedotransfer Functions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Numerical modeling is the dominant method for quantifying water flow and the transport of dissolved constituents in surface soils as well as the deeper vadose zone. While the fundamental laws that govern the mechanics of the flow processes in terms of Richards' and convection-dispersion equations a...

  14. Glacial isostatic adjustment associated with the Barents Sea ice sheet: A modelling inter-comparison

    NASA Astrophysics Data System (ADS)

    Auriac, A.; Whitehouse, P. L.; Bentley, M. J.; Patton, H.; Lloyd, J. M.; Hubbard, A.

    2016-09-01

    The 3D geometrical evolution of the Barents Sea Ice Sheet (BSIS), particularly during its late-glacial retreat phase, remains largely ambiguous due to the paucity of direct marine- and terrestrial-based evidence constraining its horizontal and vertical extent and chronology. One way of validating the numerous BSIS reconstructions previously proposed is to collate and apply them under a wide range of Earth models and to compare prognostic (isostatic) output through time with known relative sea-level (RSL) data. Here we compare six contrasting BSIS load scenarios via a spherical Earth system model and derive a best-fit χ² parameter using RSL data from the four main terrestrial regions within the domain: Svalbard, Franz Josef Land, Novaya Zemlya and northern Norway. Poor χ² values allow two load scenarios to be dismissed, leaving four that agree well with RSL observations. The remaining four scenarios optimally fit the RSL data when combined with Earth models that have an upper mantle viscosity of 0.2-2 × 10²¹ Pa s, while there is less sensitivity to the lithosphere thickness (ranging from 71 to 120 km) and lower mantle viscosity (spanning 1-50 × 10²¹ Pa s). GPS observations are also compared with predictions of present-day uplift across the Barents Sea. Key locations where relative sea-level and GPS data would prove critical in constraining future ice-sheet modelling efforts are also identified.
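
    A generic version of the goodness-of-fit measure used here is an uncertainty-weighted chi-squared misfit between observed relative sea-level index points and modelled values. The numbers below are placeholders, not the Barents Sea data.

    ```python
    # Chi-squared misfit between observed and modelled RSL values.
    import numpy as np

    rsl_observed = np.array([42.0, 35.5, 20.1, 8.3])   # m above present sea level
    rsl_error    = np.array([ 3.0,  2.5,  2.0, 1.5])   # 1-sigma uncertainties (m)
    rsl_modelled = np.array([45.1, 33.0, 21.5, 6.9])   # GIA model predictions (m)

    chi2 = np.sum(((rsl_observed - rsl_modelled) / rsl_error) ** 2)
    chi2_reduced = chi2 / len(rsl_observed)
    print(f"chi-squared = {chi2:.2f} (reduced: {chi2_reduced:.2f})")
    ```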

  15. A Gender-Moderated Model of Family Relationships and Adolescent Adjustment

    ERIC Educational Resources Information Center

    Elizur, Yoel; Spivak, Amos; Ofran, Shlomit; Jacobs, Shira

    2007-01-01

    The objective of this study was to explain why adolescent girls with conduct problems (CP) are more at risk than boys to develop emotional distress (ED) in a sample composed of Israeli-born and immigrant youth from Ethiopia and the former Soviet Union (n = 305, ages 14-18). We tested a structural equation model and found a very good fit to the…

  16. A Unified Model Exploring Parenting Practices as Mediators of Marital Conflict and Children's Adjustment

    ERIC Educational Resources Information Center

    Coln, Kristen L.; Jordan, Sara S.; Mercer, Sterett H.

    2013-01-01

    We examined positive and negative parenting practices and psychological control as mediators of the relations between constructive and destructive marital conflict and children's internalizing and externalizing problems in a unified model. Married mothers of 121 children between the ages of 6 and 12 completed questionnaires measuring marital…

  17. A Conceptual Model of Training Transfer that Includes the Physical Environment

    ERIC Educational Resources Information Center

    Hillsman, Terron L.; Kupritz, Virginia W.

    2007-01-01

    The study presents the physical environment as an emerging factor impacting training transfer and proposes to position this variable in the Baldwin and Ford (1988) model of the training transfer process. The amended model positions workplace design, one element of the physical environment, as a part of organizational context in the work…

  18. Thermal modeling of phase change solidification in thermal control devices including natural convection effects

    NASA Technical Reports Server (NTRS)

    Ukanwa, A. O.; Stermole, F. J.; Golden, J. O.

    1972-01-01

    Natural convection effects in phase change thermal control devices were studied. A mathematical model was developed to evaluate natural convection effects in a phase change test cell undergoing solidification. Although natural convection effects are minimized in flight spacecraft, all phase change devices are ground tested. The mathematical approach to the problem was to first develop a transient two-dimensional conduction heat transfer model for the solidification of a normal paraffin of finite geometry. Next, a transient two-dimensional model was developed for the solidification of the same paraffin by a combined conduction-natural-convection heat transfer model. Throughout the study, n-hexadecane (n-C16H34) was used as the phase-change material in both the theoretical and the experimental work. The models were based on the transient two-dimensional finite difference solutions of the energy, continuity, and momentum equations.

  19. Modeling Shock Propagation to the Outer Heliosphere Including Heat Flux and Pickup Protons

    NASA Astrophysics Data System (ADS)

    Detman, T. R.; Intriligator, D. S.; Dryer, M.; Sun, W.; Deehr, C. S.; Intriligator, J.

    2012-12-01

    We compare different models of solar wind heat flux in the distant heliosphere in the context of simulating the propagation of the strong Halloween 2003 solar events to ACE, Ulysses, Cassini, and Voyager 2. We will modify our time-dependent, 3D MHD Hybrid Heliospheric Modeling System with Pickup Ions, HHMS-PI (Detman et al., JGR, 2011; Intriligator et al., JGR, 2012) by installing an approximation of the Hollweg Collisionless Electron Heat Flux model (Hollweg, JGR, 1976). We evaluate each simulation against observations at ACE, Ulysses, and Voyager 2. We will compare results from HHMS-PI with heat flux against our previous results. We then plan to make similar comparisons with other heat flux models, e.g. the model based on field magnitude by Scime et al. (JGR, 1995).

  20. A Sheath Model for Negative Ion Sources Including the Formation of a Virtual Cathode

    SciTech Connect

    McAdams, R.; King, D. B.; Surrey, E.

    2011-09-26

    A one dimensional model of the sheath between the plasma and the wall in a negative ion source has been developed. The plasma consists of positive ions, electrons and negative ions. The model takes into account the emission of negative ions from the wall into the sheath and thus represents the conditions in a caesiated ion source with surface production of negative ions. At high current densities of the emitted negative ions, the sheath is unable to support the transport of all the negative ions to the plasma and a virtual cathode is formed. This model takes this into account and allows the calculation of the transported negative ions across the sheath with the virtual cathode. The model has been extended to allow the linkage between plasma conditions at the sheath edge and the plasma to be made. Comparisons are made between the results of the model and experimental measurements.

  1. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    NASA Astrophysics Data System (ADS)

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen

    2016-08-01

    Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case it performs much better than static adjusted radar data and data from rain gauges situated 2-3 km away.
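
    A minimal sketch of a dynamic mean-field bias adjustment is shown below: radar rainfall over the last few time steps is scaled so its accumulation matches the co-located gauge total over the same short window. This illustrates the idea of a short aggregation period only; it is not the authors' exact adjustment scheme, and the rainfall values are invented.

    ```python
    # Moving-window mean-field bias adjustment of radar rainfall against gauges.
    import numpy as np

    def adjust_radar(radar_series, gauge_series, window=3):
        """radar_series, gauge_series: 2-D arrays (time, station) with the same time step."""
        radar = np.asarray(radar_series, dtype=float)
        gauges = np.asarray(gauge_series, dtype=float)
        adjusted = radar.copy()
        for t in range(radar.shape[0]):
            lo = max(0, t - window)
            g_sum = gauges[lo:t + 1].sum()
            r_sum = radar[lo:t + 1].sum()
            bias = g_sum / r_sum if r_sum > 0 else 1.0   # mean-field bias factor
            adjusted[t] = radar[t] * bias
        return adjusted

    # Two gauges, ten 5-minute steps of rainfall (mm per step), made up for the example.
    gauges = np.array([[0, 0], [0.2, 0.1], [0.8, 1.0], [1.5, 1.2], [0.9, 1.1],
                       [0.4, 0.5], [0.1, 0.2], [0, 0], [0, 0], [0, 0]])
    radar = 0.6 * gauges + 0.05                          # radar underestimates here
    print(adjust_radar(radar, gauges, window=3).round(2))
    ```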

  2. Shaft adjuster

    DOEpatents

    Harry, H.H.

    1988-03-11

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which, when rotated, introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft, such as the center conductor in a pulse line machine, to be offset in any desired alignment position within the range of the apparatus. 3 figs.

  3. Shaft adjuster

    DOEpatents

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

  4. Simulated village locations in Thailand: A multi-scale model including a neural network approach

    PubMed Central

    Malanson, George P.; Entwisle, Barbara

    2010-01-01

    The simulation of rural land use systems in general, and rural settlement dynamics in particular, has developed with synergies of theory and methods for decades. Three current issues are: linking spatial patterns and processes, representing hierarchical relations across scales, and considering nonlinearity to address complex non-stationary settlement dynamics. We present a hierarchical simulation model to investigate complex rural settlement dynamics in Nang Rong, Thailand. This simulation uses sub-models to allocate new villages at three spatial scales. Regional and sub-regional models, which involve a nonlinear space-time autoregressive model implemented in a neural network approach, determine the number of new villages to be established. A dynamic village niche model, establishing a pattern-process link, was designed to enable the allocation of villages to specific locations. Spatiotemporal variability in model performance indicates that the pattern of village location changes as a settlement frontier advances from rice-growing lowlands to higher elevations. Experimental results demonstrate that this simulation model can enhance our understanding of settlement development in Nang Rong and thus give insight into complex land use systems in this area. PMID:21399748
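
    The following toy sketch illustrates the kind of nonlinear space-time autoregressive sub-model implemented with a neural network, as described above; the lagged predictors, network size and data are entirely synthetic assumptions, not the Nang Rong dataset or the authors' architecture.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Rows are region-years; columns are lagged counts of new villages in the
    # region and its neighbours (hypothetical space-time autoregressive inputs).
    X = rng.poisson(3.0, size=(200, 6)).astype(float)
    y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.5, 200)  # synthetic target

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X[:150], y[:150])
    print("held-out R^2:", round(model.score(X[150:], y[150:]), 2))
    ```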

  5. Modeling the development of biofilm density including active bacteria, inert biomass, and extracellular polymeric substances.

    PubMed

    Laspidou, Chrysi S; Rittmann, Bruce E

    2004-01-01

    We present the unified multi-component cellular automaton (UMCCA) model, which predicts quantitatively the development of the biofilm's composite density for three biofilm components: active bacteria, inert or dead biomass, and extracellular polymeric substances. The model also describes the concentrations of three soluble organic components (soluble substrate and two types of soluble microbial products) and oxygen. The UMCCA model is a hybrid discrete-differential mathematical model and introduces the novel feature of biofilm consolidation. Our hypothesis is that the fluid over the biofilm creates pressures and vibrations that cause the biofilm to consolidate, or pack itself to a higher density over time. Each biofilm compartment in the model output consolidates to a different degree that depends on the age of its biomass. The UMCCA model also adds a cellular automaton algorithm that identifies the path of least resistance and directly moves excess biomass along that path, thereby ensuring that the excess biomass is distributed efficiently. A companion paper illustrates the trends that the UMCCA model is able to represent and shows a comparison with experimental results. PMID:15276752
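
    The sketch below illustrates the general idea of age-dependent consolidation, with each compartment's composite density packing toward a maximum value as its biomass ages; the exponential form, parameter names and numbers are illustrative assumptions, not the UMCCA equations themselves.

    ```python
    import numpy as np

    def consolidated_density(age_days, rho_initial=30.0, rho_max=80.0, rate=0.1):
        """Illustrative age-dependent consolidation: a biofilm compartment packs
        from an initial composite density toward a maximum density as its biomass
        ages. Units (g/L) and the exponential form are assumptions."""
        age = np.asarray(age_days, dtype=float)
        return rho_max - (rho_max - rho_initial) * np.exp(-rate * age)

    print(np.round(consolidated_density([0.0, 5.0, 20.0, 60.0]), 1))
    ```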

  6. A feedback model for leukemia including cell competition and the action of the immune system

    NASA Astrophysics Data System (ADS)

    Balea, S.; Halanay, A.; Neamtu, M.

    2014-12-01

    A mathematical model, coupling the dynamics of short-term stem-like cells and mature leukocytes in leukemia with that of the immune system, is investigated. The model is described by a system of nine delay differential equations with nine delays. Three equilibrium points E0, E1, E2 are highlighted. The stability and the existence of the Hopf bifurcation for the equilibrium points are investigated. In the analysis of the model, the rate of asymmetric division and the rate of symmetric division are very important.

  7. Sensitivity of palaeotidal models of the northwest European shelf seas to glacial isostatic adjustment since the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Ward, Sophie L.; Neill, Simon P.; Scourse, James D.; Bradley, Sarah L.; Uehara, Katsuto

    2016-11-01

    The spatial and temporal distribution of relative sea-level change over the northwest European shelf seas has varied considerably since the Last Glacial Maximum, due to eustatic sea-level rise and a complex isostatic response to deglaciation of both near- and far-field ice sheets. Because of the complex pattern of relative sea level changes, the region is an ideal focus for modelling the impact of significant sea-level change on shelf sea tidal dynamics. Changes in tidal dynamics influence tidal range, the location of tidal mixing fronts, dissipation of tidal energy, shelf sea biogeochemistry and sediment transport pathways. Significant advancements in glacial isostatic adjustment (GIA) modelling of the region have been made in recent years, and earlier palaeotidal models of the northwest European shelf seas were developed using output from less well-constrained GIA models as input to generate palaeobathymetric grids. We use the most up-to-date and well-constrained GIA model for the region as palaeotopographic input for a new high resolution, three-dimensional tidal model (ROMS) of the northwest European shelf seas. With focus on model output for 1 ka time slices from the Last Glacial Maximum (taken as being 21 ka BP) to present day, we demonstrate that spatial and temporal changes in simulated tidal dynamics are very sensitive to relative sea-level distribution. The new high resolution palaeotidal model is considered a significant improvement on previous depth-averaged palaeotidal models, in particular where the outputs are to be used in sediment transport studies, where consideration of the near-bed stress is critical, and for constraining sea level index points.

  8. A Model for Predicting Grain Boundary Cracking in Polycrystalline Viscoplastic Materials Including Scale Effects

    SciTech Connect

    Allen, D.H.; Helms, K.L.E.; Hurtado, L.D.

    1999-04-06

    A model is developed herein for predicting the mechanical response of inelastic crystalline solids. Particular emphasis is given to the development of microstructural damage along grain boundaries, and to the interaction of this damage with intragranular inelasticity caused by dislocation dissipation mechanisms. The model is developed within the concepts of continuum mechanics, with special emphasis on the development of internal boundaries in the continuum by utilizing a cohesive zone model based on fracture mechanics. In addition, the crystalline grains are assumed to be characterized by nonlinear viscoplastic material behavior in order to account for dislocation generation and migration. Due to the nonlinearities introduced by the crack growth and the viscoplastic constitution, a numerical algorithm is utilized to solve representative problems. Implementation of the model into a finite element computational algorithm is therefore briefly described. Finally, sample calculations are presented for a polycrystalline titanium alloy with particular focus on the effects of scale on the predicted response.
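
    As a concrete but generic example of the cohesive zone ingredient mentioned above, the sketch below implements a bilinear traction-separation law; this particular law and all parameter values are assumptions for illustration, not those used in the paper.

    ```python
    import numpy as np

    def bilinear_cohesive_traction(delta, delta_0=1e-4, delta_f=1e-3, t_max=100.0):
        """Bilinear cohesive zone law: traction rises linearly to t_max at the
        damage-initiation opening delta_0, then softens linearly to zero at the
        final opening delta_f (complete grain-boundary separation)."""
        delta = np.asarray(delta, dtype=float)
        rising = t_max * delta / delta_0
        softening = t_max * (delta_f - delta) / (delta_f - delta_0)
        traction = np.where(delta <= delta_0, rising, softening)
        return np.clip(traction, 0.0, None)   # zero traction once delta >= delta_f

    print(bilinear_cohesive_traction([5e-5, 1e-4, 5e-4, 2e-3]))
    ```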

  9. An accurate simulation model for single-photon avalanche diodes including important statistical effects

    NASA Astrophysics Data System (ADS)

    Qiuyang, He; Yue, Xu; Feifei, Zhao

    2013-10-01

    An accurate and complete circuit simulation model for single-photon avalanche diodes (SPADs) is presented. The derived model is able not only to simulate the static DC and dynamic AC behaviors of an SPAD operating in Geiger mode, but also to emulate the second breakdown and forward bias behaviors. In particular, it considers important statistical effects, such as dark-counting and after-pulsing phenomena. The developed model is implemented in the Verilog-A description language and can be run directly in commercial simulators such as Cadence Spectre. The Spectre simulation results show very good agreement with the experimental results reported in the open literature. The model offers high simulation accuracy and a very fast simulation rate.
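
    The statistical effects mentioned above can be illustrated with a short Monte Carlo sketch of dark counts plus afterpulsing; it is written in Python rather than Verilog-A, and all rates and probabilities are assumed values, not extracted device parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_dark_and_afterpulses(t_total=1e-3, dark_rate=1e4,
                                      p_afterpulse=0.02, trap_tau=1e-7):
        """Dark counts as a Poisson process; each avalanche may trigger one
        afterpulse delayed by an exponentially distributed trap-release time."""
        n_dark = rng.poisson(dark_rate * t_total)
        events = list(rng.uniform(0.0, t_total, n_dark))
        afterpulses = [t + rng.exponential(trap_tau)
                       for t in events if rng.random() < p_afterpulse]
        return np.sort(np.array(events + afterpulses))

    times = simulate_dark_and_afterpulses()
    print(f"{times.size} avalanche events in 1 ms")
    ```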

  10. The Dynamic Modelling of a Spur Gear in Mesh Including Friction and a Crack

    NASA Astrophysics Data System (ADS)

    Howard, Ian; Jia, Shengxiang; Wang, Jiande

    2001-09-01

    To improve the current generation of diagnostic techniques, many researchers are actively developing advanced dynamic models of gear case vibration to ascertain the effect of different types of gear train damage. This paper details a simplified gear dynamic model aimed at exploring the effect of friction on the resultant gear case vibration. The model incorporates the effect of variations in gear tooth torsional mesh stiffness, developed using finite element analysis, as the gears mesh together. The method of introducing the frictional force between teeth into the dynamic equations is given. The results with and without friction were compared using Matlab and Simulink models developed from the differential equations. The effects a single tooth crack has on the frequency spectrum and on the common diagnostic functions of the resulting gearbox component vibrations are also shown.
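
    A minimal single-degree-of-freedom sketch of a gear mesh with time-varying stiffness and a cracked tooth is given below; it omits the friction term and uses hypothetical parameter values, so it only illustrates the class of model discussed, not the authors' finite-element-derived formulation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Single-DOF relative motion x along the line of action:
    #   m_e x'' + c x' + k(t) x = F_static,
    # with a rectangular-wave mesh stiffness alternating between single- and
    # double-tooth contact and a stiffness drop once per revolution (crack).
    m_e, c, F = 0.05, 50.0, 500.0          # kg, N s/m, N (hypothetical)
    k1, k2 = 1.0e8, 1.6e8                  # single/double contact stiffness, N/m
    f_mesh, f_rev, crack_loss = 600.0, 25.0, 0.3

    def mesh_stiffness(t):
        k = k2 if (t * f_mesh) % 1.0 < 0.6 else k1      # contact ratio ~1.6
        if (t * f_rev) % 1.0 < 1.0 / 24.0:              # cracked tooth in mesh
            k *= 1.0 - crack_loss
        return k

    def rhs(t, y):
        x, v = y
        return [v, (F - c * v - mesh_stiffness(t) * x) / m_e]

    sol = solve_ivp(rhs, (0.0, 0.2), [F / k2, 0.0], max_step=1e-5)
    print("peak dynamic deflection (m):", sol.y[0].max())
    ```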

  11. Global warming in a coupled climate model including oceanic eddy-induced advection

    NASA Astrophysics Data System (ADS)

    Hirst, Anthony C.; Gordon, Hal B.; O'Farrell, Siobhan P.

    The Gent and McWilliams (GM) parameterization for large-scale water transport caused by mesoscale oceanic eddies is introduced into the oceanic component of a global coupled ocean-atmosphere model. Parallel simulations with and without the GM scheme are performed to examine the effect of this parameterization on model behavior under constant atmospheric CO2 and on the model response to increasing CO2. The control (constant CO2) runs show substantial differences in the oceanic stratification and extent of convection, similar to differences found previously using uncoupled ocean models. The transient (increasing CO2) runs show moderate differences in the rate of oceanic heat sequestration (less in the GM case), as expected based on passive tracer uptake studies. However, the surface warming is weaker in the GM case, especially over the Southern Ocean, which is contrary to some recent supposition. Reasons for the reduced warming in the GM case are discussed.

  12. Progress in turbulence modeling for complex flow fields including effects of compressibility

    NASA Technical Reports Server (NTRS)

    Wilcox, D. C.; Rubesin, M. W.

    1980-01-01

    Two second-order-closure turbulence models were devised that are suitable for predicting properties of complex turbulent flow fields in both incompressible and compressible fluids. One model is of the "two-equation" variety in which closure is accomplished by introducing an eddy viscosity which depends on both a turbulent mixing energy and a dissipation rate per unit energy, that is, a specific dissipation rate. The other model is a "Reynolds stress equation" (RSE) formulation in which all components of the Reynolds stress tensor and turbulent heat-flux vector are computed directly and are scaled by the specific dissipation rate. Computations based on these models are compared with measurements for the following flow fields: (a) low speed, high Reynolds number channel flows with plane strain or uniform shear; (b) equilibrium turbulent boundary layers with and without pressure gradients or effects of compressibility; and (c) flow over a convex surface with and without a pressure gradient.

  13. A 1D coupled Schroedinger drift-diffusion model including collisions

    SciTech Connect

    Baro, M.; Abdallah, N. Ben; Degond, P.; El Ayyadi, A.

    2005-02-10

    We consider a one-dimensional coupled stationary Schroedinger drift-diffusion model for quantum semiconductor device simulations. The device domain is decomposed into a part with large quantum effects (quantum zone) and a part where quantum effects are negligible (classical zone). We give boundary conditions at the classical-quantum interface which are current preserving. Collisions within the quantum zone are introduced via a Pauli master equation. To illustrate its validity, we apply the model to three resonant tunneling diodes.

  14. Accurate and efficient modeling of global seismic wave propagation for an attenuative Earth model including the center

    NASA Astrophysics Data System (ADS)

    Toyokuni, Genti; Takenaka, Hiroshi

    2012-06-01

    We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since Earth material behaves as both an elastic solid and a viscous fluid, we must solve the stress-strain relations of a viscoelastic material, including attenuative structures. These relations express the stress as a convolution integral in time, which makes viscoelasticity difficult to treat in time-domain computations such as the FDM. However, the so-called memory-variable method, invented in the 1980s and subsequently improved in Cartesian coordinates, overcomes this difficulty. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce a multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion around the Earth center. In addition, we propose a technique to avoid the singularity problem of the wave equation in spherical coordinates at the Earth center. We develop a scheme to calculate wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic

  15. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation

    NASA Astrophysics Data System (ADS)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2011-12-01

    Sugarcane is currently the most efficient bioenergy crop with regard to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels if they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugar cane simulations. Observed LAI data are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can reproduce satisfactorily the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugar cane biomass production for different locations and technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.

  16. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach in different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The

  17. A two-phase solid/fluid model for dense granular flows including dilatancy effects

    NASA Astrophysics Data System (ADS)

    Mangeney, Anne; Bouchut, Francois; Fernandez-Nieto, Enrique; Koné, El-Hadj; Narbona-Reina, Gladys

    2016-04-01

    Describing grain/fluid interaction in debris flow models is still an open and challenging issue with key impact on hazard assessment [Iverson et al., 2010]. We present here a two-phase two-thin-layer model for fluidized debris flows that takes into account dilatancy effects. It describes the velocity of both the solid and the fluid phases, the compression/dilatation of the granular media and its interaction with the pore fluid pressure [Bouchut et al., 2016]. The model is derived from a 3D two-phase model proposed by Jackson [2000] based on the 4 equations of mass and momentum conservation within the two phases. This system has 5 unknowns: the solid and fluid velocities, the solid and fluid pressures and the solid volume fraction. As a result, an additional equation inside the mixture is necessary to close the system. Surprisingly, this issue is inadequately accounted for in the models that have been developed on the basis of Jackson's work [Bouchut et al., 2015]. In particular, Pitman and Le [2005] replaced this closure simply by imposing an extra boundary condition at the surface of the flow. When making a shallow expansion, this condition can be considered as a closure condition. However, the corresponding model cannot account for a dissipative energy balance. We propose here an approach to correctly deal with the thermodynamics of Jackson's model by closing the mixture equations by a weak compressibility relation following Roux and Radjai [1998]. This relation implies that the occurrence of dilation or contraction of the granular material in the model depends on whether the solid volume fraction is respectively higher or lower than a critical value. When dilation occurs, the fluid is sucked into the granular material, the pore pressure decreases and the friction force on the granular phase increases. On the contrary, in the case of contraction, the fluid is expelled from the mixture, the pore pressure increases and the friction force diminishes. To
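
    The closure described above can be sketched qualitatively as follows: the dilatancy angle grows with the excess of the solid volume fraction over its critical value, and its sign determines whether pore pressure drops (dilation) or rises (contraction); the coefficients below are illustrative assumptions, not the calibrated values of the model.

    ```python
    def dilatancy_response(phi, phi_c=0.58, k_dilatancy=4.4, mu_c=0.6):
        """Roux & Radjai (1998)-type weak compressibility closure, qualitatively:
        tan(psi) grows with (phi - phi_c); dilation sucks pore fluid in, lowers
        pore pressure and raises the effective friction, contraction does the
        reverse."""
        tan_psi = k_dilatancy * (phi - phi_c)       # dilatancy angle
        mu_eff = mu_c + tan_psi                     # effective friction coefficient
        regime = ("dilation: fluid sucked in, pore pressure drops, friction rises"
                  if tan_psi > 0 else
                  "contraction: fluid expelled, pore pressure rises, friction drops")
        return tan_psi, mu_eff, regime

    for phi in (0.54, 0.62):
        print(phi, dilatancy_response(phi))
    ```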

  18. Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes

    SciTech Connect

    García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M.; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel

    2015-01-15

    Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficient estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a co-digestion continuous experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes individually (among other residues), following a new protocol for batch tests. In addition, decoupled disintegration kinetics for the readily and slowly biodegradable fractions of the solid wastes were considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating five fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rates ranging from 2.0 to 4.7 g VS/L d. The model (built in Matlab/Simulink) fitted the experimental results from both batch and semi-continuous modes to a large extent and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
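
    A minimal sketch of the decoupled, dual-fraction first-order disintegration/hydrolysis kinetics described above is given below; the rate constants, the fractionation and the simple first-order uptake of the soluble pool are assumed values for illustration, not the calibrated ADM1-based parameters.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_r, k_s, k_uptake = 1.2, 0.15, 0.8       # 1/d, hypothetical rate constants
    f_ready = 0.6                             # readily biodegradable fraction

    def rhs(t, y):
        Xr, Xs, S = y                         # readily/slowly degradable solids, soluble pool
        return [-k_r * Xr,
                -k_s * Xs,
                k_r * Xr + k_s * Xs - k_uptake * S]

    X0 = 10.0                                 # g VS/L of substrate added at t = 0
    sol = solve_ivp(rhs, (0.0, 30.0), [f_ready * X0, (1.0 - f_ready) * X0, 0.0],
                    t_eval=np.linspace(0.0, 30.0, 7))
    print(np.round(sol.y, 3))
    ```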

  19. A model of protein translation including codon bias, nonsense errors, and ribosome recycling.

    PubMed

    Gilchrist, Michael A; Wagner, Andreas

    2006-04-21

    We present and analyse a model of protein translation at the scale of an individual messenger RNA (mRNA) transcript. The model we develop is unique in that it incorporates the phenomena of ribosome recycling and nonsense errors. The model conceptualizes translation as a probabilistic wave of ribosome occupancy traveling down a heterogeneous medium, the mRNA transcript. Our results show that the heterogeneity of the codon translation rates along the mRNA results in short-scale spikes and dips in the wave. Nonsense errors attenuate this wave on a longer scale, while ribosome recycling reinforces it. We find that the combination of nonsense errors and codon usage bias can have a large effect on the probability that a ribosome will completely translate a transcript. We also elucidate how these forces interact with ribosome recycling to determine the overall translation rate of an mRNA transcript. We derive a simple cost function for nonsense errors using our model and apply this function to the yeast (Saccharomyces cerevisiae) genome. Using this function we are able to detect position-dependent selection on codon bias which correlates with gene expression levels, as predicted a priori. These results indirectly validate our underlying model assumptions and confirm that nonsense errors can play an important role in shaping codon usage bias. PMID:16171830
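
    The competing-rates reasoning behind nonsense errors can be sketched as follows: at each codon the ribosome either elongates or drops off, so the probability of complete translation is a product over codons; the rates used below are illustrative, not the fitted yeast values.

    ```python
    import numpy as np

    def completion_probability(elongation_rates, nonsense_rate=5e-4):
        """Probability that a ribosome completes translation when, at each codon,
        elongation (codon-specific rate, 1/s) competes with a nonsense error
        (constant drop-off rate, 1/s)."""
        c = np.asarray(elongation_rates, dtype=float)
        return float(np.prod(c / (c + nonsense_rate)))

    rng = np.random.default_rng(2)
    codon_rates = rng.uniform(5.0, 35.0, size=400)   # a 400-codon transcript
    print(round(completion_probability(codon_rates), 4))
    ```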

  20. A bone remodelling model including the effect of damage on the steering of BMUs.

    PubMed

    Martínez-Reina, J; Reina, I; Domínguez, J; García-Aznar, J M

    2014-04-01

    Bone remodelling in cortical bone is performed by the so-called basic multicellular units (BMUs), which produce osteons after completing the remodelling sequence. Burger et al. (2003) hypothesized that BMUs follow the direction of the prevalent local stress in the bone. More recently, Martin (2007) has shown that BMUs must be somehow guided by microstructural damage as well. The interaction of both variables, strain and damage, in the guidance of BMUs has been incorporated into a bone remodelling model for cortical bone. This model accounts for variations in porosity, anisotropy and damage level. The bone remodelling model has been applied to a finite element model of the diaphysis of a human femur. The trajectories of the BMUs have been analysed throughout the diaphysis and compared with the orientation of osteons measured experimentally. Some interesting observations, like the typical fan arrangement of osteons near the periosteum, can be explained with the proposed remodelling model. Moreover, the efficiency of BMUs in damage repairing has been shown to be greater if BMUs are guided by damage.

  1. Simulation of tumor induced angiogenesis using an analytical adaptive modeling including dynamic sprouting and blood flow modeling.

    PubMed

    Naghavi, Nadia; Hosseini, Farideh S; Sardarabadi, Mohammad; Kalani, Hadi

    2016-09-01

    In this paper, an adaptive model for tumor induced angiogenesis is developed that integrates generation and diffusion of a growth factor originated from hypoxic cells, adaptive sprouting from a parent vessel, blood flow and structural adaptation. The proposed adaptive sprout spacing model (ASS) determines position, time and number of sprouts which are activated from a parent vessel and also the developed vascular network is modified by a novel sprout branching prediction algorithm. This algorithm couples local vascular endothelial growth factor (VEGF) concentrations, stresses due to the blood flow and stochastic branching to the structural reactions of each vessel segment in response to mechanical and biochemical stimuli. The results provide predictions for the time-dependent development of the network structure, including the position and diameters of each segment and the resulting distributions of blood flow and VEGF. Considering time delays between sprout progressions and number of sprouts activated at different time durations provides information about micro-vessel density in the network. Resulting insights could be useful for motivating experimental investigations of vascular pattern in tumor induced angiogenesis and development of therapies targeting angiogenesis. PMID:27179697

  2. An independent-atom-model description of ion-molecule collisions including geometric screening corrections

    NASA Astrophysics Data System (ADS)

    Lüdde, Hans Jürgen; Achenbach, Alexander; Kalkbrenner, Thilo; Jankowiak, Hans-Christian; Kirchner, Tom

    2016-04-01

    A new model to account for geometric screening corrections in an independent-atom-model description of ion-molecule collisions is introduced. The ion-molecule cross sections for net capture and net ionization are represented as weighted sums of atomic cross sections with weight factors that are determined from a geometric model of overlapping cross section areas. Results are presented for proton collisions with targets ranging from diatomic to complex polyatomic molecules. Significant improvement compared to simple additivity rule results and in general good agreement with experimental data are found. The flexibility of the approach opens up the possibility to study more detailed observables such as orientation-dependent and charge-state-correlated cross sections for a large class of complex targets ranging from biomolecules to atomic clusters.
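
    A simplified sketch of the weighted-sum ("screened additivity") idea is given below, with each weight estimated as the unshadowed fraction of that atom's projected disk via Monte Carlo sampling; the disk radii and this particular overlap construction are assumptions for illustration, not the authors' exact geometric model.

    ```python
    import numpy as np

    def net_cross_section(centers, radii, sigmas, n_samples=20000, seed=3):
        """Molecular cross section as a weighted sum of atomic cross sections,
        sigma_net = sum_j w_j * sigma_j, where w_j is the fraction of atom j's
        projected disk (in the plane transverse to the beam) not covered by the
        disks of the other atoms."""
        rng = np.random.default_rng(seed)
        centers = np.asarray(centers, dtype=float)
        radii = np.asarray(radii, dtype=float)
        weights = np.empty(len(radii))
        for j, (cj, rj) in enumerate(zip(centers, radii)):
            theta = rng.uniform(0.0, 2.0 * np.pi, n_samples)
            r = rj * np.sqrt(rng.uniform(0.0, 1.0, n_samples))
            pts = cj + np.column_stack((r * np.cos(theta), r * np.sin(theta)))
            covered = np.zeros(n_samples, dtype=bool)
            for k, (ck, rk) in enumerate(zip(centers, radii)):
                if k != j:
                    covered |= np.sum((pts - ck) ** 2, axis=1) < rk ** 2
            weights[j] = 1.0 - covered.mean()
        return float(np.dot(weights, sigmas))

    # Toy diatomic target with overlapping projected disks.
    print(net_cross_section([(0.0, 0.0), (1.0, 0.0)], [0.8, 0.8], [1.0, 1.0]))
    ```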

  3. Hybrid Model for Plasma Thruster Plume Simulation Including PIC-MCC Electrons Treatment

    SciTech Connect

    Alexandrov, A. L.; Bondar, Ye. A.; Schweigert, I. V.

    2008-12-31

    The simulation of a stationary plasma thruster plume is important for spacecraft design due to the possible interaction of the plume with the spacecraft surface. Such simulations are successfully performed using the particle-in-cell technique for describing the motion of charged particles, namely the propellant ions. In conventional plume models the electrons are treated using various fluid approaches. In this work, we suggest an alternative approach, where the electron kinetics is considered 'ab initio', using the particle-in-cell--Monte Carlo collision method. To avoid the large computational expense due to small time steps, the relaxation of the simulated plume plasma is split into the fast relaxation of the electron distribution function and the slow relaxation of the ions. The model is self-consistent but hybrid, since simultaneous electron and ion motion is not actually modeled. The obtained electron temperature profile is in good agreement with experiment.

  4. Callisto plasma interactions: Hybrid modeling including induction by a subsurface ocean

    NASA Astrophysics Data System (ADS)

    Lindkvist, Jesper; Holmström, Mats; Khurana, Krishan K.; Fatemi, Shahab; Barabash, Stas

    2015-06-01

    By using a hybrid plasma solver (ions as particles and electrons as a fluid), we have modeled the interaction between Callisto and Jupiter's magnetosphere for variable ambient plasma parameters. We compared the results with magnetometer data from flybys (C3, C9, and C10) by the Galileo spacecraft. Modeling the interaction between Callisto and Jupiter's magnetosphere is important to establish the origin of the magnetic field perturbations observed by Galileo and thought to be related to a subsurface ocean. Using typical upstream magnetospheric plasma parameters and a magnetic dipole corresponding to the inductive response inside the moon, we show that the model results agree well with observations for the C3 and C9 flybys, but poorly with the C10 flyby close to Callisto. The study does support the existence of a subsurface ocean at Callisto.

  5. Finding practical phenomenological models that include both photoresist behavior and etch process effects

    NASA Astrophysics Data System (ADS)

    Jung, Sunwook; Do, Thuy; Sturtevant, John

    2015-03-01

    For more than five decades, the semiconductor industry has overcome technology challenges with innovative ideas that have continued to enable Moore's Law. It is clear that multi-patterning lithography is vital for 20 nm half-pitch using 193i. Multi-patterning exposure sequences and pattern multiplication processes can create complicated tolerance accounting due to the variability associated with the component processes. It is essential to ensure good predictive accuracy of compact etch models used in multi-patterning simulation. New model forms have been developed to account for etch bias behavior at 20 nm and below. The new modeling components show good results in terms of global fitness and some improved prediction capability for specific features. We have also investigated a new methodology to make the etch model aware of 3D resist profiles.

  6. 3D frequency airborne electromagnetic modeling including topography with direct solution

    NASA Astrophysics Data System (ADS)

    Li, W.; Zeng, Z.

    2015-12-01

    Three-dimensional modeling of frequency-domain airborne electromagnetic data is vital to improving the understanding of electromagnetic (EM) responses collected in increasingly complex geologic settings. We developed a scheme for 3D airborne electromagnetic modeling in the frequency domain, including topography, using edge finite elements. The rectangular mesh can be transformed to a hexahedral one in order to simulate topographic effects. The finite element algorithm uses a single edge shape function at each edge of the hexahedral elements, guaranteeing the continuity of the tangential electric field while conserving the continuity of magnetic flux at boundaries. Source singularities are eliminated through a secondary-field approach, in which the primary fields are computed analytically for a homogeneous or 1D layered background and the secondary fields are computed using edge finite elements. The linear system of equations is solved using a massively parallel multifrontal solver, because such solvers are robust for indefinite and ill-conditioned linear systems. Parallel computing was investigated to mitigate the computational burden associated with the use of a direct solver and to assess the feasibility of 3D frequency-domain airborne electromagnetic forward modeling with edge finite elements. For the multisource problem, a direct solver is only competitive if the same factorization is reused to obtain solutions for multiple right-hand sides. We tested the proposed approach using 1D and 3D synthetic models, and the results demonstrate that it is robust and suitable for 3D frequency-domain airborne electromagnetic modeling. The code could thus be used to help design new surveys, as well as to estimate subsurface conductivities through the implementation of an appropriate inversion scheme.

  7. Including new equatorial African data in global Holocene magnetic field models

    NASA Astrophysics Data System (ADS)

    Korte, M.; Brown, M.; Frank, U.

    2012-04-01

    Global paleomagnetic field reconstructions of the Holocene are a useful tool to study the past evolution of the geomagnetic field at the Earth's surface and the core-mantle boundary, or to estimate shielding against galactic cosmic rays. This protection is currently weak over the South Atlantic anomaly, a feature stretching between South America and Africa. Knowledge of the long-term evolution of this anomaly and whether there are preferred longitudinal ranges of weak fields is required for a better understanding of the geodynamo process and to estimate past magnetic shielding, e.g., for any studies involving the production of cosmogenic isotopes. The distribution of archeo- and paleomagnetic data available for global field reconstructions is highly inhomogeneous. It is strongly biased towards Europe and particularly sparse for Africa and South America. New data from these regions are necessary to confirm or improve field descriptions in Holocene spherical harmonic magnetic field models particularly for the evolution of this presently anomalous region. We present new inclination and relative intensity records from two neighbouring lakes in southern Ethiopia: Chew Bahir and Lake Chamo. Measurements were taken on three sediment cores from Chew Bahir, in which the complete Holocene is preserved in the topmost 4 m, and one 17 m long composite profile from Lake Chamo, which spans approximately the last 7 ka. Our age models are constrained by 10 AMS radiocarbon ages through the Holocene. We investigate the influence of these new records on magnetic field models CALS3k.4 and CALS10k.1b by augmenting previously modeled data with our new data and performing the modeling with otherwise unchanged parameters. Model predictions particularly for the equatorial African region and surroundings are compared and differences discussed.

  8. SU-E-T-247: Multi-Leaf Collimator Model Adjustments Improve Small Field Dosimetry in VMAT Plans

    SciTech Connect

    Young, L; Yang, F

    2014-06-01

    Purpose: The Elekta beam modulator linac employs a 4-mm micro multileaf collimator (MLC) backed by a fixed jaw. Out-of-field dose discrepancies between treatment planning system (TPS) calculations and output water phantom measurements are caused by the 1-mm leaf gap required for all moving MLC leaves in a VMAT arc. In this study, MLC parameters are optimized to improve TPS out-of-field dose approximations. Methods: Static 2.4 cm square fields were created with a 1-mm leaf gap for MLC leaves that would normally park behind the jaw. Doses in the open field and leaf gap were measured with an A16 micro ion chamber and EDR2 film for comparison with corresponding point doses in the Pinnacle TPS. The MLC offset table and tip radius were adjusted until TPS point doses agreed with photon measurements. Improvements to the beam models were tested using static arcs consisting of square fields ranging from 1.6 to 14.0 cm, with 45° collimator rotation and a 1-mm leaf gap to replicate VMAT conditions. Gamma values for the 3-mm distance-to-agreement, 3% dose-difference criteria were evaluated using standard QA procedures with a cylindrical detector array. Results: The best agreement in point doses within the leaf gap and open field was achieved by offsetting the default rounded leaf end table by 0.1 cm and adjusting the leaf tip radius to 13 cm. Improvements in TPS models for 6 and 10 MV photon beams were more significant for field sizes of 3.6 cm or less, where agreement worsened progressively as field size decreased; for a 1.6 cm field size, the gamma passing rate increased from 56.1% to 98.8% after adjustment. Conclusion: The MLC optimization techniques developed will achieve greater dosimetric accuracy in small-field VMAT treatment plans for fixed-jaw linear accelerators. Accurate predictions of dose to organs at risk may reduce adverse effects of radiotherapy.
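
    For reference, the gamma evaluation used above can be sketched in one dimension as follows; the global normalisation and the synthetic dose profiles are assumptions of this illustration, not the QA data of the study.

    ```python
    import numpy as np

    def gamma_index_1d(x_eval, d_eval, x_ref, d_ref, dta=3.0, dd=0.03):
        """1D gamma (default 3 mm / 3 %): for each evaluated point, the minimum
        over reference points of sqrt((dx/DTA)^2 + (dD/(dd*Dmax))^2); a point
        passes if gamma <= 1. Positions in mm, global normalisation to Dmax."""
        d_max = np.max(d_ref)
        gammas = np.empty(len(x_eval))
        for i, (x, d) in enumerate(zip(x_eval, d_eval)):
            dist2 = ((x - x_ref) / dta) ** 2
            dose2 = ((d - d_ref) / (dd * d_max)) ** 2
            gammas[i] = np.sqrt(np.min(dist2 + dose2))
        return gammas

    x = np.linspace(0.0, 100.0, 201)                  # mm
    ref = np.exp(-((x - 50.0) / 15.0) ** 2)           # reference profile
    meas = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)   # 1 mm shift, 1 % scaling
    g = gamma_index_1d(x, meas, x, ref)
    print("pass rate:", np.mean(g <= 1.0))
    ```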

  9. A Two-Phase Solid/Fluid Model for Dense Granular Flows Including Dilatancy Effects

    NASA Astrophysics Data System (ADS)

    Mangeney, Anne; Bouchut, Francois; Fernandez-Nieto, Enrique; Narbona-Reina, Gladys

    2015-04-01

    We propose a thin layer depth-averaged two-phase model to describe solid-fluid mixtures such as debris flows. It describes the velocity of the two phases, the compression/dilatation of the granular media and its interaction with the pore fluid pressure, that itself modifies the friction within the granular phase (Iverson et al., 2010). The model is derived from a 3D two-phase model proposed by Jackson (2000) based on the 4 equations of mass and momentum conservation within the two phases. This system has 5 unknowns: the solid and fluid velocities, the solid and fluid pressures and the solid volume fraction. As a result, an additional equation inside the mixture is necessary to close the system. Surprisingly, this issue is inadequately accounted for in the models that have been developed on the basis of Jackson's work (Bouchut et al., 2014). In particular, Pitman and Le replaced this closure simply by imposing an extra boundary condition at the surface of the flow. When making a shallow expansion, this condition can be considered as a closure condition. However, the corresponding model cannot account for a dissipative energy balance. We propose here an approach to correctly deal with the thermodynamics of Jackson's equations. We close the mixture equations by a weak compressibility relation involving a critical density, or equivalently a critical pressure. Moreover, we relax one boundary condition, making it possible for the fluid to escape the granular media when compression of the granular mass occurs. Furthermore, we introduce second order terms in the equations making it possible to describe the evolution of the pore fluid pressure in response to the compression/dilatation of the granular mass without prescribing an extra ad-hoc equation for the pore pressure. We prove that the energy balance associated with this Jackson closure is dissipative, as well as its thin layer associated model. We present several numerical tests for the 1D case that are compared to the

  10. A Two-Phase Solid/Fluid Model for Dense Granular Flows Including Dilatancy Effects

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Bouchut, F.; Fernández-Nieto, E. D.; Narbona-Reina, G.; Kone, E. H.

    2014-12-01

    We propose a thin layer depth-averaged two-phase model to describe solid-fluid mixtures such as debris flows. It describes the velocity of the two phases, the compression/dilatation of the granular media and its interaction with the pore fluid pressure, that itself modifies the friction within the granular phase (Iverson et al., 2010). The model is derived from a 3D two-phase model proposed by Jackson (2000) based on the 4 equations of mass and momentum conservation within the two phases. This system has 5 unknowns: the solid and fluid velocities, the solid and fluid pressures and the solid volume fraction. As a result, an additional equation inside the mixture is necessary to close the system. Surprisingly, this issue is inadequately accounted for in the models that have been developed on the basis of Jackson's work (Bouchut et al., 2014). In particular, Pitman and Le replaced this closure simply by imposing an extra boundary condition at the surface of the flow. When making a shallow expansion, this condition can be considered as a closure condition. However, the corresponding model cannot account for a dissipative energy balance. We propose here an approach to correctly deal with the thermodynamics of Jackson's equations. We close the mixture equations by a weak compressibility relation involving a critical density, or equivalently a critical pressure. Moreover, we relax one boundary condition, making it possible for the fluid to escape the granular media when compression of the granular mass occurs. Furthermore, we introduce second order terms in the equations making it possible to describe the evolution of the pore fluid pressure in response to the compression/dilatation of the granular mass without prescribing an extra ad-hoc equation for the pore pressure. We prove that the energy balance associated with this Jackson closure is dissipative, as well as its thin layer associated model. We present several numerical tests for the 1D case that are compared to the

  11. Evaluation of European air quality modelled by CAMx including the volatility basis set scheme

    NASA Astrophysics Data System (ADS)

    Ciarelli, Giancarlo; Aksoyoglu, Sebnem; Crippa, Monica; Jimenez, Jose-Luis; Nemitz, Eriko; Sellegri, Karine; Äijälä, Mikko; Carbone, Samara; Mohr, Claudia; O'Dowd, Colin; Poulain, Laurent; Baltensperger, Urs; Prévôt, André S. H.

    2016-08-01

    Four periods of EMEP (European Monitoring and Evaluation Programme) intensive measurement campaigns (June 2006, January 2007, September-October 2008 and February-March 2009) were modelled using the regional air quality model CAMx with the VBS (volatility basis set) approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the periods of February-March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosol (OA). Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database AirBase. Sulfur dioxide (SO2) and ozone (O3) were found to be overestimated for all four periods, with O3 having the largest mean bias during the June 2006 and January-February 2007 periods (8.9 ppb and 12.3 ppb mean biases, respectively). In contrast, nitrogen dioxide (NO2) and carbon monoxide (CO) were found to be underestimated for all four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 for all four periods with average biases ranging from -2.1 to 1.0 µg m-3. Comparisons with AMS (aerosol mass spectrometer) measurements at different sites in Europe during February-March 2009 showed that in general the model overpredicts the inorganic aerosol fraction and underpredicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurement data were performed. For February-March 2009 the chamber case reduced the total OA concentrations by about 42 % on average. In contrast, a test based on ambient measurement data increased OA concentrations by about 42 % for the same period, bringing model and observations into better agreement
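
    The volatility basis set partitioning underlying the OA treatment can be sketched as the standard equilibrium calculation below; the bin values are generic placeholders, not the EURODELTA-III emission inputs.

    ```python
    import numpy as np

    def vbs_partitioning(c_total, c_star, n_iter=100):
        """Equilibrium VBS partitioning: a bin with saturation concentration
        C*_i and total (gas + particle) mass c_total_i has particle fraction
        xi_i = 1 / (1 + C*_i / C_OA), with C_OA = sum_i xi_i * c_total_i found
        by fixed-point iteration."""
        c_total = np.asarray(c_total, dtype=float)
        c_star = np.asarray(c_star, dtype=float)
        c_oa = max(0.1 * c_total.sum(), 1e-6)          # initial guess, ug/m3
        for _ in range(n_iter):
            xi = 1.0 / (1.0 + c_star / c_oa)
            c_oa = float(np.sum(xi * c_total))
        return xi, c_oa

    xi, c_oa = vbs_partitioning(c_total=[0.5, 1.0, 2.0, 4.0],
                                c_star=[0.1, 1.0, 10.0, 100.0])
    print(np.round(xi, 3), round(c_oa, 3))
    ```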

  12. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    SciTech Connect

    Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai

    2015-11-15

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Recently, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  13. Dusty Plasma Modeling of the Fusion Reactor Sheath Including Collisional-Radiative Effects

    SciTech Connect

    Dezairi, Aouatif; Samir, Mhamed; Eddahby, Mohamed; Saifaoui, Dennoun; Katsonis, Konstantinos; Berenguer, Chloe

    2008-09-07

    The structure and behavior of the sheath in Tokamak collisional plasmas has been studied. The sheath is modeled taking into account the presence of dust and the effects of charged-particle collisions and radiative processes. The latter may allow for optical diagnostics of the plasma.

  14. Improvement of subsurface process in land surface modeling including lateral flow under unsaturated zone

    NASA Astrophysics Data System (ADS)

    Kim, J.; Mohanty, B.

    2013-12-01

    Lateral subsurface flow is an important component of local water budgets through its direct impact on soil moisture. However, most land surface models are one-dimensional, considering only vertical interactions and neglecting the horizontal flow of water at the grid or sub-grid scales. Subsurface flow can be affected by surface topography and non-homogeneous soil properties controlling the lateral flow of water. In this study, we improved the subsurface flow process in a land surface model (the Community Land Model, CLM) by considering lateral flow based on topography and heterogeneous soil hydraulic properties in the unsaturated zone. The changes in flow direction derived from a topographic factor are used to consider the lateral movement of water near the surface. Furthermore, vertical and horizontal hydraulic conductivities for each layer in the unsaturated zone are estimated using different averaging methods and anisotropy factors. Based on the hydraulic conductivities of each layer for heterogeneous soil profiles, we considered lateral flow of soil water between soil columns. These approaches were tested at several different sites (e.g., field and watershed scales). The results yielded appropriate vertical and horizontal hydraulic conductivities with depth for each site and showed that the subsurface flow process in land surface models is improved by considering lateral flow.
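
    The layer-averaging step mentioned above can be illustrated with the classical series/parallel averages below (thickness-weighted harmonic mean for vertical flow across layers, arithmetic mean for horizontal flow along layers, plus an anisotropy factor); the numbers and this particular choice of averages are assumptions for illustration, not the CLM code.

    ```python
    import numpy as np

    def effective_conductivities(thickness, k_layers, anisotropy=1.0):
        """Effective vertical (harmonic mean, flow across layers) and horizontal
        (arithmetic mean, flow along layers) hydraulic conductivities of a
        layered soil column, with an optional horizontal anisotropy factor."""
        thickness = np.asarray(thickness, dtype=float)
        k = np.asarray(k_layers, dtype=float)
        k_vertical = thickness.sum() / np.sum(thickness / k)
        k_horizontal = anisotropy * np.sum(thickness * k) / thickness.sum()
        return k_vertical, k_horizontal

    # Three layers (m) with decreasing conductivity (m/s), 10x horizontal anisotropy.
    print(effective_conductivities([0.1, 0.3, 0.6], [1e-5, 5e-6, 1e-6], anisotropy=10.0))
    ```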

  15. LES studies of wind farms including wide turbine spacings and comparisons with the CWBL engineering model

    NASA Astrophysics Data System (ADS)

    Stevens, Richard; Gayme, Dennice; Meyers, Johan; Meneveau, Charles

    2015-11-01

    We present results from large eddy simulations (LES) of wind farms consisting of tens to hundreds of turbines with respective streamwise and spanwise spacings approaching 35 and 12 turbine diameters. Even in staggered farms where the distance between consecutive turbines in the flow direction is more than 50 turbine diameters, we observe visible wake effects. In aligned farms, the performance of the turbines in the fully developed regime, where the power output as function of the downstream position becomes constant, is shown to primarily depend on the streamwise distance between consecutive turbine rows. However, for other layouts the power production in the fully developed regime mainly depends on the geometrical mean turbine spacing (inverse turbine density). These findings agree very well with predictions from our recently developed coupled wake boundary layer (CWBL) model, which introduces a two way coupling between the wake (Jensen) and top-down model approaches (Stevens et al. JRSE 7, 023115, 2015). To further validate the CWBL model we apply it to the problem of determining the optimal wind turbine thrust coefficient for power maximization over the entire farm. The CWBL model predictions agree very well with recent LES results (Goit & Meyers, JFM 768, 5-50, 2015). FOM Fellowships for Young Energy Scientists (YES!), NSF (IIA 1243482, the WINDINSPIRE project), ERC (FP7-Ideas, 306471).
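
    For context, the wake component of a CWBL-type model is typically a Jensen (top-hat) wake; a minimal sketch is given below, with generic values assumed for the thrust coefficient, rotor diameter and wake expansion rate.

    ```python
    import numpy as np

    def jensen_deficit(u_inf, x, ct=0.75, rotor_diameter=100.0, k_wake=0.05):
        """Jensen wake model: axial induction a = (1 - sqrt(1 - Ct)) / 2 and
        centreline wake velocity u(x) = U_inf * (1 - 2a / (1 + 2*k_wake*x/D)^2)
        at downstream distance x."""
        a = 0.5 * (1.0 - np.sqrt(1.0 - ct))
        expansion = 1.0 + 2.0 * k_wake * np.asarray(x, dtype=float) / rotor_diameter
        return u_inf * (1.0 - 2.0 * a / expansion ** 2)

    spacings = np.array([7.0, 15.0, 35.0, 50.0]) * 100.0   # distances in m (D = 100 m)
    print(np.round(jensen_deficit(8.0, spacings), 2))      # wake recovery with spacing
    ```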

  16. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    SciTech Connect

    Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  17. Model of the western Laurentide Ice Sheet from glacio-isostatic adjustment analysis and revised margin locations

    NASA Astrophysics Data System (ADS)

    Gowan, E. J.; Tregoning, P.; Purcell, A.

    2013-12-01

    Uncertainties in ice sheet extent and thickness during the retreat of the western Laurentide Ice Sheet from the last glacial maximum affect estimates of its contribution to global climate and sea level change during the late Pleistocene and early Holocene. These difficulties arise from a lack of chronological constraints on the timing of margin retreat in many areas and a lack of observations of the glacio-isostatic deformation due to the ice sheet. We present a model of the western Laurentide ice sheet in North America based on new ice margin reconstructions and well dated glacial lake strandlines. The model of the Laurentide ice sheet is constructed based on the assumption of perfectly plastic, steady state conditions with temporally variable basal shear stress and margin location. Initial models of basal shear stress were based on modern surficial geology and geography, and adjusted in an iterative process to reflect the volume of ice needed to fit observations of earth deformation caused by the ice sheet. The ice margins were developed by determining the minimum timing of retreat and using that as a constraint on the absolute maximum possible ice margin location. By using the ice margin as the starting point of modelling, assumptions on the location of ice domes and saddles were avoided. Initial results of the modelling indicate that ice thickness remained below 1500 m throughout the Western Canadian Sedimentary Basin region at the last glacial maximum as a result of low basal shear stress. The modelled flow direction matches geomorphic ice flow indicators, lending confidence to the glaciological model. Ice sheet margin retreat was limited until after 15,000 cal yr BP. The most significant ice volume losses happened after retreat from southern Alberta and after retreat began on the Canadian Shield.
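
    The perfectly plastic assumption mentioned above yields the classical square-root thickness profile; the sketch below evaluates it for an assumed 20 kPa basal shear stress, which is only a placeholder for the low values discussed, not a number taken from the study.

    ```python
    import numpy as np

    def plastic_ice_thickness(distance_from_margin_m, tau_b_pa=2.0e4,
                              rho_ice=910.0, g=9.81):
        """Steady-state perfectly plastic ice profile,
        h(x) = sqrt(2 * tau_b * x / (rho_ice * g)),
        with x the distance inland from the margin and tau_b the basal shear stress."""
        x = np.asarray(distance_from_margin_m, dtype=float)
        return np.sqrt(2.0 * tau_b_pa * x / (rho_ice * g))

    # Thickness (m) at 10 km, 100 km and 500 km from the margin.
    print(np.round(plastic_ice_thickness([1e4, 1e5, 5e5]), 0))
    ```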

  18. A population-competition model for analyzing transverse optical patterns including optical control and structural anisotropy

    NASA Astrophysics Data System (ADS)

    Tse, Y. C.; Chan, Chris K. P.; Luk, M. H.; Kwong, N. H.; Leung, P. T.; Binder, R.; Schumacher, Stefan

    2015-08-01

    We present a detailed study of a low-dimensional population-competition (PC) model suitable for analysis of the dynamics of certain modulational instability patterns in extended systems. The model is applied to analyze the transverse optical exciton-polariton patterns in semiconductor quantum well microcavities. It is shown that, despite its simplicity, the PC model describes quite well the competitions among various two-spot and hexagonal patterns when four physical parameters, representing density saturation, hexagon stabilization, anisotropy, and switching beam intensity, are varied. The combined effects of the last three parameters are given detailed considerations here. Although the model is developed in the context of semiconductor polariton patterns, its equations have more general applicability, and the results obtained here may benefit the investigation of other pattern-forming systems. The simplicity of the PC model allows us to organize all steady state solutions in a parameter space ‘phase diagram’. Each region in the phase diagram is characterized by the number and type of solutions. The main numerical task is to compute inter-region boundary surfaces, where some steady states either appear, disappear, or change their stability status. The singularity types of the boundary points, given by Catastrophe theory, are shown to provide a simple geometric overview of the boundary surfaces. With all stable and unstable steady states and the phase boundaries delimited and characterized, we have attained a comprehensive understanding of the structure of the four-parameter phase diagram. We analyze this rich structure in detail and show that it provides a transparent and organized interpretation of competitions among various patterns built on the hexagonal state space.

  19. A catchment-scale groundwater model including sewer pipe leakage in an urban system

    NASA Astrophysics Data System (ADS)

    Peche, Aaron; Fuchs, Lothar; Spönemann, Peter; Graf, Thomas; Neuweiler, Insa

    2016-04-01

    Keywords: pipe leakage, urban hydrogeology, catchment scale, OpenGeoSys, HYSTEM-EXTRAN. Wastewater leakage from subsurface sewer pipe defects leads to contamination of the surrounding soil and groundwater (Ellis, 2002; Wolf et al., 2004). Leakage rates at pipe defects have to be known in order to quantify contaminant input. Due to the inaccessibility of subsurface pipe defects, direct (in-situ) measurements of leakage rates are tedious and associated with a high degree of uncertainty (Wolf, 2006). Proposed catchment-scale models simplify leakage rates by neglecting unsaturated zone flow or by reducing spatial dimensions (Karpf & Krebs, 2013; Boukhemacha et al., 2015). In the present study, we present a physically based, three-dimensional numerical model incorporating flow in the pipe network, the saturated zone and the unsaturated zone to quantify leakage rates on the catchment scale. The model consists of the pipe network flow model HYSTEM-EXTRAN (itwh, 2002) coupled to the subsurface flow model OpenGeoSys (Kolditz et al., 2012). We also present the newly developed coupling scheme between the two flow models. Leakage functions specific to a pipe defect are derived from simulations of pipe leakage using spatially refined grids around the defect. In order to minimize computational effort, these leakage functions are built into the presented numerical model using unrefined grids around pipe defects. The resulting coupled model is capable of efficiently simulating spatially distributed pipe leakage together with subsurface water flow in a three-dimensional environment. References: Boukhemacha, M. A., Gogu, C. R., Serpescu, I., Gaitanaru, D., & Bica, I. (2015). A hydrogeological conceptual approach to study urban groundwater flow in Bucharest city, Romania. Hydrogeology Journal, 23(3), 437-450. doi:10.1007/s10040-014-1220-3. Ellis, J. B., & Revitt, D. M. (2002). Sewer losses and interactions with groundwater quality. Water Science and Technology, 45(3), 195

  20. Micromagnetic model for studies on Magnetic Tunnel Junction switching dynamics, including local current density

    NASA Astrophysics Data System (ADS)

    Frankowski, Marek; Czapkiewicz, Maciej; Skowroński, Witold; Stobiecki, Tomasz

    2014-02-01

    We present a model introducing the Landau-Lifshitz-Gilbert equation with a Slonczewski Spin-Transfer-Torque (STT) component in order to take into account the influence of spin-polarized current on the magnetization dynamics, which was developed as an Object Oriented MicroMagnetic Framework extension. We implement the following computations: the magnetoresistance of vertical channels is calculated from the local spin arrangement, and the local current density is used to calculate the in-plane and perpendicular STT components as well as the Oersted field caused by the vertical current flow. The model allows for an analysis of all listed components separately; therefore, the contribution of each physical phenomenon to the dynamic behavior of Magnetic Tunnel Junction (MTJ) magnetization is discussed. The simulated switching voltage is compared with the experimental data measured in MTJ nanopillars.
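
    For reference (a commonly used form; the sign and normalization conventions of the OOMMF extension itself may differ), the Landau-Lifshitz-Gilbert equation augmented with Slonczewski's in-plane and field-like spin-transfer-torque terms can be written as

        \[ \frac{d\mathbf{m}}{dt} = -\gamma\, \mathbf{m}\times\mathbf{H}_\mathrm{eff} + \alpha\, \mathbf{m}\times\frac{d\mathbf{m}}{dt} + \gamma\, a_J \, \mathbf{m}\times(\mathbf{m}\times\mathbf{p}) + \gamma\, b_J \, \mathbf{m}\times\mathbf{p}, \]

    where m is the unit magnetization, p the polarizer direction, and the prefactors a_J and b_J scale with the local current density, the same quantity that sources the Oersted field in the model.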

  1. A finite element model for wave propagation in an inhomogeneous material including experimental validation

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Dahl, Milo D.

    1987-01-01

    A finite element model was developed to solve for the acoustic pressure field in a nonhomogeneous region. The derivations from the governing equations assumed that the material properties could vary with position, resulting in a nonhomogeneous, variable-property, two-dimensional wave equation. This eliminated the necessity of finding the boundary conditions between the different materials. For a two-media region consisting of part air (in the duct) and part bulk absorber (in the wall), a model was used to describe the bulk absorber properties in two directions. An experiment to verify the numerical theory was conducted in a rectangular duct with no flow and absorbing material mounted on one wall. Changes in the sound field, consisting of planar waves, were measured on the wall opposite the absorbing material. As a function of distance along the duct, fairly good agreement was found in the standing wave pattern upstream of the absorber and in the decay of pressure level opposite the absorber.
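
    For orientation (a generic textbook form, not quoted from the report), a time-harmonic, variable-property acoustic wave equation of the kind solved by such finite element models is

        \[ \nabla \cdot \left( \frac{1}{\rho(\mathbf{x})} \, \nabla p \right) + \frac{\omega^2}{\rho(\mathbf{x})\, c^2(\mathbf{x})} \, p = 0, \]

    where the density ρ(x) and sound speed c(x) vary continuously between the air and the bulk absorber, which is why no explicit interface conditions between the two media are required.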

  2. A model for thermal oxidation of Si and SiC including material expansion

    NASA Astrophysics Data System (ADS)

    Christen, T.; Ioannidis, A.; Winkelmann, C.

    2015-02-01

    A model based on drift-diffusion-reaction kinetics for Si and SiC oxidation is discussed, which takes the material expansion into account with an additional convection term. The associated velocity field is determined self-consistently from the local reaction rate. The approach allows calculation of the densities of volatile species with nm resolution at the oxidation front. The model is illustrated with simulation results for the growth and impurity redistribution during Si oxidation and for carbon and silicon emission during SiC oxidation. The approach can be useful for the prediction of Si and/or C interstitial distribution, which is particularly relevant for the quality of metal-oxide-semiconductor electronic devices.
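
    For context, the classical Deal-Grove description of thermal oxide growth, which drift-diffusion-reaction models of this kind generalize, gives the oxide thickness implicitly as (a textbook relation, not a result of the paper)

        \[ x_\mathrm{ox}^2 + A\, x_\mathrm{ox} = B\, ( t + \tau ), \]

    where B is the parabolic and B/A the linear rate constant. The model discussed here replaces this lumped picture with spatially resolved species densities and a self-consistently determined expansion velocity field.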

  4. Ion-biomolecule collisions studied within the independent atom model including geometric screening corrections

    NASA Astrophysics Data System (ADS)

    Lüdde, H. J.; Achenbach, A.; Kalkbrenner, T.; Jankowiak, H. C.; Kirchner, T.

    2016-05-01

    A recently introduced model to account for geometric screening corrections in an independent-atom-model description of ion-molecule collisions is applied to proton collisions from amino acids and DNA and RNA nucleobases. The correction coefficients are obtained from using a pixel counting method (PCM) for the exact calculation of the effective cross sectional area that emerges when the molecular cross section is pictured as a structure of (overlapping) atomic cross sections. This structure varies with the relative orientation of the molecule with respect to the projectile beam direction and, accordingly, orientation-independent total cross sections are obtained from averaging the pixel count over many orientations. We present net capture and net ionization cross sections over wide ranges of impact energy and analyze the strength of the screening effect by comparing the PCM results with Bragg additivity rule cross sections and with experimental data where available. Work supported by NSERC, Canada.

  5. A model for thermal oxidation of Si and SiC including material expansion

    SciTech Connect

    Christen, T.; Ioannidis, A.; Winkelmann, C.

    2015-02-28

    A model based on drift-diffusion-reaction kinetics for Si and SiC oxidation is discussed, which takes the material expansion into account with an additional convection term. The associated velocity field is determined self-consistently from the local reaction rate. The approach allows calculation of the densities of volatile species with nm resolution at the oxidation front. The model is illustrated with simulation results for the growth and impurity redistribution during Si oxidation and for carbon and silicon emission during SiC oxidation. The approach can be useful for the prediction of Si and/or C interstitial distribution, which is particularly relevant for the quality of metal-oxide-semiconductor electronic devices.

  6. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominantly within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations and must be balanced against the need for models that yield accurate yet tractable results.
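
    A commonly used length-scaled two-parameter Weibull form for fiber strength (shown as a generic illustration; the modified distribution used in the study may differ in detail) is

        \[ P_f(\sigma; L) = 1 - \exp\!\left[ -\frac{L}{L_0} \left( \frac{\sigma}{\sigma_0} \right)^{m} \right], \]

    where σ_0 is the characteristic strength at reference gauge length L_0 and m is the Weibull modulus. Sampling fiber strengths from such a distribution within each repeating unit cell is what introduces the spatial strength variability that the multiscale analyses propagate to the global response.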

  7. Redefining the maximum sustainable yield for the Schaefer population model including multiplicative environmental noise.

    PubMed

    Bousquet, Nicolas; Duchesne, Thierry; Rivest, Louis-Paul

    2008-09-01

    The focus of this article is to investigate the biological reference points, such as the maximum sustainable yield (MSY), in a common Schaefer (logistic) surplus production model in the presence of a multiplicative environmental noise. This type of model is used in fisheries stock assessment as a first-hand tool for biomass modelling. Under the assumption that catches are proportional to the biomass, we derive new conditions on the environmental noise distribution such that stationarity exists and extinction is avoided. We then get new explicit results about the stationary behavior of the biomass distribution for a particular specification of the noise, namely the biomass distribution itself and a redefinition of the MSY and related quantities that now depend on the value of the variance of the noise. Consequently, we obtain a more precise vision of how less optimistic the stochastic version of the MSY can be than the traditionally used (deterministic) MSY. In addition, we give empirical conditions on the error variance to approximate our specific noise by a lognormal noise, the latter being more natural and leading to easier inference in this context. These conditions are mild enough to make the explicit results of this paper valid in a number of practical applications. The outcomes of two case-studies about northwest Atlantic haddock [Spencer, P.D., Collie, J.S., 1997. Effect of nonlinear predation rates on rebuilding the Georges Bank haddock (Melanogrammus aeglefinus) stock. Can. J. Fish. Aquat. Sci. 54, 2920-2929] and South Atlantic albacore tuna [Millar, R.B., Meyer, R., 2000. Non-linear state space modelling of fisheries biomass dynamics by using Metropolis-Hastings within-Gibbs sampling. Appl. Stat. 49, 327-342] are used to illustrate the impact of our results in bioeconomic terms.
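
    As a deterministic reference point (the zero-noise baseline only, not the paper's stochastic result), the Schaefer surplus production model with catch proportional to biomass and its classical maximum sustainable yield read

        \[ \frac{dB}{dt} = r B \left( 1 - \frac{B}{K} \right) - q E B, \qquad \mathrm{MSY}_\mathrm{det} = \frac{r K}{4}, \]

    with intrinsic growth rate r, carrying capacity K, catchability q, and fishing effort E. The article's contribution is to show how multiplicative environmental noise makes the appropriate reference point depend on the noise variance, and generally makes it less optimistic than this deterministic value.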

  8. A quark model calculation of γγ → ππ including final-state interactions

    SciTech Connect

    H.G. Blundell; S. Godfrey; G. Hay; Eric Swanson

    2000-02-01

    A quark model calculation of the processes γγ → π+π- and γγ → π0π0 is performed. At tree level, only charged pions couple to the initial-state photons, and neutral pions are not expected in the final state. However, a small but significant neutral-pion cross section is observed. We demonstrate that this may be accounted for by a rotation in isospin space induced by final-state interactions.

  9. Numerical modelling of the transport of trace gases including methane in the subsurface of Mars

    NASA Astrophysics Data System (ADS)

    Stevens, Adam H.; Patel, Manish R.; Lewis, Stephen R.

    2015-04-01

    We model the transport of gas through the martian subsurface in order to quantify the timescales of release of a trace gas with a source at depth using a Fickian model of diffusion through a putative martian regolith column. The model is then applied to the case of methane to determine if diffusive transport of gas can explain previous observations of methane in the martian atmosphere. We investigate which parameters in the model have the greatest effect on transport timescales and show that the calculated diffusivity is very sensitive to the pressure profile of the subsurface, but relatively insensitive to the temperature profile, though diffusive transport may be affected by other temperature dependent properties of the subsurface such as the local vapour pressure. Uncertainties in the structure and physical conditions of the martian subsurface also introduce uncertainties in the timescales calculated. It was found that methane may take several hundred thousand Mars-years to diffuse from a source at depth. Purely diffusive transport cannot explain transient release that varies on timescales of less than one martian year from sources such as serpentinization or methanogenic organisms at depths of more than 2 km. However, diffusion of gas released by the destabilisation of methane clathrate hydrates close to the surface, for example caused by transient mass wasting events or erosion, could produce a rapidly varying flux of methane into the atmosphere of more than 10^-3 kg m^-2 s^-1 over a duration of less than half a martian year, consistent with observations of martian methane variability. Seismic events, magmatic intrusions or impacts could also potentially produce similar patterns of release, but are far more complex to simulate.
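
    As an illustrative form (the study's regolith-column model is more detailed), one-dimensional Fickian transport of a trace gas through a porous subsurface is often written as

        \[ \frac{\partial C}{\partial t} = \frac{\partial}{\partial z} \left( D_\mathrm{eff} \, \frac{\partial C}{\partial z} \right), \qquad D_\mathrm{eff} = \frac{\phi}{\tau} \, D, \]

    where φ is the porosity, τ the tortuosity, and D the free-gas diffusivity. Because gas-phase diffusivities depend strongly on pressure, a form like this is consistent with the sensitivity to the subsurface pressure profile noted in the abstract.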

  10. Modeling grain size adjustments in the downstream reach following run-of-river development

    NASA Astrophysics Data System (ADS)

    Fuller, Theodore K.; Venditti, Jeremy G.; Nelson, Peter A.; Palen, Wendy J.

    2016-04-01

    Disruptions to sediment supply continuity caused by run-of-river (RoR) hydropower development have the potential to cause downstream changes in surface sediment grain size which can influence the productivity of salmon habitat. The most common approach to understanding the impacts of RoR hydropower is to study channel changes in the years following project development, but by then, any impacts are manifest and difficult to reverse. Here we use a more proactive approach, focused on predicting impacts in the project planning stage. We use a one-dimensional morphodynamic model to test the hypothesis that the greatest risk of geomorphic change and impact to salmon habitat from a temporary sediment supply disruption exists where predevelopment sediment supply is high and project design creates substantial sediment storage volume. We focus on the potential impacts in the reach downstream of a powerhouse for a range of development scenarios that are typical of projects developed in the Pacific Northwest and British Columbia. Results indicate that increases in the median bed surface size (D50) are minor if development occurs on low sediment supply streams (<1 mm for supply rates 1 × 10^-5 m^2 s^-1 or lower), and substantial for development on high sediment supply streams (8-30 mm for supply rates between 5.5 × 10^-4 and 1 × 10^-3 m^2 s^-1). However, high sediment supply streams recover rapidly to the predevelopment surface D50 (~1 year) if sediment supply can be reestablished.
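
    The morphodynamic backbone of such one-dimensional models is the Exner sediment mass balance, written generically as (grain-size evolution additionally requires some active-layer bookkeeping, which we assume here without knowing the study's exact formulation)

        \[ ( 1 - \lambda_p ) \, \frac{\partial \eta}{\partial t} = - \frac{\partial q_b}{\partial x}, \]

    where η is bed elevation, λ_p bed porosity, and q_b the volumetric bedload flux per unit width. A reach downstream of a powerhouse coarsens while the upstream supply feeding q_b is interrupted and can recover once that supply is reestablished.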

  11. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  12. A stepped leader model for lightning including charge distribution in branched channels

    SciTech Connect

    Shi, Wei; Zhang, Li; Li, Qingmin

    2014-09-14

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.

  13. Modelling topical photodynamic therapy treatment including the continuous production of Protoporphyrin IX

    NASA Astrophysics Data System (ADS)

    Campbell, C. L.; Brown, C. T. A.; Wood, K.; Moseley, H.

    2016-11-01

    Most existing theoretical models of photodynamic therapy (PDT) assume a uniform initial distribution of the photosensitive molecule, Protoporphyrin IX (PpIX). This is an adequate assumption when the prodrug is systemically administered; however, for topical PDT this is no longer a valid assumption. Topical application and subsequent diffusion of the prodrug results in an inhomogeneous distribution of PpIX, especially after short incubation times, prior to light illumination. In this work, a theoretical simulation of PDT is described in which the PpIX distribution depends on the incubation time and the treatment modality. Three steps of the PpIX production are considered. The first is the distribution of the topically applied prodrug, the second is the conversion from the prodrug to PpIX, and the third is the light distribution, which affects the PpIX distribution through photobleaching. The light distribution is modelled using a Monte Carlo radiation transfer model and indicates treatment depths of around 2 mm during daylight PDT and approximately 3 mm during conventional PDT. The results suggest that treatment depths are not only limited by the light penetration but also by the PpIX distribution.
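
    For a rough sense of the treatment depths quoted above (an illustrative diffusion-approximation estimate, not the Monte Carlo radiation transfer model actually used), the fluence rate deep in tissue decays approximately as

        \[ \phi(z) \approx \phi_0 \, e^{-\mu_\mathrm{eff} z}, \qquad \mu_\mathrm{eff} = \sqrt{3 \, \mu_a \, ( \mu_a + \mu_s' )}, \]

    with absorption coefficient μ_a and reduced scattering coefficient μ_s'. The full simulation couples the computed light field to the spatially varying PpIX produced from the diffusing prodrug and to photobleaching.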

  14. Analytic band Monte Carlo model for electron transport in Si including acoustic and optical phonon dispersion

    NASA Astrophysics Data System (ADS)

    Pop, Eric; Dutton, Robert W.; Goodson, Kenneth E.

    2004-11-01

    We describe the implementation of a Monte Carlo model for electron transport in silicon. The model uses analytic, nonparabolic electron energy bands, which are computationally efficient and sufficiently accurate for future low-voltage (<1 V) nanoscale device applications. The electron-lattice scattering is incorporated using an isotropic, analytic phonon-dispersion model, which distinguishes between the optical/acoustic and the longitudinal/transverse phonon branches. We show that this approach avoids introducing unphysical thresholds in the electron distribution function, and that it has further applications in computing detailed phonon generation spectra from Joule heating. A set of deformation potentials for electron-phonon scattering is introduced and shown to yield accurate transport simulations in bulk silicon across a wide range of electric fields and temperatures. The shear deformation potential is empirically determined at Ξ_u = 6.8 eV, and consequently, the isotropically averaged scattering potentials with longitudinal and transverse acoustic phonons are D_LA = 6.39 eV and D_TA = 3.01 eV, respectively, in reasonable agreement with previous studies. The room-temperature electron mobility in strained silicon is also computed and shown to be in better agreement with the most recent phonon-limited data available. As a result, we find that electron coupling with g-type phonons is about 40% lower, and the coupling with f-type phonons is almost twice as strong as previously reported.
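
    The analytic nonparabolic dispersion typically used in such Monte Carlo transport models (quoted here as the standard Kane form, not verbatim from the paper) is

        \[ E(\mathbf{k}) \, \bigl[ 1 + \alpha \, E(\mathbf{k}) \bigr] = \frac{\hbar^2 k^2}{2 m^*}, \]

    where α is the nonparabolicity parameter. Combining such bands with an isotropic analytic phonon dispersion keeps the scattering-rate integrals inexpensive while avoiding the unphysical thresholds mentioned above.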

  15. Numerical Modeling of the Surface Fatigue Crack Propagation Including the Closure Effect

    NASA Astrophysics Data System (ADS)

    Guchinsky, Ruslan; Petinov, Sergei

    2016-01-01

    Presently, modeling of surface fatigue crack growth for residual life assessment of structural elements is almost entirely based on application of Linear Elastic Fracture Mechanics (LEFM). Generally, it is assumed that the crack front does not essentially change its shape, although this is not always confirmed by experiment. Furthermore, the LEFM approach cannot be applied when the stress singularity vanishes due to material plasticity, one of the leading factors associated with material degradation and fracture. Also, evaluation of stress intensity factors meets difficulties associated with changes in the stress state along the crack front circumference. An approach is proposed for simulating the evolution of surface cracks based on application of the strain-life criterion for fatigue failure and on finite element modeling of damage accumulation. It takes into account the crack closure effect, the nonlinear behavior of damage accumulation, and the increase in material compliance due to damage advance. The damage accumulation technique was applied to model semi-elliptical crack growth from an initial defect in a steel compact specimen. The results of the simulation are in good agreement with published experimental data.

  16. Statistical method for sparse coding of speech including a linear predictive model

    NASA Astrophysics Data System (ADS)

    Rufiner, Hugo L.; Goddard, John; Rocha, Luis F.; Torres, María E.

    2006-07-01

    Recently, different methods for obtaining sparse representations of a signal using dictionaries of waveforms have been studied. They are often motivated by the way the brain seems to process certain sensory signals. Algorithms have been developed using a specific criterion to choose the waveforms occurring in the representation. The waveforms are chosen from a fixed dictionary, and some algorithms also construct them as part of the method. In the case of speech signals, most approaches do not take into consideration the important temporal correlations that are exhibited. It is known that these correlations are well approximated by linear models. Incorporating this a priori knowledge of the signal can facilitate the search for a suitable representation solution and also can help with its interpretation. Lewicki proposed a method to solve the noisy and overcomplete independent component analysis problem. In the present paper, we propose a modification of this statistical technique for obtaining a sparse representation using a generative parametric model. The representations obtained with the method proposed here and other techniques are applied to artificial data and real speech signals, and compared using different coding costs and sparsity measures. The results show that the proposed method achieves more efficient representations of these signals compared to the others. A qualitative analysis of these results is also presented, which suggests that the restriction imposed by the parametric model is helpful in discovering meaningful characteristics of the signals.
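
    To make the idea of choosing waveforms from a dictionary concrete, here is a minimal greedy matching-pursuit sketch in Python (a toy illustration only, not the statistical, Lewicki-style method with a linear predictive generative model that the paper proposes; all names are ours):

        import numpy as np

        def matching_pursuit(x, D, n_atoms):
            """Greedy sparse approximation of signal x over a dictionary D
            whose columns are assumed to be unit-norm waveforms (atoms)."""
            residual = x.astype(float).copy()
            coeffs = np.zeros(D.shape[1])
            for _ in range(n_atoms):
                correlations = D.T @ residual             # inner product with every atom
                k = int(np.argmax(np.abs(correlations)))  # pick the best-matching atom
                coeffs[k] += correlations[k]
                residual -= correlations[k] * D[:, k]     # remove its contribution
            return coeffs, residual

        # Toy usage: random unit-norm dictionary and a random "speech frame".
        rng = np.random.default_rng(0)
        D = rng.standard_normal((256, 512))
        D /= np.linalg.norm(D, axis=0)
        x = rng.standard_normal(256)
        coeffs, residual = matching_pursuit(x, D, n_atoms=10)
        print(np.count_nonzero(coeffs), np.linalg.norm(residual))

    The paper's point is that biasing such a search with an autoregressive (LPC-style) generative model of speech yields more efficient and more interpretable representations than dictionary search alone.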

  17. Modeling ozone removal to indoor materials, including the effects of porosity, pore diameter, and thickness.

    PubMed

    Gall, Elliott T; Siegel, Jeffrey A; Corsi, Richard L

    2015-04-01

    We develop an ozone transport and reaction model to determine reaction probabilities and assess the influence of physical properties such as porosity, pore diameter, and material thickness on reactive uptake of ozone to five materials. The one-dimensional model accounts for molecular diffusion from bulk air to the air-material interface, reaction at the interface, and diffusive transport and reaction through material pore volumes. Material-ozone reaction probabilities that account for internal transport and internal pore area, γ_ipa, are determined by a minimization of residuals between predicted and experimentally derived ozone concentrations. Values of γ_ipa are generally less than effective reaction probabilities (γ_eff) determined previously, likely because of the inclusion of diffusion into substrates and reaction with internal surface area (rather than the use of the horizontally projected external material areas). Estimates of γ_ipa average 1 × 10^-7, 2 × 10^-7, 4 × 10^-5, 2 × 10^-5, and 4 × 10^-7 for two types of cellulose paper, pervious pavement, Portland cement concrete, and an activated carbon cloth, respectively. The transport and reaction model developed here accounts for observed differences in ozone removal to varying thicknesses of the cellulose paper, and estimates a near constant γ_ipa as material thickness increases from 0.02 to 0.16 cm.
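
    For context, reaction probabilities are often connected to an observable deposition velocity through a series-resistance relation such as (a standard uptake formulation given for illustration; the paper's one-dimensional model additionally resolves diffusion and reaction inside the pore volume)

        \[ v_d = \left( \frac{1}{v_t} + \frac{4}{\gamma \, \bar{v}} \right)^{-1}, \qquad \bar{v} = \sqrt{\frac{8 R T}{\pi M}}, \]

    where v_t is the gas-side, transport-limited deposition velocity, v̄ the mean molecular speed of ozone, and γ the reaction probability. Defining γ against internal pore area rather than the projected external area is why γ_ipa comes out smaller than the previously reported effective values γ_eff.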

  18. Including dislocation flux in a continuum crystal plasticity model to produce size scale effects

    SciTech Connect

    Becker, R; Arsenlis, A; Bulatov, V V; Parks, D M

    2004-02-13

    A novel model has been developed to capture size scale and gradient effects within the context of continuum crystal plasticity by explicitly incorporating details of dislocation transport, coupling dislocation transport to slip, evolving spatial distributions of dislocations consistent with the flux, and capturing the interactions among various dislocation populations. Dislocation flux and density are treated as nodal degrees of freedom in the finite element model, and they are determined as part of the global system of equations. The creation, annihilation and flux of dislocations between elements are related by transport equations. Crystallographic slip is coupled to the dislocation flux and the stress state. The resultant gradients in dislocation density and local lattice rotations are analyzed for geometrically necessary and statistically stored dislocation contents that contribute to strength and hardening. Grain boundaries are treated as surfaces where dislocation flux is restricted depending on the relative orientations of the neighboring grains. Numerical results show different behavior near free surfaces and non-deforming surfaces resulting from differing levels of dislocation transmission. Simulations also show development of dislocation pile-ups at grain boundaries and an increase in flow strength reminiscent of the Hall-Petch model. The dislocation patterns have a characteristic size independent of the numerical discretization.
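
    The core bookkeeping in such a model is a transport (continuity) equation for each dislocation population, of the generic form (illustrative only, not the paper's exact discretized equations)

        \[ \frac{\partial \rho_i}{\partial t} + \nabla \cdot \bigl( \rho_i \, \mathbf{v}_i \bigr) = \dot{\rho}_i^{\,\mathrm{gen}} - \dot{\rho}_i^{\,\mathrm{ann}}, \]

    with density ρ_i, glide velocity v_i, and generation and annihilation source terms. Treating the densities and fluxes as nodal degrees of freedom is what lets density gradients, and hence size effects such as the Hall-Petch-like hardening, emerge directly from the finite element solution.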

  19. Climate change impact modelling needs to include cross-sectoral interactions

    NASA Astrophysics Data System (ADS)

    Harrison, Paula A.; Dunford, Robert W.; Holman, Ian P.; Rounsevell, Mark D. A.

    2016-09-01

    Climate change impact assessments often apply models of individual sectors such as agriculture, forestry and water use without considering interactions between these sectors. This is likely to lead to misrepresentation of impacts, and consequently to poor decisions about climate adaptation. However, no published research assesses the differences between impacts simulated by single-sector and integrated models. Here we compare 14 indicators derived from a set of impact models run within single-sector and integrated frameworks across a range of climate and socio-economic scenarios in Europe. We show that single-sector studies misrepresent the spatial pattern, direction and magnitude of most impacts because they omit the complex interdependencies within human and environmental systems. The discrepancies are particularly pronounced for indicators such as food production and water exploitation, which are highly influenced by other sectors through changes in demand, land suitability and resource competition. Furthermore, the discrepancies are greater under different socio-economic scenarios than different climate scenarios, and at the sub-regional rather than Europe-wide scale.

  20. Kinetic model of water disinfection using peracetic acid including synergistic effects.

    PubMed

    Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D

    2016-01-01

    The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.

  1. Parametric reduced-order models of battery pack vibration including structural variation and prestress effects

    NASA Astrophysics Data System (ADS)

    Hong, Sung-Kwon; Epureanu, Bogdan I.; Castanier, Matthew P.

    2014-09-01

    The goal of this work is to develop a numerical model for the vibration of hybrid electric vehicle (HEV) battery packs to enable probabilistic forced response simulations for the effects of variations. There are two important types of variations that affect their structural response significantly: the prestress that is applied when joining the cells within a pack; and the small, random structural property discrepancies among the cells of a battery pack. The main contributions of this work are summarized as follows. In order to account for these two important variations, a new parametric reduced order model (PROM) formulation is derived by employing three key observations: (1) the stiffness matrix can be parameterized for different levels of prestress, (2) the mode shapes of a battery pack with cell-to-cell variation can be represented as a linear combination of the mode shapes of the nominal system, and (3) the frame holding each cell has vibratory motion. A numerical example of an academic battery pack with pouch cells is presented to demonstrate that the PROM captures the effects of both prestress and structural variation on battery packs. The PROM is validated numerically by comparing full-order finite element models (FEMs) of the same systems.

  2. Comparison of lead isotopes with source apportionment models, including SOM, for air particulates.

    PubMed

    Gulson, Brian; Korsch, Michael; Dickson, Bruce; Cohen, David; Mizon, Karen; Davis, J Michael

    2007-08-01

    We have measured high precision lead isotopes in PM2.5 particulates from a highly trafficked site (Mascot) and rural site (Richmond) in the Sydney Basin, New South Wales, Australia to compare with isotopic data from total suspended particulates (TSP) from other sites in the Sydney Basin and evaluate relationships with source fingerprints obtained from multi-element PM2.5 data. The isotopic data for the period 1998 to 2004 show seasonal peaks and troughs that are more pronounced in the rural site for the PM2.5 samples but are consistent with the TSP. The Self Organising Map (SOM) method has been applied to the multi-element PM2.5 data to evaluate its use in obtaining fingerprints for comparison with standard statistical procedures (ANSTO model). As seasonal effects are also significant for the multi-element data, the SOM modelling is reported as site and season dependent. At the Mascot site, the ANSTO model exhibits decreasing 206Pb/204Pb ratios with increasing contributions of fingerprints for "secondary smoke" (industry), "soil", "smoke" and "seaspray". Similar patterns were shown by SOM winter fingerprints for both sites. At the rural site, there are large isotopic variations but for the majority of samples these are not associated with increased contributions from the main sources with the ANSTO model. For two winter sampling times, there are increased contributions from "secondary industry", "smoke", "soil" and seaspray with one time having a source or sources of Pb similar to that of Mascot. The only positive relationship between increasing 206Pb/204Pb ratio and source contributions is found at the rural site using the SOM summer fingerprints, both of which show a significant contribution from sulphur. Several of the fingerprints using either model have significant contributions from black carbon (BC) and/or sulphur (S) that probably derive from diesel fuels and industrial sources. Increased contributions from sources with the SOM summer

  3. A Variational Inverse Model Study of Amazonian Methane Emissions including Observations from the AMAZONICA campaign

    NASA Astrophysics Data System (ADS)

    Wilson, C. J.; Gloor, M.; Chipperfield, M.; Miller, J. B.; Gatti, L.

    2013-12-01

    Methane (CH4) is a greenhouse gas which is emitted from a range of anthropogenic and natural sources, and since the industrial revolution its mean atmospheric concentration has climbed dramatically, reaching values unprecedented in at least the past 650,000 years. CH4 produces a relatively high radiative forcing effect upon the Earth's climate, and its atmospheric lifetime of approximately 10 years makes it a more appealing target for the mitigation of climate change over short timescales than long-lived greenhouse gases such as carbon dioxide. However, the spatial and temporal variation of CH4 emissions is still not well understood, though in recent years a number of top-down and bottom-up studies have attempted to construct improved emission budgets. Some top-down studies may suffer from poor observational coverage in tropical regions, however, especially in the planetary boundary layer, where the atmosphere is highly sensitive to emissions. For example, although satellite observations often take a large volume of measurements in tropical regions, these retrievals are not usually sensitive to concentrations at the planet's surface. Methane emissions from the Amazon region, in particular, are often poorly constrained. Since emissions from this region, coming mainly from wetland and biomass burning sources, are thought to be relatively high, additional observations in this region would greatly help to constrain the geographical distribution of the global CH4 emission budget. In order to provide such measurements, the AMAZONICA project began to take regular flask measurements of CH4 and other trace gases from aircraft over four Amazonian sites from the year 2010 onwards. We first present a forward modelling study of these observations of Amazonian methane for the year 2010 using the TOMCAT Chemical Transport Model. The model is used to attribute variations at each site to a source type and region, and also to assess the ability of our current CH4 flux estimates to

  4. Improving Public Health DSSs by Including Saharan Dust Forecasts Through Incorporation of NASA's GOCART Model Results

    NASA Technical Reports Server (NTRS)

    Berglund, Judith

    2007-01-01

    Approximately 2-3 billion metric tons of soil dust are estimated to be transported in the Earth's atmosphere each year. Global transport of desert dust is believed to play an important role in many geochemical, climatological, and environmental processes. This dust carries minerals and nutrients, but it has also been shown to carry pollutants and viable microorganisms capable of harming human, animal, plant, and ecosystem health. Saharan dust, which impacts the eastern United States (especially Florida and the southeast) and U.S. Territories in the Caribbean primarily during the summer months, has been linked to increases in respiratory illnesses in this region and has been shown to carry other human, animal, and plant pathogens. For these reasons, this candidate solution recommends integrating Saharan dust distribution and concentration forecasts from the NASA GOCART global dust cycle model into a public health DSS (decision support system), such as the CDC's (Centers for Disease Control and Prevention's) EPHTN (Environmental Public Health Tracking Network), for the eastern United States and Caribbean for early warning purposes regarding potential increases in respiratory illnesses or asthma attacks, potential disease outbreaks, or bioterrorism. This candidate solution pertains to the Public Health National Application but also has direct connections to Air Quality and Homeland Security. In addition, the GOCART model currently uses the NASA MODIS aerosol product as an input and uses meteorological forecasts from the NASA GEOS-DAS (Goddard Earth Observing System Data Assimilation System) GEOS-4 AGCM. In the future, VIIRS aerosol products and perhaps CALIOP aerosol products could be assimilated into the GOCART model to improve the results.

  5. Extending Galactic Habitable Zone Modeling to Include the Emergence of Intelligent Life.

    PubMed

    Morrison, Ian S; Gowanlock, Michael G

    2015-08-01

    Previous studies of the galactic habitable zone have been concerned with identifying those regions of the Galaxy that may favor the emergence of complex life. A planet is deemed habitable if it meets a set of assumed criteria for supporting the emergence of such complex life. In this work, we extend the assessment of habitability to consider the potential for life to further evolve to the point of intelligence--termed the propensity for the emergence of intelligent life, φI. We assume φI is strongly influenced by the time durations available for evolutionary processes to proceed undisturbed by the sterilizing effects of nearby supernovae. The times between supernova events provide windows of opportunity for the evolution of intelligence. We developed a model that allows us to analyze these window times to generate a metric for φI, and we examine here the spatial and temporal variation of this metric. Even under the assumption that long time durations are required between sterilizations to allow for the emergence of intelligence, our model suggests that the inner Galaxy provides the greatest number of opportunities for intelligence to arise. This is due to the substantially higher number density of habitable planets in this region, which outweighs the effects of a higher supernova rate in the region. Our model also shows that φI is increasing with time. Intelligent life emerged at approximately the present time at Earth's galactocentric radius, but a similar level of evolutionary opportunity was available in the inner Galaxy more than 2 Gyr ago. Our findings suggest that the inner Galaxy should logically be a prime target region for searches for extraterrestrial intelligence and that any civilizations that may have emerged there are potentially much older than our own.

  6. Areal Rainfall Estimation Using Moving Cars - Computer Experiments Including Hydrological Modeling

    NASA Astrophysics Data System (ADS)

    Rabiei, E.; Haberlandt, U.; Sester, M.; Fitzner, D.; Wallner, M.

    2015-12-01

    The benefit of using fine temporal and spatial rainfall data resolution can be significant for hydrological modeling, especially for small-scale applications (e.g. urban hydrology). It has been observed by Rabiei et al. (2013) that moving cars can be a possible new source of data when used for measuring rainfall amount (RainCars). The optical sensors operating the windscreen wipers showed the potential of being used for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. The main objective of this study is to investigate the benefit of using RainCars for estimating areal rainfall when these errors are considered explicitly. To this end, radar rainfall is considered as the reference and the other sources of data, i.e. RainCars and pseudo stations, are extracted from radar data. The goal is to compare the areal rainfall estimation by RainCars with pseudo stations and reference data. The value of the additional data is not only assessed for areal rainfall estimation performance, but also using hydrological modeling. In fact, the reference data simulates the reference discharge. The other sources of data also simulate the discharge that is to be compared with the reference discharge. The results show that RainCars provide useful additional information for areal rainfall estimation and hydrological modelling, even if their measurement uncertainty is quite high. Rabiei, E., Haberlandt, U., Sester, M., Fitzner, D., 2013. Rainfall estimation using moving cars as rain gauges – laboratory experiments. Hydrol. Earth Syst. Sci., 17(11): 4701-4712.

  7. Individual welfare maximization in electricity markets including consumer and full transmission system modeling

    NASA Astrophysics Data System (ADS)

    Weber, James Daniel

    1999-11-01

    This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is

  9. Double pendulum model for a tennis stroke including a collision process

    NASA Astrophysics Data System (ADS)

    Youn, Sun-Hyun

    2015-10-01

    By means of adding a collision process between the ball and racket in the double pendulum model, we analyzed the tennis stroke. The ball and the racket system may be accelerated during the collision time; thus, the speed of the rebound ball does not simply depend on the angular velocity of the racket. A higher angular velocity sometimes gives a lower rebound ball speed. We numerically showed that the proper time-lagged racket rotation increased the speed of the rebound ball by 20%. We also showed that the elbow should move in the proper direction in order to add the angular velocity of the racket.
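
    A useful zeroth-order check on the collision step (an instantaneous, effective-mass estimate; the paper instead resolves the finite collision time within the double-pendulum dynamics) is the one-dimensional restitution formula

        \[ v_\mathrm{ball}' = \frac{( m_b - e \, m_r ) \, v_b + ( 1 + e ) \, m_r \, V_r}{m_b + m_r}, \]

    where m_b and v_b are the ball's mass and incoming speed, m_r and V_r the effective racket mass and speed at the impact point, and e the coefficient of restitution. Because the racket can keep accelerating during a finite contact time, the actual rebound speed need not follow this simple estimate, which is the kind of effect the time-lagged racket rotation exploits.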

  10. A mixing length model for the aqueous boundary layer including the effect of wave breaking on enhancing gas transfer

    NASA Astrophysics Data System (ADS)

    Donelan, M. A.; Soloviev, A. V.

    2016-05-01

    A mixing length model for air-water gas transfer is developed to include the effects of wave breaking. The model requires both the shear velocity induced by the wind and the integrated wave dissipation. Both of these can be calculated for tanks and oceans by a full spectrum wave model. The gas transfer model is calibrated, with laboratory tank measurements of carbon dioxide flux, and transported to oceanic conditions to yield air-sea transfer velocity versus wind speed.
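
    One common way to relate transfer velocity to near-surface turbulence (the small-eddy surface-renewal scaling, quoted for orientation; the paper's mixing-length formulation differs in detail) is

        \[ k_w \sim A \, ( \varepsilon \, \nu )^{1/4} \, Sc^{-1/2}, \]

    where ε is the turbulent kinetic energy dissipation rate near the surface, ν the kinematic viscosity of water, Sc the Schmidt number of the gas, and A an empirical constant. Folding wave-breaking dissipation into ε alongside wind-driven shear is the route by which breaking enhances gas transfer in models of this kind.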

  11. Analytical model for tilting proprotor aircraft dynamics, including blade torsion and coupled bending modes, and conversion mode operation

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1974-01-01

    An analytical model is developed for proprotor aircraft dynamics. The rotor model includes coupled flap-lag bending modes and blade torsion degrees of freedom. The rotor aerodynamic model is generally valid for high and low inflow, and for axial and nonaxial flight. For the rotor support, a cantilever wing is considered; incorporation of a more general support with this rotor model will be a straightforward matter.

  12. THREE-DIMENSIONAL MAGNETOHYDRODYNAMIC MODELING OF THE SOLAR WIND INCLUDING PICKUP PROTONS AND TURBULENCE TRANSPORT

    SciTech Connect

    Usmanov, Arcadi V.; Matthaeus, William H.; Goldstein, Melvyn L.

    2012-07-20

    To study the effects of interstellar pickup protons and turbulence on the structure and dynamics of the solar wind, we have developed a fully three-dimensional magnetohydrodynamic solar wind model that treats interstellar pickup protons as a separate fluid and incorporates the transport of turbulence and turbulent heating. The governing system of equations combines the mean-field equations for the solar wind plasma, magnetic field, and pickup protons and the turbulence transport equations for the turbulent energy, normalized cross-helicity, and correlation length. The model equations account for photoionization of interstellar hydrogen atoms and their charge exchange with solar wind protons, energy transfer from pickup protons to solar wind protons, and plasma heating by turbulent dissipation. Separate mass and energy equations are used for the solar wind and pickup protons, though a single momentum equation is employed under the assumption that the pickup protons are comoving with the solar wind protons. We compute the global structure of the solar wind plasma, magnetic field, and turbulence in the region from 0.3 to 100 AU for a source magnetic dipole on the Sun tilted by 0°-90° and compare our results with Voyager 2 observations. The results computed with and without pickup protons are superposed to evaluate quantitatively the deceleration and heating effects of pickup protons, the overall compression of the magnetic field in the outer heliosphere caused by deceleration, and the weakening of corotating interaction regions by the thermal pressure of pickup protons.

  13. Interpretation of thermoreflectance measurements with a two-temperature model including non-surface heat deposition

    NASA Astrophysics Data System (ADS)

    Regner, K. T.; Wei, L. C.; Malen, J. A.

    2015-12-01

    We develop a solution to the two-temperature diffusion equation in axisymmetric cylindrical coordinates to model heat transport in thermoreflectance experiments. Our solution builds upon prior solutions that account for two-channel diffusion in each layer of an N-layered geometry, but adds the ability to deposit heat at any location within each layer. We use this solution to account for non-surface heating in the transducer layer of thermoreflectance experiments that challenge the timescales of electron-phonon coupling. A sensitivity analysis is performed to identify important parameters in the solution and to establish a guideline for when to use the two-temperature model to interpret thermoreflectance data. We then fit broadband frequency domain thermoreflectance (BB-FDTR) measurements of SiO2 and platinum at a temperature of 300 K with our two-temperature solution to parameterize the gold/chromium transducer layer. We then refit BB-FDTR measurements of silicon and find that accounting for non-equilibrium between electrons and phonons in the gold layer does lessen the previously observed heating frequency dependence reported in Regner et al. [Nat. Commun. 4, 1640 (2013)] but does not completely eliminate it. We perform BB-FDTR experiments on silicon with an aluminum transducer and find limited heating frequency dependence, in agreement with time domain thermoreflectance results. We hypothesize that the discrepancy between thermoreflectance measurements with different transducers results in part from spectrally dependent phonon transmission at the transducer/silicon interface.
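
    The coupled two-temperature diffusion equations underlying such an analysis take the generic form (written here in a general way; the paper's solution is specialized to axisymmetric cylindrical coordinates in an N-layer stack with heat deposited at arbitrary depth)

        \[ C_e \, \frac{\partial T_e}{\partial t} = \nabla \cdot ( k_e \nabla T_e ) - G \, ( T_e - T_p ) + S(\mathbf{r}, t), \]
        \[ C_p \, \frac{\partial T_p}{\partial t} = \nabla \cdot ( k_p \nabla T_p ) + G \, ( T_e - T_p ), \]

    where the subscripts denote the electron and phonon channels and G is the electron-phonon coupling parameter. Allowing the source S to act below the transducer surface is what permits non-surface heat deposition to be modeled.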

  14. Simulation of the contraction of the ventricles in a human heart model including atria and pericardium.

    PubMed

    Fritz, Thomas; Wieners, Christian; Seemann, Gunnar; Steen, Henning; Dössel, Olaf

    2014-06-01

    During the contraction of the ventricles, the ventricles interact with the atria as well as with the pericardium and the surrounding tissue in which the heart is embedded. The atria are stretched, and the atrioventricular plane moves toward the apex. The atrioventricular plane displacement (AVPD) is considered to be a major contributor to the ventricular function, and a reduced AVPD is strongly related to heart failure. At the same time, the epicardium slides almost frictionlessly on the pericardium with permanent contact. Although the interaction between the ventricles, the atria and the pericardium plays an important role in the deformation of the heart, this aspect is usually not considered in computational models. In this work, we present an electromechanical model of the heart, which takes into account the interaction between ventricles, pericardium and atria and allows the AVPD to be reproduced. To solve the contact problem of epicardium and pericardium, a contact handling algorithm based on a penalty formulation was developed, which ensures frictionless and permanent contact. Two simulations of the ventricular contraction were conducted, one with contact handling of pericardium and heart and one without. In the simulation with contact handling, the atria were stretched during the contraction of the ventricles, while, due to the permanent contact with the pericardium, their volume increased. In contrast, in the simulation without the pericardium, the atria were also stretched, but the change in the atrial volume was much smaller. Furthermore, the pericardium reduced the radial contraction of the ventricles and at the same time increased the AVPD.
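
    In a penalty treatment of frictionless contact of the kind described, the contact traction applied to the epicardium is typically proportional to the penetration depth (a generic formulation, not necessarily the paper's exact algorithm):

        \[ \mathbf{t}_c = \kappa \, \max( 0, \, -g_N ) \, \mathbf{n}, \]

    where g_N is the signed normal gap to the pericardium, n the contact normal, and κ the penalty stiffness. Because no tangential traction is applied, the epicardium can slide freely while being held in (approximately) permanent contact.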

  15. Effect of neurosteroids on a model lipid bilayer including cholesterol: An Atomic Force Microscopy study.

    PubMed

    Sacchi, Mattia; Balleza, Daniel; Vena, Giulia; Puia, Giulia; Facci, Paolo; Alessandrini, Andrea

    2015-05-01

    Amphiphilic molecules which have a biological effect on specific membrane proteins could also affect lipid bilayer properties, possibly resulting in a modulation of the overall membrane behavior. In light of this consideration, it is important to study the possible effects of amphiphilic molecules of pharmacological interest on model systems which recapitulate some of the main properties of the biological plasma membranes. In this work we studied the effect of a neurosteroid, Allopregnanolone (3α,5α-tetrahydroprogesterone or Allo), on a model bilayer composed of the ternary lipid mixture DOPC/bSM/chol. We chose ternary mixtures which present, at room temperature, a phase coexistence of liquid ordered (Lo) and liquid disordered (Ld) domains and which reside near a critical point. We found that Allo, which is able to strongly partition in the lipid bilayer, induces a marked increase in the bilayer area and modifies the relative proportion of the two phases favoring the Ld phase. We also found that the neurosteroid shifts the miscibility temperature to higher values in a way similar to what happens when the cholesterol concentration is decreased. Interestingly, an isoform of Allo, isoAllopregnanolone (3β,5α-tetrahydroprogesterone or isoAllo), known to inhibit the effects of Allo on GABAA receptors, has an opposite effect on the bilayer properties.

  16. An Improved Heat Budget Estimation Including Bottom Effects for General Ocean Circulation Models

    NASA Technical Reports Server (NTRS)

    Carder, Kendall; Warrior, Hari; Otis, Daniel; Chen, R. F.

    2001-01-01

    This paper studies the effects of the underwater light field on heat-budget calculations of general ocean circulation models for shallow waters. The presence of a bottom significantly alters the estimated heat budget in shallow waters, which affects the corresponding thermal stratification and hence modifies the circulation. Based on the data collected during the COBOP field experiment near the Bahamas, we have used a one-dimensional turbulence closure model to show the influence of the bottom reflection and absorption on the sea surface temperature field. The water depth has an almost one-to-one correlation with the temperature rise. Varying the bottom albedo by replacing the sea grass bed with a coral sand bottom also has an appreciable effect on the heat budget of the shallow regions. We believe that the differences in the heat budget for the shallow areas will have an influence on the local circulation processes and especially on the evaporative and long-wave heat losses for these areas. The ultimate effects on humidity and cloudiness of the region are expected to be significant as well.

  17. Interpretation of thermoreflectance measurements with a two-temperature model including non-surface heat deposition

    SciTech Connect

    Regner, K. T.; Wei, L. C.; Malen, J. A.

    2015-12-21

    We develop a solution to the two-temperature diffusion equation in axisymmetric cylindrical coordinates to model heat transport in thermoreflectance experiments. Our solution builds upon prior solutions that account for two-channel diffusion in each layer of an N-layered geometry, but adds the ability to deposit heat at any location within each layer. We use this solution to account for non-surface heating in the transducer layer of thermoreflectance experiments that challenge the timescales of electron-phonon coupling. A sensitivity analysis is performed to identify important parameters in the solution and to establish a guideline for when to use the two-temperature model to interpret thermoreflectance data. We then fit broadband frequency domain thermoreflectance (BB-FDTR) measurements of SiO₂ and platinum at a temperature of 300 K with our two-temperature solution to parameterize the gold/chromium transducer layer. We then refit BB-FDTR measurements of silicon and find that accounting for non-equilibrium between electrons and phonons in the gold layer does lessen the previously observed heating frequency dependence reported in Regner et al. [Nat. Commun. 4, 1640 (2013)] but does not completely eliminate it. We perform BB-FDTR experiments on silicon with an aluminum transducer and find limited heating frequency dependence, in agreement with time domain thermoreflectance results. We hypothesize that the discrepancy between thermoreflectance measurements with different transducers results in part from spectrally dependent phonon transmission at the transducer/silicon interface.
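
    For orientation, the two-channel (electron-phonon) diffusion underlying such solutions is commonly written as a pair of coupled equations of the following generic form; the symbols and the in-layer source term are standard textbook notation rather than the authors' exact formulation:

    \[ C_e \frac{\partial T_e}{\partial t} = \nabla\cdot\!\left(\Lambda_e \nabla T_e\right) - g\,(T_e - T_p) + q_e(\mathbf{r},t), \qquad C_p \frac{\partial T_p}{\partial t} = \nabla\cdot\!\left(\Lambda_p \nabla T_p\right) + g\,(T_e - T_p), \]

    where C and Λ are the volumetric heat capacities and thermal conductivities of the electron (e) and phonon (p) channels, g is the electron-phonon coupling parameter, and q_e permits heat deposition at any depth within a layer rather than only at its surface, which is the extension highlighted in this work.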

  18. Modeling radiation dosimetry to predict cognitive outcomes in pediatric patients with CNS embryonal tumors including medulloblastoma

    SciTech Connect

    Merchant, Thomas E. . E-mail: thomas.merchant@stjude.org; Kiehna, Erin N.; Li Chenghong; Shukla, Hemant; Sengupta, Saikat; Xiong Xiaoping; Gajjar, Amar; Mulhern, Raymond K.

    2006-05-01

    Purpose: Model the effects of radiation dosimetry on IQ among pediatric patients with central nervous system (CNS) tumors. Methods and Materials: Pediatric patients with CNS embryonal tumors (n = 39) were prospectively evaluated with serial cognitive testing, before and after treatment with postoperative, risk-adapted craniospinal irradiation (CSI) and conformal primary-site irradiation, followed by chemotherapy. Differential dose-volume data for 5 brain volumes (total brain, supratentorial brain, infratentorial brain, and left and right temporal lobes) were correlated with IQ after surgery and at follow-up by use of linear regression. Results: When the dose distribution was partitioned into 2 levels, both levels had a significantly negative effect on longitudinal IQ across all 5 brain volumes. When the dose distribution was partitioned into 3 levels (low, medium, and high), exposure to the supratentorial brain appeared to have the most significant impact. For most models, each Gy of exposure had a similar effect on IQ decline, regardless of dose level. Conclusions: Our results suggest that radiation dosimetry data from 5 brain volumes can be used to predict decline in longitudinal IQ. Despite measures to reduce radiation dose and treatment volume, the volume that receives the highest dose continues to have the greatest effect, which supports current volume-reduction efforts.
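
    To illustrate the structure of such a dose-volume regression (and nothing more), the short Python sketch below fits longitudinal IQ against exposure-weighted follow-up time for hypothetical low/medium/high dose bins; the array names, bin boundaries and coefficients are invented for the example and are not the authors' data or model:

```python
import numpy as np

# Hypothetical data: one row per IQ assessment.
rng = np.random.default_rng(0)
n = 60
t = rng.uniform(0, 5, n)              # follow-up time since irradiation (years)
dose_low = rng.uniform(0, 20, n)      # Gy delivered to the low-dose region
dose_med = rng.uniform(0, 30, n)      # Gy delivered to the medium-dose region
dose_high = rng.uniform(0, 55, n)     # Gy delivered to the high-dose region
iq = (100
      - t * (0.05 * dose_low + 0.08 * dose_med + 0.10 * dose_high)
      + rng.normal(0, 3, n))          # synthetic outcome

# Longitudinal model: IQ = b0 + (b1*low + b2*med + b3*high) * time,
# i.e. each Gy in each dose bin contributes a per-year rate of IQ change.
X = np.column_stack([np.ones(n), t * dose_low, t * dose_med, t * dose_high])
beta, *_ = np.linalg.lstsq(X, iq, rcond=None)
print("baseline IQ and per-Gy-per-year slopes:", beta.round(3))
```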

  19. Modelling of metal vapour in pulsed TIG including influence of self-absorption

    NASA Astrophysics Data System (ADS)

    Iwao, Toru; Mori, Yusuke; Okubo, Masato; Sakai, Tadashi; Tashiro, Shinichi; Tanaka, Manabu; Yumoto, Motoshige

    2010-11-01

    Pulsed TIG (tungsten inert gas) welding is used to improve the stability and speed of arc welding, and to allow greater control over the heat input to the weld. The temperature and the radiation power density of the pulsed arc vary as a function of time, as does the distribution of metal vapour, and its effects on the arc. A self-consistent two-dimensional model of the arc and electrodes is used to calculate the properties of the arc as a function of time. Self-absorption of radiation is treated by two methods, one taking into account absorption of radiation only within the control volume of emission, and the other taking into account absorption throughout the plasma. The relation between metal vapour and radiation power density is analysed by calculating the iron vapour distribution. The results show that the transport of iron vapour is strongly affected by the fast convective flow during the peak current period. During the base current period, the region containing a low concentration of metal vapour expands because of the low convective flow. The iron vapour distribution does not closely follow the current pulses. The temperature, iron vapour and radiation power density distributions depend on the self-absorption model used. The temperature distribution becomes broader when self-absorption of radiation from all directions is considered.

  20. Three-Dimensional Magnetohydrodynamic Modeling of the Solar Wind Including Pickup Protons and Turbulence Transport

    NASA Technical Reports Server (NTRS)

    Usmanov, Arcadi V.; Goldstein, Melvyn L.; Matthaeus, William H.

    2012-01-01

    To study the effects of interstellar pickup protons and turbulence on the structure and dynamics of the solar wind, we have developed a fully three-dimensional magnetohydrodynamic solar wind model that treats interstellar pickup protons as a separate fluid and incorporates the transport of turbulence and turbulent heating. The governing system of equations combines the mean-field equations for the solar wind plasma, magnetic field, and pickup protons and the turbulence transport equations for the turbulent energy, normalized cross-helicity, and correlation length. The model equations account for photoionization of interstellar hydrogen atoms and their charge exchange with solar wind protons, energy transfer from pickup protons to solar wind protons, and plasma heating by turbulent dissipation. Separate mass and energy equations are used for the solar wind and pickup protons, though a single momentum equation is employed under the assumption that the pickup protons are comoving with the solar wind protons. We compute the global structure of the solar wind plasma, magnetic field, and turbulence in the region from 0.3 to 100 AU for a source magnetic dipole on the Sun tilted by 0 deg to 90 deg and compare our results with Voyager 2 observations. The results computed with and without pickup protons are superposed to evaluate quantitatively the deceleration and heating effects of pickup protons, the overall compression of the magnetic field in the outer heliosphere caused by deceleration, and the weakening of corotating interaction regions by the thermal pressure of pickup protons.

  1. Moonlet induced wakes in planetary rings: Analytical model including eccentric orbits of moon and ring particles

    NASA Astrophysics Data System (ADS)

    Seiß, M.; Spahn, F.; Schmidt, Jürgen

    2010-11-01

    Saturn's rings host two known moons, Pan and Daphnis, which are massive enough to clear circumferential gaps in the ring around their orbits. Both moons create wake patterns at the gap edges by gravitational deflection of the ring material (Cuzzi, J.N., Scargle, J.D. [1985]. Astrophys. J. 292, 276-290; Showalter, M.R., Cuzzi, J.N., Marouf, E.A., Esposito, L.W. [1986]. Icarus 66, 297-323). New Cassini observations revealed that these wavy edges deviate from the sinusoidal waveform, which one would expect from a theory that assumes a circular orbit of the perturbing moon and neglects particle interactions. Resonant perturbations of the edges by moons outside the ring system, as well as an eccentric orbit of the embedded moon, may partly explain this behavior (Porco, C.C., and 34 colleagues [2005]. Science 307, 1226-1236; Tiscareno, M.S., Burns, J.A., Hedman, M.M., Spitale, J.N., Porco, C.C., Murray, C.D., and the Cassini Imaging team [2005]. Bull. Am. Astron. Soc. 37, 767; Weiss, J.W., Porco, C.C., Tiscareno, M.S., Burns, J.A., Dones, L. [2005]. Bull. Am. Astron. Soc. 37, 767; Weiss, J.W., Porco, C.C., Tiscareno, M.S. [2009]. Astron. J. 138, 272-286). Here we present an extended non-collisional streamline model which accounts for both effects. We describe the resulting variations of the density structure and the modification of the nonlinearity parameter q. Furthermore, an estimate is given for the applicability of the model. We use the streamwire model introduced by Stewart (Stewart, G.R. [1991]. Icarus 94, 436-450) to plot the perturbed ring density at the gap edges. We apply our model to the Keeler gap edges undulated by Daphnis and to a faint ringlet in the Encke gap close to the orbit of Pan. The modulations of the latter ringlet, induced by the perturbations of Pan (Burns, J.A., Hedman, M.M., Tiscareno, M.S., Nicholson, P.D., Streetman, B.J., Colwell, J.E., Showalter, M.R., Murray, C.D., Cuzzi, J.N., Porco, C.C., and the Cassini ISS team [2005]. Bull. Am

  2. Configuration-space quantum-soliton model including loss and gain

    NASA Astrophysics Data System (ADS)

    Fini, John M.; Hagelstein, Peter L.; Haus, Hermann A.

    1999-09-01

    We examine the effects of loss and gain on a quantum soliton using a configuration-space approach. A simple microscopic model of local photon-matter interaction is applied to solitons with arbitrary quantum superpositions of momentum. Such a theory is needed in the analysis and design of systems that manipulate the wave function of soliton center-of-mass coordinates. The formalism is tested by calculating the momentum noise induced by loss and gain, and by comparison with the well-known Gordon-Haus calculation [J. P. Gordon and H. A. Haus, Opt. Lett. 11, 665 (1986)]. The comparison provides physical insight and reproduces the old result as a special case.

  3. Parsing recursive sentences with a connectionist model including a neural stack and synaptic gating.

    PubMed

    Fedor, Anna; Ittzés, Péter; Szathmáry, Eörs

    2011-02-21

    It is supposed that humans are genetically predisposed to be able to recognize sequences of context-free grammars with centre-embedded recursion while other primates are restricted to the recognition of finite state grammars with tail-recursion. Our aim was to construct a minimalist neural network that is able to parse artificial sentences of both grammars in an efficient way without using the biologically unrealistic backpropagation algorithm. The core of this network is a neural stack-like memory where the push and pop operations are regulated by synaptic gating on the connections between the layers of the stack. The network correctly categorizes novel sentences of both grammars after training. We suggest that the introduction of the neural stack memory will turn out to be substantial for any biological 'hierarchical processor' and the minimalist design of the model suggests a quest for similar, realistic neural architectures.
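
    To make the stack-with-gating idea concrete, the Python sketch below implements a continuous ("neural") stack in which scalar push and pop gates in [0, 1], which a network could emit, control how much of a new vector is stored and how much of the top is removed. It is an illustrative data structure only, not the authors' architecture or training scheme:

```python
import numpy as np

class ContinuousStack:
    """Stack of (vector, strength) pairs controlled by soft gates in [0, 1]."""

    def __init__(self, dim):
        self.dim = dim
        self.vectors = []    # stored items
        self.strengths = []  # how much of each item is still "present"

    def update(self, push_gate, pop_gate, vec):
        # Pop: remove up to `pop_gate` units of strength, top-down.
        remaining = pop_gate
        for i in reversed(range(len(self.strengths))):
            taken = min(self.strengths[i], remaining)
            self.strengths[i] -= taken
            remaining -= taken
            if remaining <= 0:
                break
        # Push: store the new vector with strength `push_gate`.
        if push_gate > 0:
            self.vectors.append(np.asarray(vec, dtype=float))
            self.strengths.append(float(push_gate))

    def read(self):
        # Read the top: a strength-weighted blend of the most recent
        # items, up to one unit of total strength.
        out, budget = np.zeros(self.dim), 1.0
        for vec, s in zip(reversed(self.vectors), reversed(self.strengths)):
            take = min(s, budget)
            out += take * vec
            budget -= take
            if budget <= 0:
                break
        return out

stack = ContinuousStack(dim=3)
stack.update(push_gate=1.0, pop_gate=0.0, vec=[1, 0, 0])   # push "a"
stack.update(push_gate=1.0, pop_gate=0.0, vec=[0, 1, 0])   # push "b"
stack.update(push_gate=0.0, pop_gate=1.0, vec=[0, 0, 0])   # pop "b"
print(stack.read())   # ~[1, 0, 0]: "a" is back on top
```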

  4. Robust and Adaptive OMR System Including Fuzzy Modeling, Fusion of Musical Rules, and Possible Error Detection

    NASA Astrophysics Data System (ADS)

    Rossant, Florence; Bloch, Isabelle

    2006-12-01

    This paper describes a system for optical music recognition (OMR) in the case of monophonic typeset scores. After clarifying the difficulties specific to this domain, we propose appropriate solutions at both the image analysis and high-level interpretation levels. Thus, a recognition and segmentation method is designed that can deal with common printing defects and numerous symbol interconnections. Then, musical rules are modeled and integrated, in order to make a consistent decision. This high-level interpretation step relies on the fuzzy sets and possibility framework, since it can handle symbol variability, the flexibility and imprecision of music rules, and the merging of all these heterogeneous pieces of information. Other innovative features are the indication of potential errors and the possibility of applying learning procedures, in order to gain robustness. Experiments conducted on a large database show that the proposed method constitutes an interesting contribution to OMR.

  5. Energy-based fatigue model for shape memory alloys including thermomechanical coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong

    2016-03-01

    This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well.

  6. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    NASA Astrophysics Data System (ADS)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Considering those errors explicitly, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out, where radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from radar data. Comparing the quality of areal rainfall estimation by RCs with that by rain gauges and with the reference data helps to investigate the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the results show that the RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, when larger uncertainties were tested for the RCs, they were found to remain useful up to a certain level for areal rainfall estimation and discharge simulation.

  7. ECO: A Generic Eutrophication Model Including Comprehensive Sediment-Water Interaction

    PubMed Central

    Smits, Johannes G. C.; van Beek, Jan K. L.

    2013-01-01

    The content and calibration of the comprehensive generic 3D eutrophication model ECO for water and sediment quality is presented. Based on a computational grid for water and sediment, ECO is used as a tool for water quality management to simulate concentrations and mass fluxes of nutrients (N, P, Si), phytoplankton species, detrital organic matter, electron acceptors and related substances. ECO combines integral simulation of water and sediment quality with sediment diagenesis and closed mass balances. Its advanced process formulations for substances in the water column and the bed sediment were developed to allow for a much more dynamic calculation of the sediment-water exchange fluxes of nutrients as resulting from steep concentration gradients across the sediment-water interface than is possible with other eutrophication models. ECO is thereby intended to calculate the accumulation of organic matter and nutrients in the sediment more accurately, and to allow for more accurate prediction of phytoplankton biomass and water quality in response to mitigative measures such as nutrient load reduction. ECO was calibrated for shallow Lake Veluwe (The Netherlands). Due to restoration measures this lake underwent a transition from hypertrophic conditions to moderately eutrophic conditions, leading to the extensive colonization by submerged macrophytes. ECO reproduces observed water quality well for the transition period of ten years. The values of its process coefficients are in line with ranges derived from literature. ECO’s calculation results underline the importance of redox processes and phosphate speciation for the nutrient return fluxes. Among other things, the results suggest that authigenic formation of a stable apatite-like mineral in the sediment can contribute significantly to oligotrophication of a lake after a phosphorus load reduction. PMID:23844160

  8. ECO: a generic eutrophication model including comprehensive sediment-water interaction.

    PubMed

    Smits, Johannes G C; van Beek, Jan K L

    2013-01-01

    The content and calibration of the comprehensive generic 3D eutrophication model ECO for water and sediment quality is presented. Based on a computational grid for water and sediment, ECO is used as a tool for water quality management to simulate concentrations and mass fluxes of nutrients (N, P, Si), phytoplankton species, detrital organic matter, electron acceptors and related substances. ECO combines integral simulation of water and sediment quality with sediment diagenesis and closed mass balances. Its advanced process formulations for substances in the water column and the bed sediment were developed to allow for a much more dynamic calculation of the sediment-water exchange fluxes of nutrients as resulting from steep concentration gradients across the sediment-water interface than is possible with other eutrophication models. ECO is thereby intended to calculate the accumulation of organic matter and nutrients in the sediment more accurately, and to allow for more accurate prediction of phytoplankton biomass and water quality in response to mitigative measures such as nutrient load reduction. ECO was calibrated for shallow Lake Veluwe (The Netherlands). Due to restoration measures this lake underwent a transition from hypertrophic conditions to moderately eutrophic conditions, leading to the extensive colonization by submerged macrophytes. ECO reproduces observed water quality well for the transition period of ten years. The values of its process coefficients are in line with ranges derived from literature. ECO's calculation results underline the importance of redox processes and phosphate speciation for the nutrient return fluxes. Among other things, the results suggest that authigenic formation of a stable apatite-like mineral in the sediment can contribute significantly to oligotrophication of a lake after a phosphorus load reduction.

  9. Dynamic modeling of slow-light in a semiconductor optical amplifier including the effects of forced coherent population oscillations by bias current modulation

    NASA Astrophysics Data System (ADS)

    Connelly, M. J.

    2014-05-01

    The slow light effect in SOAs has many applications in microwave photonics such as phase shifting and filtering. Models are needed to predict slow light in SOAs and its dependence on the bias current, optical power and modulation index. In this paper we predict the slow light characteristics of a tensile-strained SOA by using a detailed time-domain model. The model includes full band-structure based calculations of the material gain, bimolecular recombination and spontaneous emission, a carrier density rate equation and travelling wave equations for the input signal and amplified spontaneous emission. The slow light effect is caused by coherent population oscillations, whereby beating between the spectral components of an amplitude modulated lightwave causes carrier density oscillations at the beat frequency, leading to changes in the group velocity. The resulting beat signal at the SOA output, after photodetection, is phase shifted relative to the SOA input beat signal. The phase shift can be adjusted by controlling the optical power and bias current. However, the beat signal gain is low at low frequencies, leading to a poor output signal-to-noise ratio for the beat signal. If the optical input and SOA drive current are simultaneously modulated, this leads to forced population oscillations that greatly enhance the low frequency beat signal gain. The model is used to determine the improvement in gain and phase response and its dependency on the optical power, bias current and modulation index. Model predictions show good agreement with experimental trends reported in the literature.
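
    The coherent-population-oscillation mechanism can be seen already in the generic single-mode carrier density rate equation, reproduced below in a textbook form (not the full band-structure model used in the paper); modulating the photon density S(t) and the bias current I(t) together forces the carrier density N to oscillate at the beat frequency:

    \[ \frac{dN}{dt} = \frac{I(t)}{qV} - R(N) - v_g\, g(N)\, S(t), \]

    where q is the electron charge, V the active-region volume, R(N) the total recombination rate, v_g the group velocity and g(N) the material gain. The oscillating N modulates g(N) and the refractive index, which shifts the phase of the detected beat signal; adding a synchronized current modulation (the forced population oscillation) strengthens the oscillation of N and hence the low-frequency beat-signal gain.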

  10. Noninvasive model including right ventricular speckle tracking for the evaluation of pulmonary hypertension

    PubMed Central

    Mahran, Yossra; Schueler, Robert; Weber, Marcel; Pizarro, Carmen; Nickenig, Georg; Skowasch, Dirk; Hammerstingl, Christoph

    2016-01-01

    AIM To find parameters from transthoracic echocardiography (TTE) including speckle-tracking (ST) analysis of the right ventricle (RV) to identify precapillary pulmonary hypertension (PH). METHODS Forty-four patients with suspected PH undergoing right heart catheterization (RHC) were consecutively included (mean age 63.1 ± 14 years, 61% male gender). All patients underwent standardized TTE including ST analysis of the RV. Based on the subsequent TTE-derived measurements, the presence of PH was assessed: Left ventricular ejection fraction (LVEF) was calculated by Simpson's rule from 4Ch. Systolic pulmonary artery pressure (sPAP) was assessed with continuous wave Doppler of systolic tricuspid regurgitant velocity and regarded as raised at values ≥ 30 mmHg as a surrogate parameter for RA pressure. A concomitantly elevated PCWP was considered a means to discriminate between the precapillary and postcapillary form of PH. PCWP was considered elevated when the E/e’ ratio was > 12 as a surrogate for LV diastolic pressure. E/e’ ratio was measured by gauging systolic and diastolic velocities of the lateral and septal mitral valve annulus using TDI mode. The results were then averaged with conventional measurement of mitral valve inflow. Furthermore, functional testing with six-minute walking distance (6MWD), ECG-RV stress signs, NT pro-BNP and other laboratory values was assessed. RESULTS PH was confirmed in 34 patients (precapillary PH, n = 15, postcapillary PH, n = 19). TTE showed significant differences in E/e’ ratio (precapillary PH: 12.3 ± 4.4, postcapillary PH: 17.3 ± 10.3, no PH: 12.1 ± 4.5, P = 0.02), LV volumes (ESV: 25.0 ± 15.0 mL, 49.9 ± 29.5 mL, 32.2 ± 13.6 mL, P = 0.027; EDV: 73.6 ± 24.0 mL, 110.6 ± 31.8 mL, 87.8 ± 33.0 mL, P = 0.021) and systolic pulmonary arterial pressure (sPAP: 61.2 ± 22.3 mmHg, 53.6 ± 20.1 mmHg, 31.2 ± 24.6 mmHg, P = 0.001). STRV analysis showed significant differences for apical RV longitudinal strain (RVAS: -7.5% ± 5

  12. Development of a new fertility prediction model for stallion semen, including flow cytometry.

    PubMed

    Barrier Battut, I; Kempfer, A; Becker, J; Lebailly, L; Camugli, S; Chevrier, L

    2016-09-01

    Several laboratories routinely use flow cytometry to evaluate stallion semen quality. However, objective and practical tools for the on-field interpretation of data concerning fertilizing potential are scarce. A panel of nine tests, evaluating a large number of compartments or functions of the spermatozoa: motility, morphology, viability, mitochondrial activity, oxidation level, acrosome integrity, DNA integrity, "organization" of the plasma membrane, and hypoosmotic resistance, was applied to a population of 43 stallions, 33 of which showed widely differing fertilities (19%-84% pregnancy rate per cycle [PRC]). Analyses were performed either within 2 hours after semen collection or after 24-hour storage at 4 °C in INRA96 extender, on three to six ejaculates for each stallion. The aim was to provide data on the distribution of values among said population, showing within-stallion and between-stallion variability, and to determine whether appropriate combinations of tests could evaluate the fertilizing potential of each stallion. Within-stallion repeatability, defined as intrastallion correlation (r = between-stallion variance/total variance), ranged between 0.29 and 0.84 for "conventional" variables (viability, morphology, and motility), and between 0.15 and 0.81 for "cytometric" variables. Those data suggested that analyzing six ejaculates would be adequate to characterize a stallion. For most variables, except those related to DNA integrity and some motility variables, results differed significantly between immediately performed analyses and analyses performed after 24 hours at 4 °C. Two "best-fit" combinations of variables were determined. Factorial discriminant analysis using a first combination of seven variables, including the polarization of mitochondria, acrosome integrity, DNA integrity, and hypoosmotic resistance, permitted exact determination of the fertility group for each stallion: fertile, that is, PRC higher than 55%; intermediate, that is, 45

  13. Applying the Transactional Stress and Coping Model to Sickle Cell Disorder and Insulin-Dependent Diabetes Mellitus: Identifying Psychosocial Variables Related to Adjustment and Intervention

    ERIC Educational Resources Information Center

    Hocking, Matthew C.; Lochman, John E.

    2005-01-01

    This review paper examines the literature on psychosocial factors associated with adjustment to sickle cell disease and insulin-dependent diabetes mellitus in children through the framework of the transactional stress and coping (TSC) model. The transactional stress and coping model views adaptation to a childhood chronic illness as mediated by…

  14. Analysis of Prey-Predator Three Species Fishery Model with Harvesting Including Prey Refuge and Migration

    NASA Astrophysics Data System (ADS)

    Roy, Sankar Kumar; Roy, Banani

    In this article, a prey-predator system with a Holling type II functional response for the predator population and including a prey refuge region has been analyzed. A harvesting effort has also been considered for the predator population. Density-dependent mortality rates for the prey, predator and super predator have been considered. The equilibria of the proposed system have been determined. Local and global stabilities of the system have been discussed. We have used an analytic approach to derive the global asymptotic stability of the system. The maximal predator per capita consumption rate has been considered as a bifurcation parameter to evaluate Hopf bifurcation in the neighborhood of the interior equilibrium point. We have also used the fishing effort applied to harvest the predator population as a control, developing a dynamic framework to investigate the optimal utilization of the resource, the sustainability properties of the stock and the resource rent earned from the resource. Finally, we have presented numerical simulations to verify the analytic results, and the system has been analyzed through graphical illustrations.
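
    As an illustration of the kind of system analyzed, the Python sketch below integrates a reduced two-species version with a Holling type II response, a constant-proportion prey refuge m, density-dependent predator mortality and a harvesting effort E on the predator; the parameter values are arbitrary, and the full model of the article additionally includes a super predator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (not taken from the article)
r, K = 1.0, 10.0        # prey growth rate and carrying capacity
a, h = 0.8, 0.5         # attack rate and handling time (Holling type II)
m = 0.3                 # fraction of prey hidden in the refuge
e, d1 = 0.6, 0.1        # conversion efficiency, predator linear mortality
delta = 0.05            # density-dependent predator mortality
q, E = 0.4, 0.2         # catchability and harvesting effort on the predator

def rhs(t, z):
    x, y = z                                       # prey, predator
    avail = (1.0 - m) * x                          # prey outside the refuge
    response = a * avail / (1.0 + a * h * avail)   # Holling type II
    dx = r * x * (1.0 - x / K) - response * y
    dy = e * response * y - d1 * y - delta * y**2 - q * E * y
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 2.0], rtol=1e-8)
print("state at t = 200:", sol.y[:, -1])   # approach to a coexistence equilibrium
```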

  15. Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model

    SciTech Connect

    Steven R. Sherman

    2007-05-01

    The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.

  16. Vertical motions in Northern Victoria Land inferred from GPS: A comparison with a glacial isostatic adjustment model

    USGS Publications Warehouse

    Mancini, F.; Negusini, M.; Zanutta, A.; Capra, A.

    2007-01-01

    Following the densification of GPS permanent and episodic trackers in Antarctica, geodetic observations are playing an increasing role in geodynamics research and the study of the glacial isostatic adjustment (GIA). The improvement in the accuracy of geodetic measurements suggests their use in constraining GIA models. It is essential to have a deeper knowledge of the sensitivity of GPS data to motions related to long-term ice mass changes and the present-day mass imbalance of the ice sheets. In order to investigate the geodynamic phenomena in Northern Victoria Land (NVL), GPS geodetic observations were made during the last decade within the VLNDEF (Victoria Land Network for Deformation control) project. The processed data provided a highly accurate picture of the motions occurring in NVL and depict, for the whole period, a well defined pattern of vertical motion. The comparison between GPS-derived vertical displacements and the GIA model is addressed, showing a good degree of agreement and highlighting the future use of geodetic GPS measurements as constraints in GIA models. In spite of this agreement, the sensitivity of GPS vertical rates to non-GIA vertical motions has to be carefully evaluated.

  17. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
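
    The core computation described (weighted normal equations solved for coordinate shifts) can be sketched in a few lines; the design matrix linearizes the direction, azimuth and distance observation equations about the preliminary positions, and the function and variable names below are illustrative, not taken from the program:

```python
import numpy as np

def adjust(A, misclosures, weights):
    """One iteration of a weighted least-squares coordinate adjustment.

    A           : (n_obs, 2 * n_new_stations) design matrix of partial
                  derivatives of the observation equations
    misclosures : observed-minus-computed values, one per observation
    weights     : weight assigned to each observation
    Returns the coordinate shifts and the post-fit residuals.
    """
    W = np.diag(weights)
    N = A.T @ W @ A                  # normal equations
    u = A.T @ W @ misclosures
    shifts = np.linalg.solve(N, u)   # corrections to the preliminary positions
    residuals = misclosures - A @ shifts
    return shifts, residuals
```

    Applying the returned shifts to the preliminary geodetic positions and then recomputing final azimuths and distances mirrors the program flow described above.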

  18. Parental Expressivity, Child Physiological and Behavioral Regulation, and Child Adjustment: Testing a Three-Path Mediation Model

    ERIC Educational Resources Information Center

    Liew, Jeffrey; Johnson, Audrea Y.; Smith, Tracy R.; Thoemmes, Felix

    2011-01-01

    Research Findings: Parental expressivity, child physiological regulation (indexed by respiratory sinus arrhythmia suppression), child behavioral regulation, and child adjustment outcomes were examined in 45 children (M age = 4.32 years, SD = 1.30) and their parents. With the exception of child adjustment (i.e., internalizing and externalizing…

  19. Models of Traumatic Experiences and Children's Psychological Adjustment: The Roles of Perceived Parenting and the Children's Own Resources and Activity.

    ERIC Educational Resources Information Center

    Punamaki, Raija-Leena; Qouta, Samir; El Sarraj, Eyad

    1997-01-01

    Used path analysis to examine relations between trauma, perceived parenting, resources, political activity, and adjustment in Palestinian 11- and 12-year olds. Found that the more trauma experienced, the more negative parenting the children experienced, the more political activity they showed, and the more they suffered from adjustment problems.…

  20. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data, including cases in which two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes a stretched exponential, and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and, when appropriate, a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
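
    A hedged sketch of the comparison described: fit a gamma-variate breath-excretion curve and a stretched-exponential extension to the same data and rank them by AIC. The functional forms and variable names below are common choices for such curves, not necessarily the authors' exact parameterization, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, a, b, c):
    return a * t**b * np.exp(-t / c)

def gamma_stretched(t, a, b, c, d):
    # gamma variate with a stretched-exponential tail
    return a * t**b * np.exp(-(t / c)**d)

def aic(y, yhat, n_params):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# t: sample times (min); y: synthetic breath-test excretion rate
t = np.linspace(1, 240, 60)
y = gamma_stretched(t, 0.02, 1.4, 60.0, 0.8) \
    + np.random.default_rng(1).normal(0, 1e-4, t.size)

p1, _ = curve_fit(gamma_variate, t, y, p0=[0.02, 1.0, 60.0], maxfev=10000)
p2, _ = curve_fit(gamma_stretched, t, y, p0=[0.02, 1.0, 60.0, 1.0], maxfev=10000)
print("AIC gamma variate :", aic(y, gamma_variate(t, *p1), 3))
print("AIC stretched     :", aic(y, gamma_stretched(t, *p2), 4))  # lower is better
```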

  1. Direct radiative effect modeled for regional aerosols in central Europe including the effect of relative humidity

    NASA Astrophysics Data System (ADS)

    Iorga, G.; Hitzenberger, R.; Kasper-Giebl, A.; Puxbaum, Hans

    2007-01-01

    In view of both the climatic relevance of aerosols and the fact that aerosol burdens in central Europe are heavily impacted by anthropogenic sources, this study is focused on estimating the regional-scale direct radiative effect of aerosols in Austria. The aerosol data (over 80 samples in total) were collected during measurement campaigns at five sampling sites: the urban areas of Vienna, Linz, and Graz and on Mt. Rax (1644 m, regional background aerosol) and Mt. Sonnblick (3106 m, background aerosol). Aerosol mass size distributions were obtained with eight-stage (size range: 0.06-16 μm diameter) and six-stage (size range 0.1-10 μm) low-pressure cascade impactors. The size-segregated samples were analyzed for total carbon (TC), black carbon (BC), and inorganic ions. The aerosol at these five locations is compared in terms of size distributions, optical properties, and direct forcing. Mie calculations are performed for the dry aerosol at 60 wavelengths in the range 0.3-40 μm. Using mass growth factors determined earlier, the optical properties are also estimated for higher relative humidities (60%, 70%, 80%, and 90%). A box model was used to estimate direct radiative forcing (DRF). The presence of absorbing species (BC) was found to reduce the cooling effect of the aerosols. The water-soluble substances dominate radiative forcing at the urban sites, while on Rax and Sonnblick BC plays the most important role. This result can be explained by the effect of the surface albedo, which is much lower in the urban regions (0.16) than at the ice and snow-covered mountain sites. Shortwave (below 4 μm) and longwave surface albedo values for ice were 0.35 and 0.5, while for snow surface albedo, values of 0.8 (shortwave) and 0.5 (longwave) were used. In the case of dry aerosol, especially for urban sites, the unidentified material may contribute a large part to the forcing. Depending on the sampling site the estimated forcing gets more negative with increasing humidity

  2. Mechanisms Determining the Atlantic Thermohaline Circulation Response to Greenhouse Gas Forcing in a Non-Flux-Adjusted Coupled Climate Model.

    NASA Astrophysics Data System (ADS)

    Thorpe, R. B.; Gregory, J. M.; Johns, T. C.; Wood, R. A.; Mitchell, J. F. B.

    2001-07-01

    Models of the North Atlantic thermohaline circulation (THC) show a range of responses to the high-latitude warming and freshening characteristic of global warming scenarios. Most simulate a weakening of the THC, with some suggesting possible interruption of the circulation, but others exhibit little change. The mechanisms of the THC response to climate change were studied using the HadCM3 coupled ocean-atmosphere general circulation model, which gives a good simulation of the present-day THC and does not require flux adjustment. In a range of climate change simulations, the strength of the THC in HadCM3 is proportional to the meridional gradient of steric height (equivalent to column-integrated density) between 30°S and 60°N. During an integration in which CO2 increases at 2% per year for 70 yr, the THC weakens by about 20%, and it stabilizes at this level if the CO2 is subsequently held constant. Changes in surface heat and water fluxes are the cause of the reduction in the steric height gradient that drives the THC weakening, 60% being due to temperature change (greater warming at high latitudes) and 40% to salinity change (decreasing at high latitude, increasing at low latitude). The level at which the THC stabilizes is determined by advective feedbacks. As the circulation slows down, less heat is advected northward, which counteracts the in situ warming. At the same time, northward salinity advection increases because of a strong increase in salinity in the subtropical Atlantic, due to a greater atmospheric export of freshwater from the Atlantic to the Pacific. This change in interbasin transport means that salinity effects stabilize the circulation, in contrast to a single basin model of the THC, where salinity effects are destabilizing. These results suggest that the response of the Atlantic THC to anthropogenic forcing may be partly determined by events occurring outside the Atlantic basin.

  3. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    SciTech Connect

    Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.

    2012-03-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman (LKB) and Relative Seriality (RS)) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As a second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, the rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two endpoints. Conclusions: Comparable
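
    For reference, the LKB model fitted here is conventionally written as follows (a standard textbook statement; clinical factors typically enter as a dose-modifying shift of D50, though the exact way the authors included them is not specified in this abstract):

    \[ \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t} e^{-x^2/2}\,dx, \qquad t = \frac{\mathrm{gEUD} - D_{50}}{m\, D_{50}}, \qquad \mathrm{gEUD} = \Bigl(\sum_i v_i D_i^{1/n}\Bigr)^{n}, \]

    where v_i is the fractional organ (here rectal or anal wall) volume receiving dose D_i, n is the volume-effect parameter, m controls the steepness of the dose response, and D50 is the uniform dose giving a 50% complication probability.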

  4. Political Violence and Child Adjustment in Northern Ireland: Testing Pathways in a Social-Ecological Model Including Single- and Two-Parent Families

    ERIC Educational Resources Information Center

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including…

  5. Adjustment disorder

    MedlinePlus

    ... the event may become too much for you. Stressors for people of any age include: Death of ... the following: The symptoms clearly come after a stressor, most often within 3 months The symptoms are ...

  6. Holocene sea-level changes along the North Carolina Coastline and their implications for glacial isostatic adjustment models

    USGS Publications Warehouse

    Horton, B.P.; Peltier, W.R.; Culver, S.J.; Drummond, R.; Engelhart, S.E.; Kemp, A.C.; Mallinson, D.; Thieler, E.R.; Riggs, S.R.; Ames, D.V.; Thomson, K.H.

    2009-01-01

    We have synthesized new and existing relative sea-level (RSL) data to produce a quality-controlled, spatially comprehensive database from the North Carolina coastline. The RSL database consists of 54 sea-level index points that are quantitatively related to an appropriate tide level and assigned an error estimate, and a further 33 limiting dates that confine the maximum and minimum elevations of RSL. The temporal distribution of the index points is very uneven with only five index points older than 4000 cal a BP, but the form of the Holocene sea-level trend is constrained by both terrestrial and marine limiting dates. The data illustrate RSL rapidly rising during the early and mid Holocene from an observed elevation of -35.7 ± 1.1 m MSL at 11062-10576 cal a BP to -4.2 m ± 0.4 m MSL at 4240-3592 cal a BP. We restricted comparisons between observations and predictions from the ICE-5G(VM2) with rotational feedback Glacial Isostatic Adjustment (GIA) model to the Late Holocene RSL (last 4000 cal a BP) because of the wealth of sea-level data during this time interval. The ICE-5G(VM2) model predicts significant spatial variations in RSL across North Carolina, thus we subdivided the observations into two regions. The model forecasts an increase in the rate of sea-level rise in Region 1 (Albemarle, Currituck, Roanoke, Croatan, and northern Pamlico sounds) compared to Region 2 (southern Pamlico, Core and Bogue sounds, and farther south to Wilmington). The observations show Late Holocene sea-level rising at 1.14 ± 0.03 mm year-1 and 0.82 ± 0.02 mm year-1 in Regions 1 and 2, respectively. The ICE-5G(VM2) predictions capture the general temporal trend of the observations, although there is an apparent misfit for index points older than 2000 cal a BP. It is presently unknown whether these misfits are caused by possible tectonic uplift associated with the mid-Carolina Platform High or a flaw in the GIA model. A comparison of local tide gauge data with the Late Holocene RSL

  7. Mathematical Model of Two Phase Flow in Natural Draft Wet-Cooling Tower Including Flue Gas Injection

    NASA Astrophysics Data System (ADS)

    Hyhlík, Tomáš

    2016-03-01

    The previously developed model of natural draft wet-cooling tower flow, heat and mass transfer is extended to take into account the flow of supersaturated moist air. The two-phase flow model is based on the void fraction of the gas phase, which is included in the governing equations. A homogeneous equilibrium model, in which the two phases are well mixed and have the same velocity, is used. The effect of flue gas injection is included in the developed mathematical model by using source terms in the governing equations and by using a momentum flux coefficient and a kinetic energy flux coefficient. Heat and mass transfer in the fill zone is described by a system of ordinary differential equations, where the mass transfer is represented by the measured fill Merkel number and the heat transfer is calculated using a prescribed Lewis factor.
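
    In a homogeneous equilibrium description the two phases share one velocity field, so the governing equations are written for mixture properties built from the gas void fraction α. A generic statement of the mixture density and mixture continuity equation (the symbols are standard two-phase notation chosen for illustration, not the paper's exact formulation) is:

    \[ \rho_m = \alpha\,\rho_g + (1-\alpha)\,\rho_l, \qquad \frac{\partial \rho_m}{\partial t} + \nabla\cdot\bigl(\rho_m \mathbf{u}\bigr) = S_m, \]

    where ρ_g and ρ_l are the densities of the moist air and of the condensed liquid phase, u is the common velocity, and S_m collects mass sources such as the injected flue gas and the water evaporated in the fill zone.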

  8. A Test of the Family Stress Model on Toddler-Aged Children's Adjustment among Hurricane Katrina Impacted and Nonimpacted Low-Income Families

    ERIC Educational Resources Information Center

    Scaramella, Laura V.; Sohr-Preston, Sara L.; Callahan, Kristin L.; Mirabile, Scott P.

    2008-01-01

    Hurricane Katrina dramatically altered the level of social and environmental stressors for the residents of the New Orleans area. The Family Stress Model describes a process whereby felt financial strain undermines parents' mental health, the quality of family relationships, and child adjustment. Our study considered the extent to which the Family…

  9. Modeling rear-end collisions including the role of driver's visibility and light truck vehicles using a nested logit structure.

    PubMed

    Abdel-Aty, Mohamed; Abdelwahab, Hassan

    2004-05-01

    This paper presents an analysis of the effect of the geometric incompatibility of light truck vehicles (LTV)--light-duty trucks, vans, and sport utility vehicles--on drivers' visibility of other passenger cars involved in rear-end collisions. The geometric incompatibility arises from the fact that most LTVs ride higher and are wider than regular passenger cars. The objective of this paper is to explore the effect of the lead vehicle's size on the rear-end crash configuration. Four rear-end crash configurations are defined based on the type of the two involved vehicles (lead and following vehicles). Nested logit models were calibrated to estimate the probabilities of the four rear-end crash configurations as a function of driver's age, gender, vehicle type, vehicle maneuver, light conditions, driver's visibility and speed. Results showed that driver's visibility and inattention in the following (striker) vehicle have the largest effect on being involved in a rear-end collision of configuration CarTrk (a regular passenger car striking an LTV), possibly indicating a sight distance problem. A driver of a smaller car following an LTV has a problem seeing the roadway beyond the LTV and therefore would not be able to adjust his/her speed accordingly, increasing the probability of a rear-end collision. Also, the probability of a CarTrk rear-end crash increases in the case that the lead vehicle stops suddenly. PMID:15003590
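
    The nested logit structure referred to can be summarized by the standard two-level choice probability; this is the textbook form, with the four crash configurations grouped into nests (for instance by lead-vehicle type, an assumed grouping for illustration):

    \[ P(i) = P(i \mid B_k)\,P(B_k), \qquad P(i \mid B_k) = \frac{e^{V_i/\lambda_k}}{\sum_{j \in B_k} e^{V_j/\lambda_k}}, \qquad P(B_k) = \frac{e^{\lambda_k I_k}}{\sum_{m} e^{\lambda_m I_m}}, \qquad I_k = \ln \sum_{j \in B_k} e^{V_j/\lambda_k}, \]

    where V_i is the systematic utility of configuration i (here a function of driver age, gender, vehicle type and maneuver, light conditions, visibility and speed), B_k is the nest containing i, I_k is its inclusive value and λ_k the associated logsum (scale) parameter.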

  10. CERAMIC: Case-Control Association Testing in Samples with Related Individuals, Based on Retrospective Mixed Model Analysis with Adjustment for Covariates

    PubMed Central

    Zhong, Sheng; McPeek, Mary Sara

    2016-01-01

    We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM) approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype), because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genomewide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D) from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan. PMID:27695091

  11. Modeling of Turbulent Boundary Layer Surface Pressure Fluctuation Auto and Cross Spectra - Verification and Adjustments Based on TU-144LL Data

    NASA Technical Reports Server (NTRS)

    Rackl, Robert; Weston, Adam

    2005-01-01

    The literature on turbulent boundary layer pressure fluctuations provides several empirical models which were compared to the measured TU-144 data. The Efimtsov model showed the best agreement. Adjustments were made to improve its agreement further, consisting of the addition of a broad band peak in the mid frequencies, and a minor modification to the high frequency rolloff. The adjusted Efimtsov predicted and measured results are compared for both subsonic and supersonic flight conditions. Measurements in the forward and middle portions of the fuselage have better agreement with the model than those from the aft portion. For High Speed Civil Transport supersonic cruise, interior levels predicted by use of this model are expected to increase by 1-3 dB due to the adjustments to the Efimtsov model. The space-time cross-correlations and cross-spectra of the fluctuating surface pressure were also investigated. This analysis is an important ingredient in structural acoustic models of aircraft interior noise. Once again the measured data were compared to the predicted levels from the Efimtsov model.

  12. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 1. [thermal analyzer manual

    NASA Technical Reports Server (NTRS)

    Lee, H. P.

    1977-01-01

    The NASTRAN Thermal Analyzer Manual describes the fundamental and theoretical treatment of the finite element method, with emphasis on the derivations of the constituent matrices of different elements and solution algorithms. Necessary information and data relating to the practical applications of engineering modeling are included.

  13. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  14. Model Selection and Evaluation Based on Emerging Infectious Disease Data Sets including A/H1N1 and Ebola

    PubMed Central

    Liu, Wendi; Tang, Sanyi; Xiao, Yanni

    2015-01-01

    The aim of the present study is to apply simple ODE models in the area of modeling the spread of emerging infectious diseases and show the importance of model selection in estimating parameters, the basic reproduction number, turning point, and final size. To quantify the plausibility of each model, given the data and the set of four models including Logistic, Gompertz, Rosenzweg, and Richards models, the Bayes factors are calculated and the precise estimates of the best fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI (1.3506, 1.3537)), 1.2101 (95% CI (1.2084, 1.2119)), 3.0234 (95% CI (2.6063, 3.4881)), and 1.9018 (95% CI (1.8565, 1.9478)), the turning points are November 7, November 17, October 2, and November 3, 2014, and the final sizes until December 2015 are 25794 (95% CI (25630, 25958)), 3916 (95% CI (3865, 3967)), 9886 (95% CI (9740, 10031)), and 12633 (95% CI (12515, 12750)) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing the emerging infectious diseases, and arbitrarily picking a model without any consideration of alternatives is problematic. PMID:26451161
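
    A minimal sketch of the model-comparison step in Python: fit two of the candidate growth curves to a cumulative case series and rank them (the paper uses Bayes factors, which require priors; AIC is used here only as a simpler stand-in). The parameterizations are common textbook forms and the data are synthetic, so nothing below reproduces the paper's estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, tm):
    return K / (1.0 + np.exp(-r * (t - tm)))

def gompertz(t, K, r, tm):
    return K * np.exp(-np.exp(-r * (t - tm)))

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

t = np.arange(0, 60, dtype=float)   # days since outbreak start (synthetic)
y = logistic(t, 25000, 0.15, 30) + np.random.default_rng(2).normal(0, 200, t.size)

for name, f in [("logistic", logistic), ("gompertz", gompertz)]:
    p, _ = curve_fit(f, t, y, p0=[y.max(), 0.1, t.mean()], maxfev=20000)
    K, r, tm = p
    print(f"{name:9s} AIC={aic(y, f(t, *p), 3):8.1f}  "
          f"final size K={K:8.0f}  turning point t={tm:5.1f}")
```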

  15. Model Selection and Evaluation Based on Emerging Infectious Disease Data Sets including A/H1N1 and Ebola.

    PubMed

    Liu, Wendi; Tang, Sanyi; Xiao, Yanni

    2015-01-01

    The aim of the present study is to apply simple ODE models in the area of modeling the spread of emerging infectious diseases and show the importance of model selection in estimating parameters, the basic reproduction number, turning point, and final size. To quantify the plausibility of each model, given the data and the set of four models including the Logistic, Gompertz, Rosenzweig, and Richards models, the Bayes factors are calculated and precise estimates of the best-fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI (1.3506, 1.3537)), 1.2101 (95% CI (1.2084, 1.2119)), 3.0234 (95% CI (2.6063, 3.4881)), and 1.9018 (95% CI (1.8565, 1.9478)), the turning points are November 7, November 17, October 2, and November 3, 2014, and the final sizes until December 2015 are 25794 (95% CI (25630, 25958)), 3916 (95% CI (3865, 3967)), 9886 (95% CI (9740, 10031)), and 12633 (95% CI (12515, 12750)) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing the emerging infectious diseases, and arbitrarily picking a model without any consideration of alternatives is problematic. PMID:26451161

  16. Mathematical multi-scale model of the cardiovascular system including mitral valve dynamics. Application to ischemic mitral insufficiency

    PubMed Central

    2011-01-01

    Background: Valve dysfunction is a common cardiovascular pathology. Despite significant clinical research, there is little formal study of how valve dysfunction affects overall circulatory dynamics. Validated models would offer the ability to better understand these dynamics and thus optimize diagnosis, as well as surgical and other interventions. Methods: A cardiovascular and circulatory system (CVS) model has already been validated in silico, and in several animal model studies. It accounts for valve dynamics using Heaviside functions to simulate a physiologically accurate "open on pressure, close on flow" law. However, it does not consider real-time valve opening dynamics and therefore does not fully capture valve dysfunction, particularly where the dysfunction involves partial closure. This research describes an updated version of this previous closed-loop CVS model that includes the progressive opening of the mitral valve, and is defined over the full cardiac cycle. Results: Simulations of the cardiovascular system with a healthy mitral valve are performed, and the global hemodynamic behaviour is studied and compared with previously validated results. The error between the resulting pressure-volume (PV) loops of the already validated CVS model and the new CVS model that includes the progressive opening of the mitral valve is assessed and remains within typical measurement error and variability. Simulations of ischemic mitral insufficiency are also performed. Pressure-volume loops, transmitral flow evolution and mitral valve aperture area evolution follow reported measurements in shape, amplitude and trends. Conclusions: The resulting cardiovascular system model including mitral valve dynamics provides a foundation for clinical validation and the study of valvular dysfunction in vivo. The overall models and results could readily be generalised to other cardiac valves. PMID:21942971
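
    A minimal sketch of the idea of progressive valve opening, assuming a first-order opening law gated by the atrio-ventricular pressure gradient; the function, parameters and units are hypothetical and are not taken from the validated CVS model itself.

```python
import numpy as np

def mitral_valve_step(p_la, p_lv, opening, dt,
                      tau_open=0.03, tau_close=0.02, k_valve=400.0):
    """Advance a normalised valve opening state (0 = closed, 1 = fully open) with
    first-order dynamics driven by the atrio-ventricular pressure gradient, and
    return the transmitral flow.  All parameters and units are illustrative."""
    dp = p_la - p_lv
    target = 1.0 if dp > 0.0 else 0.0
    tau = tau_open if target > opening else tau_close
    opening = float(np.clip(opening + dt * (target - opening) / tau, 0.0, 1.0))
    flow = k_valve * opening * dp        # simple linear valve law scaled by opening
    return flow, opening

# Example: the valve opens progressively while atrial pressure exceeds ventricular pressure.
opening, dt = 0.0, 1e-3
for _ in range(100):
    q, opening = mitral_valve_step(p_la=12.0, p_lv=8.0, opening=opening, dt=dt)
```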

  17. Generalizing the correlated chromophore domain model of reversible photodegradation to include the effects of an applied electric field

    NASA Astrophysics Data System (ADS)

    Anderson, Benjamin; Kuzyk, Mark G.

    2014-03-01

    All observations of photodegradation and self-healing follow the predictions of the correlated chromophore domain model [Ramini et al., Polym. Chem. 4, 4948 (2013), 10.1039/c3py00263b]. In the present work, we generalize the domain model to describe the effects of an electric field by including induced dipole interactions between molecules in a domain by means of a self-consistent field approach. This electric field correction is added to the statistical mechanical model to calculate the distribution of domains that are central to healing. Also included in the model are the dynamics due to the formation of an irreversibly damaged species, which we propose involves damage to the polymer mediated through energy transfer from a dopant molecule after absorbing a photon. As in previous studies, the model with one-dimensional domains best explains all experimental data of the population as a function of time, temperature, intensity, concentration, and now applied electric field. Though the precise nature of a domain is yet to be determined, the fact that only one-dimensional domain models are consistent with observations suggests that they might be made of correlated dye molecules along polymer chains. Furthermore, the voltage-dependent measurements suggest that the largest polarizability axis of the molecules is oriented perpendicular to the chain.

  18. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines, and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
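
    A one-dimensional illustration (not the paper's three LS adjustments) of how measurements with multiplicative errors can be adjusted by iteratively reweighted least squares, with weights proportional to the inverse squared fitted values, and how a variance-of-unit-weight estimate can be formed from the weighted residuals; the data are simulated and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([10.0, 2.5])
sigma = 0.02                                   # relative (multiplicative) noise level
y = X @ beta_true * (1.0 + sigma * rng.standard_normal(n))

# Ordinary LS start, then iteratively reweighted LS with weights 1 / (X beta)^2,
# reflecting Var(y_i) = sigma^2 * (true value)^2 for multiplicative errors.
beta = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(5):
    w = 1.0 / (X @ beta) ** 2
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

v = y - X @ beta                                      # residuals
sigma2_hat = (w * v ** 2).sum() / (n - X.shape[1])    # variance-of-unit-weight estimate
print(beta, np.sqrt(sigma2_hat))                      # sqrt should recover ~sigma
```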

  19. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines, and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  20. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2013-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines, and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  1. Validation of the Greek maternal adjustment and maternal attitudes scale for assessing early postpartum adjustment.

    PubMed

    Vivilaki, Victoria G; Dafermos, Vassilis; Gevorgian, Liana; Dimopoulou, Athanasia; Patelarou, Evridiki; Bick, Debra; Tsopelas, Nicholas D; Lionis, Christos

    2012-01-01

    The Maternal Adjustment and Maternal Attitudes Scale is a self-administered scale, designed for use in primary care settings to identify postpartum maternal adjustment problems regarding body image, sex, somatic symptoms, and marital relationships. Women were recruited within four weeks of giving birth. Responses to the Maternal Adjustment and Maternal Attitudes Scale were compared for agreement with responses to the Edinburgh Postnatal Depression Scale as a gold standard. Psychometric measurements included reliability coefficients, exploratory factor analysis, and confirmatory analysis by linear structural relations. A receiver operating characteristic analysis was carried out to evaluate the global functioning of the scale. Of 300 mothers screened, 121 (40.7%) were experiencing difficulties in maternal adjustment and maternal attitudes. Scores on the Maternal Adjustment and Maternal Attitudes Scale correlated well with those on the Edinburgh Postnatal Depression Scale. The internal consistency of the Greek version of the Maternal Adjustment and Maternal Attitudes Scale, tested using Cronbach's alpha coefficient, was 0.859, and the Guttman split-half coefficient was 0.820. Findings confirmed the multidimensionality of the Maternal Adjustment and Maternal Attitudes Scale, demonstrating a six-factor structure. The area under the receiver operating characteristic curve was 0.610, and the logistic estimate for the threshold score of 57/58 gave a model sensitivity of 68% and a model specificity of 64.6%. The data confirmed that the Greek version of the Maternal Adjustment and Maternal Attitudes Scale is a reliable and valid screening tool for both clinical practice and research purposes to detect postpartum adjustment difficulties.
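
    For reference, the two psychometric summaries quoted above, Cronbach's alpha and the area under the ROC curve, can be computed as in the sketch below; the item matrix and screening scores are hypothetical inputs, and ties are not specially handled in the rank-based AUC.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

def roc_auc(scores, labels):
    """Rank-based AUC: probability that a random positive outranks a random
    negative (Mann-Whitney form; ties are not averaged here)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Hypothetical usage: 300 respondents, 20 items, binary case labels.
rng = np.random.default_rng(5)
items = rng.integers(0, 4, size=(300, 20))
labels = rng.integers(0, 2, size=300).astype(bool)
print(cronbach_alpha(items), roc_auc(items.sum(axis=1), labels))
```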

  2. A performance model of Thermal Imaging Systems (TISs) which includes the human observer's response to state-of-the-art displays

    NASA Astrophysics Data System (ADS)

    Blanchard, Denise M.

    1991-09-01

    This paper presents a model for predicting the performance of thermal imaging systems (TISs). This model combines conventional modeling relationships and recently reported characteristics of display monitors to determine the signal-to-noise ratio (SNR) out of the TIS. Also included are the results of psychophysical experiments which evaluated the capability of a human observer to detect the presence of an object displayed on the same monitor. The model is then used to determine the noise equivalent temperature difference (NEΔT) based on background photon noise limited (BLIP) operating conditions of the TIS. Finally, the minimum detectable temperature difference (MDT) in the scene is determined from the maximum signal-to-noise ratio of the monitor.

  3. A simple model of the right atrium of the human heart with the sinoatrial and atrioventricular nodes included.

    PubMed

    Podziemski, Piotr; Zebrowski, Jan J

    2013-08-01

    Existing atrial models with detailed anatomical structure and multi-variable cardiac transmembrane current models are too complex to allow an investigation of the long-time dynamical properties of the heart rhythm to be combined with the ability to effectively simulate cardiac electrical activity during arrhythmia. Other ways of modeling need to be investigated. Moreover, many state-of-the-art models of the right atrium do not include an atrioventricular node (AVN) and only rarely the sinoatrial node (SAN). A model of the heart tissue within the right atrium including the SAN and AVN nodes was developed. Looking for a minimal model, we are currently testing our approach on chosen well-known arrhythmias, which were until now obtained only using much more complicated models, or were only observed in a clinical setting. Ultimately, the goal is to obtain a model able to generate sequences of RR intervals specific for the arrhythmias involving the AV junction as well as for other phenomena occurring within the atrium. The model should be fast enough to allow the study of heart rate variability and arrhythmias at a time scale of thousands of heart beats in real time. In the model of the right atrium proposed here, different kinds of cardiac tissues are described by sets of different equations, with most of them belonging to the class of Liénard nonlinear dynamical systems. We have developed a series of models of the right atrium with differing anatomical simplifications, in the form of a 2D mapping of the atrium or of an idealized cylindrical geometry, including only those anatomical details required to reproduce a given physiological phenomenon. The simulations made it possible to reconstruct the phase relations between the sinus rhythm and the location and properties of a parasystolic source together with the effect of this source on the resultant heart rhythm. We model the action potential conduction time alternans through the atrioventricular AVN junction observed in cardiac tissue in
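
    The abstract does not give the model equations, so the sketch below only illustrates the Liénard class it refers to, using the van der Pol oscillator as a stand-in pacemaker (SAN-like) cell integrated with an explicit Euler step; the parameter values and time step are illustrative.

```python
import numpy as np

def van_der_pol_step(x, y, dt, mu=3.0):
    """One explicit Euler step of the van der Pol oscillator, a classic member
    of the Lienard class, used here as a stand-in pacemaker (SAN-like) cell."""
    dx = y
    dy = mu * (1.0 - x ** 2) * y - x
    return x + dt * dx, y + dt * dy

# Twenty time units of self-sustained oscillation (illustrative only).
x, y, dt = 0.1, 0.0, 1e-3
trace = []
for _ in range(int(20.0 / dt)):
    x, y = van_der_pol_step(x, y, dt)
    trace.append(x)
```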

  4. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LET_d distributions of all protons and deuterons and only primary protons. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.
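
    A small sketch of the quoted definition of L_sec as a ratio of dose-averaged LETs, assuming hypothetical per-step arrays (energy deposit, LET, primary-particle flag) scored by a Monte Carlo simulation; it is not the GATE/GEANT4 workflow itself.

```python
import numpy as np

def dose_averaged_let(edep, let):
    """Dose-averaged LET: sum(edep_i * LET_i) / sum(edep_i)."""
    edep, let = np.asarray(edep, float), np.asarray(let, float)
    return np.sum(edep * let) / np.sum(edep)

def l_sec(edep, let, is_primary):
    """Correction factor as defined above: LET_d of all protons and deuterons
    divided by the LET_d of primary protons only."""
    is_primary = np.asarray(is_primary, dtype=bool)
    edep, let = np.asarray(edep, float), np.asarray(let, float)
    return dose_averaged_let(edep, let) / dose_averaged_let(edep[is_primary], let[is_primary])

# Hypothetical scored steps: deposits in keV, LET in keV/um, primary flags.
edep = [1.2, 0.8, 2.0, 0.5, 1.1]
let = [0.5, 0.6, 0.4, 3.0, 2.5]
primary = [True, True, True, False, False]
print(l_sec(edep, let, primary))
```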

  5. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam.

    PubMed

    Marsolat, F; De Marzi, L; Pouzoulet, F; Mazal, A

    2016-01-21

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LET_d distributions of all protons and deuterons and only primary protons. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.

  6. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam.

    PubMed

    Marsolat, F; De Marzi, L; Pouzoulet, F; Mazal, A

    2016-01-21

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LET_d distributions of all protons and deuterons and only primary protons. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis. PMID:26732530

  7. Time domain contact model for tyre/road interaction including nonlinear contact stiffness due to small-scale roughness

    NASA Astrophysics Data System (ADS)

    Andersson, P. B. U.; Kropp, W.

    2008-11-01

    Rolling resistance, traction, wear, excitation of vibrations, and noise generation are all attributes to consider in optimisation of the interaction between automotive tyres and wearing courses of roads. The key to understand and describe the interaction is to include a wide range of length scales in the description of the contact geometry. This means including scales on the order of micrometres that have been neglected in previous tyre/road interaction models. A time domain contact model for the tyre/road interaction that includes interfacial details is presented. The contact geometry is discretised into multiple elements forming pairs of matching points. The dynamic response of the tyre is calculated by convolving the contact forces with pre-calculated Green's functions. The smaller-length scales are included by using constitutive interfacial relations, i.e. by using nonlinear contact springs, for each pair of contact elements. The method is presented for normal (out-of-plane) contact and a method for assessing the stiffness of the nonlinear springs based on detailed geometry and elastic data of the tread is suggested. The governing equations of the nonlinear contact problem are solved with the Newton-Raphson iterative scheme. Relations between force, indentation, and contact stiffness are calculated for a single tread block in contact with a road surface. The calculated results have the same character as results from measurements found in literature. Comparison to traditional contact formulations shows that the effect of the small-scale roughness is large; the contact stiffness is only up to half of the stiffness that would result if contact is made over the whole element directly to the bulk of the tread. It is concluded that the suggested contact formulation is a suitable model to include more details of the contact interface. Further, the presented result for the tread block in contact with the road is a suitable input for a global tyre/road interaction model
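
    As an illustration of the nonlinear-spring contact idea, the sketch below balances a prescribed normal force against power-law springs acting over a set of matching-point gaps and solves for the approach with Newton-Raphson; the spring law, its constants, and the roughness profile are placeholders, not the paper's constitutive relations.

```python
import numpy as np

def solve_contact(gaps, F_total, C=1.0e6, p=1.5, tol=1e-10, max_iter=50):
    """Find the approach u such that the sum of nonlinear spring forces
    C * max(u - gap, 0)**p over all element pairs balances F_total.
    Solved with Newton-Raphson; C, p and the gap profile are illustrative."""
    u = gaps.min() + 1e-6
    for _ in range(max_iter):
        delta = np.maximum(u - gaps, 0.0)
        f = C * np.sum(delta ** p) - F_total
        df = C * p * np.sum(delta ** (p - 1.0))
        if df == 0.0:          # no spring in contact yet: nudge forward
            u += 1e-6
            continue
        step = f / df
        u -= step
        if abs(step) < tol:
            break
    return u

# Hypothetical small-scale roughness heights (metres) and a 200 N element force.
gaps = np.abs(np.random.default_rng(2).normal(0.0, 1e-4, 100))
u = solve_contact(gaps, F_total=200.0)
```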

  8. Parental concern about vaccine safety in Canadian children partially immunized at age 2: a multivariable model including system level factors.

    PubMed

    MacDonald, Shannon E; Schopflocher, Donald P; Vaudry, Wendy

    2014-01-01

    Children who begin but do not fully complete the recommended series of childhood vaccines by 2 y of age are a much larger group than those who receive no vaccines. While parents who refuse all vaccines typically express concern about vaccine safety, it is critical to determine what influences parents of 'partially' immunized children. This case-control study examined whether parental concern about vaccine safety was responsible for partial immunization, and whether other personal or system-level factors played an important role. A random sample of parents of partially and completely immunized 2 y old children were selected from a Canadian regional immunization registry and completed a postal survey assessing various personal and system-level factors. Unadjusted odds ratios (OR) and adjusted ORs (aOR) were calculated with logistic regression. While vaccine safety concern was associated with partial immunization (OR 7.338, 95% CI 4.138-13.012), other variables were more strongly associated and reduced the strength of the relationship between concern and partial immunization in multivariable analysis (aOR 2.829, 95% CI 1.151-6.957). Other important factors included perceived disease susceptibility and severity (aOR 4.629, 95% CI 2.017-10.625), residential mobility (aOR 3.908, 95% CI 2.075-7.358), daycare use (aOR 0.310, 95% CI 0.144-0.671), number of needles administered at each visit (aOR 7.734, 95% CI 2.598-23.025) and access to a regular physician (aOR 0.219, 95% CI 0.057-0.846). While concern about vaccine safety may be addressed through educational strategies, this study suggests that additional program and policy-level strategies may positively impact immunization uptake.
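
    A sketch of how adjusted odds ratios of this kind are typically obtained from a logistic regression, here using statsmodels on a hypothetical data frame; the variable names and the simulated data are purely illustrative and not the study's dataset or its full covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis dataset: 1 = partially immunized (case), 0 = complete
# (control), with a vaccine-safety-concern indicator and illustrative covariates.
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "partial": rng.integers(0, 2, n),
    "safety_concern": rng.integers(0, 2, n),
    "perceived_susceptibility": rng.integers(0, 2, n),
    "residential_moves": rng.integers(0, 4, n),
    "daycare": rng.integers(0, 2, n),
})

X = sm.add_constant(df.drop(columns="partial"))
model = sm.Logit(df["partial"], X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals.
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([aor.rename("aOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```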

  9. The Impact of Children's Social Adjustment on Academic Outcomes.

    PubMed

    Derosier, Melissa E; Lloyd, Stacey W

    2011-01-01

    This study tested whether social adjustment added to the prediction of academic outcomes above and beyond prior academic functioning. School records and peer-, teacher-, and self-report measures were collected for 1,255 third grade children in the fall and spring of the school year. Social acceptance by and aggression with peers were included as measures of social adjustment. Academic outcomes included math and reading GPA, classroom behavior, academic self-esteem, and absenteeism. As expected, support for the causal model was found where both forms of social adjustment contributed independently to the prediction of each area of academic adjustment. Gender differences in the patterns of results were present, particularly for the impact of aggression on academic adjustment. Discussion focuses on the implications for social-emotional literacy programs to prevent negative academic outcomes.

  10. A simple, analytic model of polymer electrolyte membrane fuel cell anode recirculation at operating power including nitrogen crossover

    NASA Astrophysics Data System (ADS)

    Promislow, Keith; St-Pierre, Jean; Wetton, Brian

    A simple, analytic model is presented that describes the steady state profile of anode nitrogen concentration in a polymer electrolyte membrane fuel cell operated with anode recirculation. The model is appropriate for fuel cells with straight gas channels and includes the effect of nitrogen crossover from cathode to anode through the membrane. The key analytic simplification in the model is that this crossover rate, when scaled to the gas flows in the channels, is small. This is a good approximation when the device is used at operating power levels. The model shows that the characteristic times for the anode nitrogen profiles to reach steady state are of the order of minutes and that the dilution effect of anode nitrogen is severe for pure recirculation. The model shows additionally that a small anode outlet bleed can significantly reduce the nitrogen dilution effect. Within the framework of the model, the energy efficiency of pure recirculation can be compared to hydrogen venting or partial anode bleeding. An optimal bleed rate is identified. The model and optimization analysis can be adapted to other fuel cell designs and operating conditions. Along with operating conditions, only two key parameters are needed: a nitrogen crossover coefficient and the marginal efficiency loss to compressors for increased anode stoichiometric gas flow.
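
    A minimal sketch of the kind of mass balance the abstract describes: nitrogen enters the anode loop by membrane crossover and leaves through a small bleed, giving an exponential approach to a steady-state fraction. The symbols and every number below are placeholders, not the paper's parameters or notation.

```python
import numpy as np

def nitrogen_fraction(t, n_cross=1e-6, q_anode=1e-3, bleed_frac=0.02, n_loop=0.05):
    """Anode nitrogen mole fraction vs time for a well-mixed recirculation loop,
    from dx/dt = (crossover - bleed flow * x) / loop inventory.  Closed-form
    exponential approach to steady state; all values are illustrative only."""
    q_bleed = bleed_frac * q_anode            # mol/s removed through the anode bleed
    x_ss = n_cross / q_bleed                  # steady-state nitrogen fraction
    tau = n_loop / q_bleed                    # time constant, seconds
    return x_ss * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 600, 300)                  # ten minutes
x = nitrogen_fraction(t)                      # dilution builds up unless bled
```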

  11. Capitation pricing: adjusting for prior utilization and physician discretion.

    PubMed

    Anderson, G F; Cantor, J C; Steinberg, E P; Holloway, J

    1986-01-01

    As the number of Medicare beneficiaries receiving care under at-risk capitation arrangements increases, the method for setting payment rates will come under increasing scrutiny. A number of modifications to the current adjusted average per capita cost (AAPCC) methodology have been proposed, including an adjustment for prior utilization. In this article, we propose use of a utilization adjustment that includes only hospitalizations involving low or moderate physician discretion in the decision to hospitalize. This modification avoids discrimination against capitated systems that prevent certain discretionary admissions. The model also explains more of the variance in per capita expenditures than does the current AAPCC. PMID:10312010

  12. Adaptation of model proteins from cold to hot environments involves continuous and small adjustments of average parameters related to amino acid composition.

    PubMed

    De Vendittis, Emmanuele; Castellano, Immacolata; Cotugno, Roberta; Ruocco, Maria Rosaria; Raimo, Gennaro; Masullo, Mariorosario

    2008-01-01

    The growth temperature adaptation of six model proteins has been studied in 42 microorganisms belonging to the eubacterial and archaeal kingdoms, covering optimum growth temperatures from 7 to 103 degrees C. The selected proteins include three elongation factors involved in translation, the enzymes glyceraldehyde-3-phosphate dehydrogenase and superoxide dismutase, and the cell division protein FtsZ. The common strategy of protein adaptation from cold to hot environments implies the occurrence of small changes in the amino acid composition, without altering the overall structure of the macromolecule. These continuous adjustments were investigated through parameters related to the amino acid composition of each protein. The average value per residue of mass, volume and accessible surface area allowed an evaluation of the usage of bulky residues, whereas the average hydrophobicity reflected that of hydrophobic residues. The specific proportion of bulky and hydrophobic residues in each protein increased almost linearly with the temperature of the host microorganism. This finding agrees with the structural and functional properties exhibited by proteins from differently adapted sources, thus explaining the great compactness or the high flexibility exhibited by (hyper)thermophilic or psychrophilic proteins, respectively. Indeed, heat-adapted proteins incline toward the usage of heavier and more hydrophobic residues with respect to mesophiles, whereas the cold-adapted macromolecules show the opposite behavior, with a certain preference for smaller and less hydrophobic residues. An investigation of the different increases of bulky residues with growth temperature observed in the six model proteins suggests the relevance of the possible different role and/or structure organization played by protein domains. The significance of the linear correlations between growth temperature and parameters related to the amino acid composition improved when the analysis was
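
    A sketch of one of the composition-derived parameters discussed above, average hydropathy per residue, regressed against optimum growth temperature; the Kyte-Doolittle scale is an assumed choice (the paper's exact scales are not given in the abstract) and the orthologue sequences are placeholders.

```python
import numpy as np

# Kyte-Doolittle hydropathy scale (one possible choice of scale).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def mean_hydropathy(sequence):
    """Average hydropathy per residue of a protein sequence (unknown symbols skipped)."""
    vals = [KD[a] for a in sequence.upper() if a in KD]
    return sum(vals) / len(vals)

# Hypothetical orthologue set: (optimum growth temperature in C, sequence placeholder).
orthologues = [(7, "MKTAYIAK"), (37, "MKVAYLIK"), (85, "MIVAYLVK")]
tgrow = np.array([t for t, _ in orthologues], dtype=float)
h = np.array([mean_hydropathy(s) for _, s in orthologues])
slope, intercept = np.polyfit(tgrow, h, 1)   # near-linear trend reported in the study
```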

  13. Data for and adjusted regional regression models of volume and quality of urban storm-water runoff in Boise and Garden City, Idaho, 1993-94

    USGS Publications Warehouse

    Kjelstrom, L.C.

    1995-01-01

    Previously developed U.S. Geological Survey regional regression models of runoff and 11 chemical constituents were evaluated to assess their suitability for use in urban areas in Boise and Garden City. Data collected in the study area were used to develop adjusted regional models of storm-runoff volumes and mean concentrations and loads of chemical oxygen demand, dissolved and suspended solids, total nitrogen and total ammonia plus organic nitrogen as nitrogen, total and dissolved phosphorus, and total recoverable cadmium, copper, lead, and zinc. Explanatory variables used in these models were drainage area, impervious area, land-use information, and precipitation data. Mean annual runoff volume and loads at the five outfalls were estimated from 904 individual storms during 1976 through 1993. Two methods were used to compute individual storm loads. The first method used adjusted regional models of storm loads and the second used adjusted regional models for mean concentration and runoff volume. For large storms, the first method seemed to produce excessively high loads for some constituents and the second method provided more reliable results for all constituents except suspended solids. The first method provided more reliable results for large storms for suspended solids.

  14. Development and Validation of a Brief Version of the Dyadic Adjustment Scale With a Nonparametric Item Analysis Model

    ERIC Educational Resources Information Center

    Sabourin, Stephane; Valois, Pierre; Lussier, Yvan

    2005-01-01

    The main purpose of the current research was to develop an abbreviated form of the Dyadic Adjustment Scale (DAS) with nonparametric item response theory. The authors conducted 5 studies, with a total participation of 8,256 married or cohabiting individuals. Results showed that the item characteristic curves behaved in a monotonically increasing…

  15. The mathematical models of electromagnetic field dynamics and heat transfer in closed electrical contacts including Thomson effect

    NASA Astrophysics Data System (ADS)

    Kharin, Stanislav; Sarsengeldin, Merey; Kassabek, Samat

    2016-08-01

    We present mathematical models of electromagnetic field dynamics and heat transfer in closed symmetric and asymmetric electrical contacts including the Thomson effect, which are essentially nonlinear due to the dependence of the thermal and electrical conductivities on temperature. The suggested solutions are based on the assumption of the identity of equipotential and isothermal surfaces, which agrees with experimental data and is valid for both the linear and nonlinear cases. The well-known Kohlrausch temperature-potential relation is analytically justified.
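
    For reference, the classical statement of the Kohlrausch relation for a symmetric contact, and its familiar specialisation under the Wiedemann-Franz law, is given below as a standard textbook form; it is not quoted from the paper and the notation is ours.

```latex
% Kohlrausch (phi-theta) relation and its Wiedemann-Franz specialisation
\varphi^{2} \;=\; 2\int_{T}^{T_{m}} \lambda(\theta)\,\rho(\theta)\,\mathrm{d}\theta ,
\qquad
\lambda\rho = L\theta \;\Longrightarrow\; T_{m}^{2} - T_{0}^{2} = \frac{U_{c}^{2}}{4L}
```

    Here φ is the potential of an equipotential surface measured from the contact interface, T the temperature on that surface, T_m the maximum contact-spot temperature, T_0 the bulk temperature, U_c the total voltage drop across the contact, λ the thermal conductivity, ρ the electrical resistivity, and L the Lorenz number.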

  16. The use of food consumption data in assessments of exposure to food chemicals including the application of probabilistic modelling.

    PubMed

    Lambe, Joyce

    2002-02-01

    Emphasis on public health and consumer protection, in combination with globalisation of the food market, has created a strong demand for exposure assessments of food chemicals. The food chemicals for which exposure assessments are required include food additives, pesticide residues, environmental contaminants, mycotoxins, novel food ingredients, packaging-material migrants, flavouring substances and nutrients. A wide range of methodologies exists for estimating exposure to food chemicals, and the method chosen for a particular exposure assessment is influenced by the nature of the chemical, the purpose of the assessment and the resources available. Sources of food consumption data currently used in exposure assessments range from food balance sheets to detailed food consumption surveys of individuals and duplicate-diet studies. The fitness-for-purpose of the data must be evaluated in the context of data quality and relevance to the assessment objective. Methods to combine the food consumption data with chemical concentration data may be deterministic or probabilistic. Deterministic methods estimate intakes of food chemicals that may occur in a population, but probabilistic methods provide the advantage of estimating the probability with which different levels of intake will occur. Probabilistic analysis permits the exposure assessor to model the variability (true heterogeneity) and uncertainty (lack of knowledge) that may exist in the exposure variables, including food consumption data, and thus to examine the full distribution of possible resulting exposures. Challenges for probabilistic modelling include the selection of appropriate modes of inputting food consumption data into the models. PMID:12002785
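
    A compact sketch of the probabilistic (Monte Carlo) approach described above: consumption, concentration and body weight are drawn from assumed distributions and combined into an exposure distribution whose upper percentiles can then be inspected; every distribution and number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sim = 100_000

# Hypothetical inputs: daily consumption of one food (g/day) from a survey-like
# lognormal fit, chemical concentration (mg/kg) resampled from measured values,
# and body weight (kg).  None of these numbers come from the paper.
consumption = rng.lognormal(mean=np.log(50), sigma=0.8, size=n_sim)
measured_conc = np.array([0.02, 0.05, 0.03, 0.10, 0.01, 0.07])
concentration = rng.choice(measured_conc, size=n_sim)
body_weight = rng.normal(70, 12, size=n_sim).clip(min=40)

# Exposure in mg per kg body weight per day; consumption is in g, so divide by 1000.
exposure = consumption / 1000.0 * concentration / body_weight

p50, p95, p999 = np.percentile(exposure, [50, 95, 99.9])
print(p50, p95, p999)
```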

  17. Time-domain simulation of constitutive relations for nonlinear acoustics including relaxation for frequency power law attenuation media modeling

    NASA Astrophysics Data System (ADS)

    Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.

    2015-10-01

    We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows the time-domain numerical solution by an explicit finite-difference scheme. Thus, the proposed physical model overcomes the limitations of the one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, the proposed method also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to exact frequency power law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
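
    As a side illustration of fitting relaxation processes to a frequency power law, the sketch below chooses fixed relaxation times spanning the band and solves a non-negative least-squares problem for their weights; the target attenuation, the relaxation times and the single-process attenuation shape are illustrative assumptions, and this is not the paper's time-domain scheme.

```python
import numpy as np
from scipy.optimize import nnls

# Target: frequency power-law attenuation alpha(f) = a0 * (f/1 MHz)**gamma over
# the band of interest (values illustrative, soft-tissue-like exponent).
f = np.linspace(0.5e6, 10e6, 200)
w = 2 * np.pi * f
a0, gamma = 0.05, 1.1
alpha_target = a0 * (f / 1e6) ** gamma

# Basis: attenuation contributed by single relaxation processes with fixed
# characteristic times spread over the band; their weights are fitted, non-negative.
taus = 1.0 / (2 * np.pi * np.array([0.3e6, 1e6, 3e6, 10e6, 30e6]))
basis = np.column_stack([(w ** 2 * t) / (1.0 + (w * t) ** 2) for t in taus])

weights, residual = nnls(basis, alpha_target)
alpha_fit = basis @ weights
```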

  18. The use of food consumption data in assessments of exposure to food chemicals including the application of probabilistic modelling.

    PubMed

    Lambe, Joyce

    2002-02-01

    Emphasis on public health and consumer protection, in combination with globalisation of the food market, has created a strong demand for exposure assessments of food chemicals. The food chemicals for which exposure assessments are required include food additives, pesticide residues, environmental contaminants, mycotoxins, novel food ingredients, packaging-material migrants, flavouring substances and nutrients. A wide range of methodologies exists for estimating exposure to food chemicals, and the method chosen for a particular exposure assessment is influenced by the nature of the chemical, the purpose of the assessment and the resources available. Sources of food consumption data currently used in exposure assessments range from food balance sheets to detailed food consumption surveys of individuals and duplicate-diet studies. The fitness-for-purpose of the data must be evaluated in the context of data quality and relevance to the assessment objective. Methods to combine the food consumption data with chemical concentration data may be deterministic or probabilistic. Deterministic methods estimate intakes of food chemicals that may occur in a population, but probabilistic methods provide the advantage of estimating the probability with which different levels of intake will occur. Probabilistic analysis permits the exposure assessor to model the variability (true heterogeneity) and uncertainty (lack of knowledge) that may exist in the exposure variables, including food consumption data, and thus to examine the full distribution of possible resulting exposures. Challenges for probabilistic modelling include the selection of appropriate modes of inputting food consumption data into the models.

  19. A model of oxygen uptake kinetics in response to exercise: including a means of calculating oxygen demand/deficit/debt.

    PubMed

    Stirling, J R; Zakynthinaki, M S; Saltin, B

    2005-09-01

    We present a new model of the underlying dynamics of the oxygen uptake VO2(v,t) kinetics for various exercise intensities. This model is in the form of a set of nonlinear coupled vector fields for the VO2(v,t) and v, the derivative of the exercise intensity with respect to time. We also present a novel means for calculating the oxygen demand, D(v,t), and hence also the oxygen deficit and debt, given the time series of the VO2(v,t). This enables us to give better predictions for these values, especially when exercising at or close to maximal exercise intensities. Our model also allows us to predict the oxygen uptake time series given the time series for the exercise intensity, as well as to investigate the oxygen uptake response to nonlinear exercise intensities. Neither of these features is possible using the currently used three-phase model. We also present a review of both the underlying physiology and the three-phase model. This includes, for the first time, a complete set of the analytical solutions of the three-phase model for the oxygen deficit and debt. PMID:15998492
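
    A small sketch of the bookkeeping implied above, computing oxygen deficit and debt by numerical integration of sampled demand and uptake series; the demand series is treated here as a given input rather than produced by the paper's model, and the split at the end of exercise is a simplifying assumption.

```python
import numpy as np

def deficit_and_debt(t, vo2, demand, t_end_exercise):
    """Integrate oxygen deficit (demand above uptake during exercise) and oxygen
    debt (uptake above demand during recovery) from sampled time series."""
    t, vo2, demand = (np.asarray(a, dtype=float) for a in (t, vo2, demand))
    during = t <= t_end_exercise
    after = t >= t_end_exercise
    deficit = np.trapz(np.clip(demand[during] - vo2[during], 0, None), t[during])
    debt = np.trapz(np.clip(vo2[after] - demand[after], 0, None), t[after])
    return deficit, debt

# Hypothetical sampled series: 10 min of exercise followed by 10 min of recovery.
t = np.linspace(0, 1200, 1201)
demand = np.where(t <= 600, 3.0, 0.5)                       # L/min, illustrative
vo2 = np.where(t <= 600, 3.0 * (1 - np.exp(-t / 60)), 0.5 + 2.0 * np.exp(-(t - 600) / 90))
print(deficit_and_debt(t, vo2, demand, t_end_exercise=600))
```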

  20. Constitutive models for granular materials including quasi-static frictional behaviour: Toward a thermodynamic theory of plasticity

    NASA Astrophysics Data System (ADS)

    Svendsen, B.; Hutter, K.; Laloui, L.

    This work deals with the thermodynamic formulation of constitutive models for materials whose quasi-static behaviour is governed by internal friction, e.g., dry granular materials. The process of internal friction is represented here phenomenologically with the help of a second-order, symmetric-tensor-valued internal variable. A general class of models for the evolution of this variable is considered, including as special cases a hypoelastic-like form for this relation as well as the hypoplastic form of Kolymbas (1991). The thermodynamic formulation is carried out in the context of the Müller-Liu entropy principle. Among other things, it is shown that for the hypoelastic-type models, a true equilibrium inelastic Cauchy stress exists. On the other hand, such a stress does not exist for the hypoplastic model due to its rate-independence and incremental non-linearity. With the help of a slight generalization of the notion of thermodynamic equilibrium, i.e., to thermodynamic "quasi-equilibrium," however, such a Cauchy stress can be formulated for the hypoplastic model. As it turns out, this quasi-equilibrium Cauchy stress represents a thermodynamic generalization of the so-called quasi-static stress postulated, for example, by Goddard (1986) in the context of his viscoplastic model for frictional-dissipative, and in particular granular, materials.

  1. Identification of microstructural characteristics in lightweight aggregate concretes by micromechanical modelling including the interfacial transition zone (ITZ)

    SciTech Connect

    Ke, Y.; Ortola, S.; Beaucour, A.L.; Dumontet, H.

    2010-11-15

    An approach which combines both experimental techniques and micromechanical modelling is developed in order to characterise the elastic behaviour of lightweight aggregate concretes (LWAC). More than three hundred LWAC specimens, with five lightweight aggregate types at several volume ratios and three different mortar matrices (normal, HP, VHP), are tested. The modelling is based on an iterative homogenisation process and includes the ITZ specificities observed experimentally with scanning electron microscopy (SEM). In agreement with experimental measurements, the effects of the mix design parameters as well as of the interfacial transition zone (ITZ) on concrete mechanical performance are quantitatively analysed. Comparisons with experimental results allow the elastic moduli of the LWA, which are difficult to determine experimentally, to be identified. Whereas the traditional empirical formulas are not sufficiently precise, predictions of LWAC elastic behaviour computed with the micromechanical models are in good agreement with experimental measurements.

  2. Simple analytical embedded-atom-potential model including a long-range force for fcc metals and their alloys

    NASA Astrophysics Data System (ADS)

    Cai, J.; Ye, Y. Y.

    1996-09-01

    A simple analytical embedded-atom method (EAM) model is developed. The model includes a long-range force. In this model, the electron-density function is taken as a decreasing exponential function, the two-body potential is defined as a function like a form given by Rose et al. [Phys. Rev. B 33, 7983 (1986)], and the embedding energy is assumed to be a universal form recently suggested by Banerjea and Smith. The embedding energy has a positive curvature. The model is applied to seven fcc metals (Al, Ag, Au, Cu, Ni, Pd, and Pt) and their binary alloys. All the considered properties, whether for pure metal systems or for alloy systems, are predicted to be satisfactory at least qualitatively. The model resolves the problems of Johnson's model for predicting the properties of the alloys involving metal Pd. However, more importantly, (i) by investigating the structure stability of seven fcc metals using the present model, we found that the stability energy is dominated by both the embedding energy and the pair potential for fcc-bcc stability while the pair potential dominates and is underestimated for fcc-hcp stability; and (ii) we find that the predicted total energy as a function of lattice parameter is in good agreement with the equation of state of Rose et al. for all seven fcc metals, and that this agreement is closely related to the electron density, i.e., the lower the contribution from atoms of the second-nearest neighbor to host density, the better the agreement becomes. We conclude the following: (i) for an EAM, where angle force is not considered, the long-range force is necessary for a prediction of the structure stability; or (ii) the dependence of the electron density on angle should be considered so as to improve the structure-stability energy. The conclusions are valid for all EAM models where an angle force is not considered.
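
    A schematic sketch of the EAM energy structure the abstract describes: an embedding term evaluated on an exponentially decaying host density plus a pairwise term. The functional forms and every parameter value below are placeholders chosen for illustration, not the published analytical fit, and the example assumes every atom has neighbours within the cutoff.

```python
import numpy as np

def eam_energy(positions, r_e=2.86, f_e=1.0, beta=5.0, phi_0=0.5, alpha=4.0,
               F0=2.0, n=0.5, rho_e=12.0, r_cut=8.0):
    """Schematic EAM total energy: E = sum_i F(rho_i) + 1/2 sum_{i!=j} phi(r_ij),
    with an exponentially decaying density contribution f(r), a Rose-like pair
    term, and a logarithmic embedding function.  Placeholder parameters only."""
    positions = np.asarray(positions, dtype=float)
    N = len(positions)
    E = 0.0
    for i in range(N):
        rho_i = 0.0
        for j in range(N):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r > r_cut:
                continue
            rho_i += f_e * np.exp(-beta * (r / r_e - 1.0))       # density contribution
            a = alpha * (r / r_e - 1.0)
            E += 0.5 * (-phi_0 * (1.0 + a) * np.exp(-a))         # Rose-like pair term
        x = rho_i / rho_e
        E += -F0 * x ** n * (1.0 - n * np.log(x))                # embedding term
    return E

# Example: a tiny fcc-like cluster (positions in angstroms, illustrative only).
pos = [[0, 0, 0], [0, 2.02, 2.02], [2.02, 0, 2.02], [2.02, 2.02, 0]]
print(eam_energy(pos))
```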

  3. One Dimensional Analysis Model of a Condensing Spray Chamber Including Rocket Exhaust Using SINDA/FLUINT and CEA

    NASA Technical Reports Server (NTRS)

    Sakowski, Barbara; Edwards, Daryl; Dickens, Kevin

    2014-01-01

    Modeling droplet condensation via CFD codes can be very tedious, time consuming, and inaccurate. CFD codes may be tedious and time consuming in terms of using Lagrangian particle tracking approaches or particle sizing bins. Also, since many codes ignore conduction through the droplet and/or the degrading effect of heat and mass transfer if noncondensible species are present, the solutions may be inaccurate. In the modeling of a condensing spray chamber, the significant size of the water droplets and the time and distance these droplets take to fall can make the effect of droplet conduction a physical factor that needs to be considered in the model. Furthermore, the presence of even a relatively small amount of noncondensibles has been shown to reduce the amount of condensation [Ref 1]. It is desirable then to create a modeling tool that addresses these issues. The path taken to create such a tool is illustrated. The application of this tool and subsequent results are based on the spray chamber in the Spacecraft Propulsion Research Facility (B2) located at NASA's Plum Brook Station that tested an RL-10 engine. The platform upon which the condensation physics is modeled is SINDA/FLUINT. The use of SINDA/FLUINT enables the ability to model various aspects of the entire testing facility, including the rocket exhaust duct flow and heat transfer to the exhaust duct wall. The ejector pumping system of the spray chamber is also easily implemented via SINDA/FLUINT. The goal is to create a transient one-dimensional flow and heat transfer model beginning at the rocket, continuing through the condensing spray chamber, and finally ending with the ejector pumping system. However, the model of the condensing spray chamber may be run independently of the rocket and ejector system details, with only appropriate mass flow boundary conditions placed at the entrance and exit of the condensing spray chamber model. The model of the condensing spray chamber takes into account droplet

  4. One Dimensional Analysis Model of a Condensing Spray Chamber Including Rocket Exhaust Using SINDA/FLUINT and CEA

    NASA Technical Reports Server (NTRS)

    Sakowski, Barbara A.; Edwards, Daryl; Dickens, Kevin

    2014-01-01

    Modeling droplet condensation via CFD codes can be very tedious, time consuming, and inaccurate. CFD codes may be tedious and time consuming in terms of using Lagrangian particle tracking approaches or particle sizing bins. Also, since many codes ignore conduction through the droplet and/or the degrading effect of heat and mass transfer if noncondensible species are present, the solutions may be inaccurate. In the modeling of a condensing spray chamber, the significant size of the water droplets and the time and distance these droplets take to fall can make the effect of droplet conduction a physical factor that needs to be considered in the model. Furthermore, the presence of even a relatively small amount of noncondensibles has been shown to reduce the amount of condensation. It is desirable then to create a modeling tool that addresses these issues. The path taken to create such a tool is illustrated. The application of this tool and subsequent results are based on the spray chamber in the Spacecraft Propulsion Research Facility (B2) located at NASA's Plum Brook Station that tested an RL-10 engine. The platform upon which the condensation physics is modeled is SINDA/FLUINT. The use of SINDA/FLUINT enables the ability to model various aspects of the entire testing facility, including the rocket exhaust duct flow and heat transfer to the exhaust duct wall. The ejector pumping system of the spray chamber is also easily implemented via SINDA/FLUINT. The goal is to create a transient one-dimensional flow and heat transfer model beginning at the rocket, continuing through the condensing spray chamber, and finally ending with the ejector pumping system. However, the model of the condensing spray chamber may be run independently of the rocket and ejector system details, with only appropriate mass flow boundary conditions placed at the entrance and exit of the condensing spray chamber model. The model of the condensing spray chamber takes into account droplet conduction as

  5. Slope Estimation for Bivariate Longitudinal Outcomes Adjusting for Informative Right Censoring Using Discrete Survival Model: Application to the Renal Transplant Cohort.

    PubMed

    Jaffa, Miran A; Woolson, Robert F; Lipsitz, Stuart R

    2011-04-01

    Patients undergoing renal transplantation are prone to graft failure which causes lost of follow-up measures on their blood urea nitrogen and serum creatinine levels. These two outcomes are measured repeatedly over time to assess renal function following transplantation. Loss of follow-up on these bivariate measures results in informative right censoring, a common problem in longitudinal data that should be adjusted for so that valid estimates are obtained. In this study, we propose a bivariate model that jointly models these two longitudinal correlated outcomes and generates population and individual slopes adjusting for informative right censoring using a discrete survival approach. The proposed approach is applied to the clinical dataset of patients who had undergone renal transplantation. A simulation study validates the effectiveness of the approach.

  6. A three-dimensional model of the human masticatory system, including the mandible, the dentition and the temporomandibular joints.

    PubMed

    Pileicikiene, Gaivile; Varpiotas, Edvinas; Surna, Rimas; Surna, Algimantas

    2007-01-01

    The objective of this study was to create a three-dimensional mathematical model of the human masticatory system, including the mandible, the dentition and the temporomandibular joints. The subject of the study was the cadaver of a 20-year-old man. The research was approved by the Committee of Bioethics (Kaunas University of Medicine). The required extent of computed tomography scanning, with its large number of high-resolution images, entailed an X-ray dose that made this research impossible to perform on a living human. Spiral computed tomography scanning was performed to obtain the two-dimensional images necessary for creating the three-dimensional model. The 3D modeling was done using the "Image Pro Plus" and "Imageware" software. A three-dimensional physiological (normal) model of the human masticatory system, simulating the mandible, the dentition and the temporomandibular joints, was generated. This model system will subsequently be used in stress analysis comparisons of the physiological and pathological systems, after improvement of its physical properties. We suggest that computer simulation is a promising way to study the musculoskeletal biomechanics of the masticatory system.

  7. Three-Dimensional Computer Model of the Right Atrium Including the Sinoatrial and Atrioventricular Nodes Predicts Classical Nodal Behaviours

    PubMed Central

    Li, Jue; Inada, Shin; Schneider, Jurgen E.; Zhang, Henggui; Dobrzynski, Halina; Boyett, Mark R.

    2014-01-01

    The aim of the study was to develop a three-dimensional (3D) anatomically-detailed model of the rabbit right atrium containing the sinoatrial and atrioventricular nodes to study the electrophysiology of the nodes. A model was generated based on 3D images of a rabbit heart (atria and part of ventricles), obtained using high-resolution magnetic resonance imaging. Segmentation was carried out semi-manually. A 3D right atrium array model (∼3.16 million elements), including eighteen objects, was constructed. For description of cellular electrophysiology, the Rogers-modified FitzHugh-Nagumo model was further modified to allow control of the major characteristics of the action potential with relatively low computational resource requirements. Model parameters were chosen to simulate the action potentials in the sinoatrial node, atrial muscle, inferior nodal extension and penetrating bundle. The block zone was simulated as passive tissue. The sinoatrial node, crista terminalis, main branch and roof bundle were considered as anisotropic. We have simulated normal and abnormal electrophysiology of the two nodes. In accordance with experimental findings: (i) during sinus rhythm, conduction occurs down the interatrial septum and into the atrioventricular node via the fast pathway (conduction down the crista terminalis and into the atrioventricular node via the slow pathway is slower); (ii) during atrial fibrillation, the sinoatrial node is protected from overdrive by its long refractory period; and (iii) during atrial fibrillation, the atrioventricular node reduces the frequency of action potentials reaching the ventricles. The model is able to simulate ventricular echo beats. In summary, a 3D anatomical model of the right atrium containing the cardiac conduction system is able to simulate a wide range of classical nodal behaviours. PMID:25380074
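
    For orientation, the sketch below integrates a single cell of the Rogers-McCulloch-modified FitzHugh-Nagumo model with an explicit Euler step; the paper's further modifications and its 3D tissue coupling are not reproduced, and the parameter values, time step and stimulus are typical illustrative choices rather than the study's settings.

```python
import numpy as np

def rogers_fhn_step(v, w, dt, I_stim=0.0, a=0.13, b=0.013, c1=0.26, c2=0.1, d=1.0):
    """One explicit Euler step of the Rogers-McCulloch-modified FitzHugh-Nagumo
    model: dv/dt = c1*v*(v-a)*(1-v) - c2*v*w + I_stim, dw/dt = b*(v - d*w)."""
    dv = c1 * v * (v - a) * (1.0 - v) - c2 * v * w + I_stim
    dw = b * (v - d * w)
    return v + dt * dv, w + dt * dw

# Single-cell action potential triggered by a brief stimulus (illustrative).
v, w, dt = 0.0, 0.0, 0.05
trace = []
for step in range(20000):
    stim = 0.2 if step < 50 else 0.0
    v, w = rogers_fhn_step(v, w, dt, I_stim=stim)
    trace.append(v)
```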

  8. A new scoring system for the chances of identifying a BRCA1/2 mutation outperforms existing models including BRCAPRO

    PubMed Central

    Evans, D; Eccles, D; Rahman, N; Young, K; Bulman, M; Amir, E; Shenton, A; Howell, A; Lalloo, F

    2004-01-01

    Methods: DNA samples from affected subjects from 422 non-Jewish families with a history of breast and/or ovarian cancer were screened for BRCA1 mutations and a subset of 318 was screened for BRCA2 by whole gene screening techniques. Using a combination of results from screening and the family history of mutation negative and positive kindreds, a simple scoring system (Manchester scoring system) was devised to predict pathogenic mutations and particularly to discriminate at the 10% likelihood level. A second separate dataset of 192 samples was subsequently used to test the model's predictive value. This was further validated on a third set of 258 samples and compared against existing models. Results: The scoring system includes a cut-off at 10 points for each gene. This equates to >10% probability of a pathogenic mutation in BRCA1 and BRCA2 individually. The Manchester scoring system had the best trade-off between sensitivity and specificity at 10% prediction for the presence of mutations as shown by its highest C-statistic and was far superior to BRCAPRO. Conclusion: The scoring system is useful in identifying mutations particularly in BRCA2. The algorithm may need modifying to include pathological data when calculating whether to screen for BRCA1 mutations. It is considerably less time-consuming for clinicians than using computer models and if implemented routinely in clinical practice will aid in selecting families most suitable for DNA sampling for diagnostic testing. PMID:15173236

  9. Initial Adjustment of Taiwanese Students to the United States: The Impact of Postarrival Variables.

    ERIC Educational Resources Information Center

    Ying, Yu-Wen; Liese, Lawrence H.

    1994-01-01

    Examines the adjustment of 172 Taiwanese students during their first months in the United States. A multidimensional model is used that accounts for 39% of the variance of adjustment. Mediating factors of the model include demographics, personality, number and severity of problems experienced, prearrival preparation, social support, language…

  10. Monte Carlo simulation of peak-acceleration attenuation using a finite-fault uniform-patch model including isochrone and extremal characteristics

    USGS Publications Warehouse

    Rogers, A.M.; Perkins, D.M.

    1996-01-01

    underlying mechanisms are completely different. Because this model approximates data characteristics we have observed in an earlier study, we adjusted the parameters of the model to fit a set of smoothed peak accelerations from earthquakes worldwide. These data have not been preselected for particular magnitude or distance ranges and contain earthquake records for magnitudes ranging from about M 3 to M 8 and distances ranging from a few kilometers to about 400 km. In fitting the data, we use a trial-and-error procedure, varying the mean and standard deviation of the patch peak-acceleration distribution, the patch size, and the pulse duration. The model explicitly includes triggering bias, and the triggering threshold is also a model parameter. The data can be approximated equally well by a model that includes the isochrone effect alone, the extremal effect alone, or both effects. Inclusion of both effects is likely to be closest to reality, but because both effects produce similar results, it is not possible to determine the relative contribution of each one. In any case, the model approximates the complex features of the observed data, including a decrease in magnitude scaling with increasing magnitude at short distances and an increase in magnitude scaling with magnitude at large distances.
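
    A toy Monte Carlo in the spirit of the uniform-patch model is sketched below: each trial draws patch peak accelerations from a lognormal distribution, keeps the extremal value at the site, and discards trials below a triggering threshold. The distribution choice and parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_peak_acceleration(n_patches, mean_log_pa=-1.0, sigma_log_pa=0.6,
                               trigger_threshold=0.01, n_trials=10000):
    """Toy Monte Carlo in the spirit of a finite-fault uniform-patch model.

    Each trial draws lognormal patch peak accelerations (in g) and records
    the extremal (largest) value at the site; trials whose maximum falls
    below the triggering threshold are discarded, mimicking triggering bias.
    All distributions and values are illustrative assumptions.
    """
    peaks = []
    for _ in range(n_trials):
        patch_pa = rng.lognormal(mean=mean_log_pa, sigma=sigma_log_pa, size=n_patches)
        site_max = patch_pa.max()          # extremal characteristic
        if site_max >= trigger_threshold:  # triggering bias
            peaks.append(site_max)
    return np.array(peaks)

print(simulate_peak_acceleration(n_patches=20).mean())
```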

  11. Relativistic ab initio model potential calculations including spin-orbit effects through the Wood-Boring Hamiltonian

    NASA Astrophysics Data System (ADS)

    Seijo, Luis

    1995-05-01

    Presented in this paper is a practical implementation of the use of the Wood-Boring Hamiltonian [Phys. Rev. B 18, 2701 (1978)] in atomic and molecular ab initio core model potential calculations (AIMP), as a means to include spin-orbit relativistic effects, in addition to the mass-velocity and Darwin operators, which were already included in the spin-free version of the relativistic AIMP method. Calculations on the neutral and singly ionized atoms of the halogen elements and sixth-row p-elements Tl-Rn are presented, as well as on the one or two lowest-lying states of the diatomic molecules HX, HX+ (X=F, Cl, Br, I, At), TlH, PbH, BiH, and PoH. The calculated spin-orbit splittings and bonding properties are stable and of good quality, of the size expected from an effective potential method.

  12. User's Manual for HPTAM: a Two-Dimensional Heat Pipe Transient Analysis Model, Including the Startup from a Frozen State

    NASA Technical Reports Server (NTRS)

    Tournier, Jean-Michel; El-Genk, Mohamed S.

    1995-01-01

    This report describes the user's manual for 'HPTAM,' a two-dimensional Heat Pipe Transient Analysis Model. HPTAM is described in detail in the UNM-ISNPS-3-1995 report which accompanies the present manual. The model offers a menu that lists a number of working fluids and wall and wick materials from which the user can choose. HPTAM is capable of simulating the startup of heat pipes from either a fully-thawed or frozen condition of the working fluid in the wick structure. The manual includes instructions for installing and running HPTAM on either a UNIX, MS-DOS or VMS operating system. Samples for input and output files are also provided to help the user with the code.

  13. User's manual for HPTAM: A two-dimensional Heat Pipe Transient Analysis Model, including the startup from a frozen state

    NASA Astrophysics Data System (ADS)

    Tournier, Jean-Michel; El-Genk, Mohamed S.

    1995-09-01

    This report describes the user's manual for 'HPTAM,' a two-dimensional Heat Pipe Transient Analysis Model. HPTAM is described in detail in the UNM-ISNPS-3-1995 report which accompanies the present manual. The model offers a menu that lists a number of working fluids and wall and wick materials from which the user can choose. HPTAM is capable of simulating the startup of heat pipes from either a fully-thawed or frozen condition of the working fluid in the wick structure. The manual includes instructions for installing and running HPTAM on either a UNIX, MS-DOS or VMS operating system. Samples for input and output files are also provided to help the user with the code.

  14. [Interpersonal motivation in a First Year Experience class influences freshmen's university adjustment].

    PubMed

    Nakayama, Rumiko; Nakanishi, Yoshifumi; Nagahama, Fumiyo; Nakajima, Makoto

    2015-06-01

    The present study examined the influence of interpersonal motivation on university adjustment in freshman students enrolled in a First Year Experience (FYE) class. An interpersonal motivation scale and a university adjustment (interpersonal adjustment and academic adjustment) scale were administered twice to 116 FYE students; data from the 88 students who completed both surveys were analyzed. Results from structural equation modeling indicated a causal relationship between interpersonal motivation and university adjustment: interpersonal adjustment served as a mediator between academic adjustment and interpersonal motivation, the latter of which was assessed using the internalized motivation subscale of the Interpersonal Motivation Scale as well as the Relative Autonomy Index, which measures the autonomy in students' interpersonal attitudes. Thus, revising the FYE class curriculum to include approaches to lowering students' feelings of obligation and/or anxiety in their interpersonal interactions might improve their adjustment to university.

  15. CMAQ model performance enhanced when in-cloud secondary organic aerosol is included: comparisons of organic carbon predictions with measurements.

    PubMed

    Carlton, Annmarie G; Turpin, Barbara I; Altieri, Katye E; Seitzinger, Sybil P; Mathur, Rohit; Roselle, Shawn J; Weber, Rodney J

    2008-12-01

    Mounting evidence suggests that low-volatility (particle-phase) organic compounds form in the atmosphere through aqueous phase reactions in clouds and aerosols. Although some models have begun including secondary organic aerosol (SOA) formation through cloud processing, validation studies that compare predictions and measurements are needed. In this work, agreement between modeled organic carbon (OC) and aircraft measurements of water soluble OC improved for all 5 of the compared ICARTT NOAA-P3 flights during August when an in-cloud SOA (SOAcld) formation mechanism was added to CMAQ (a regional-scale atmospheric model). The improvement was most dramatic for the August 14th flight, a flight designed specifically to investigate clouds. During this flight the normalized mean bias for layer-averaged OC was reduced from -64 to -15% and correlation (r) improved from 0.5 to 0.6. Underpredictions of OC aloft by atmospheric models may be explained, in part, by this formation mechanism (SOAcld). OC formation aloft contributes to long-range pollution transport and has implications for radiative forcing, regional air quality and climate. PMID:19192800
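
    The two statistics quoted here are straightforward to compute for any paired model/observation series; the sketch below uses hypothetical layer-averaged OC values purely for illustration.

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB (%) = 100 * sum(model - obs) / sum(obs)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * (model - obs).sum() / obs.sum()

def pearson_r(model, obs):
    return np.corrcoef(model, obs)[0, 1]

# Hypothetical layer-averaged OC values (ug C/m3), for illustration only.
obs   = [2.1, 3.4, 1.8, 2.9, 4.0]
model = [1.6, 3.0, 1.5, 2.6, 3.3]
print(f"NMB = {normalized_mean_bias(model, obs):.1f}%, r = {pearson_r(model, obs):.2f}")
```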

  16. CMAQ model performance enhanced when in-cloud secondary organic aerosol is included: comparisons of organic carbon predictions with measurements.

    PubMed

    Carlton, Annmarie G; Turpin, Barbara I; Altieri, Katye E; Seitzinger, Sybil P; Mathur, Rohit; Roselle, Shawn J; Weber, Rodney J

    2008-12-01

    Mounting evidence suggests that low-volatility (particle-phase) organic compounds form in the atmosphere through aqueous phase reactions in clouds and aerosols. Although some models have begun including secondary organic aerosol (SOA) formation through cloud processing, validation studies that compare predictions and measurements are needed. In this work, agreement between modeled organic carbon (OC) and aircraft measurements of water soluble OC improved for all 5 of the compared ICARTT NOAA-P3 flights during August when an in-cloud SOA (SOAcld) formation mechanism was added to CMAQ (a regional-scale atmospheric model). The improvement was most dramatic for the August 14th flight, a flight designed specifically to investigate clouds. During this flight the normalized mean bias for layer-averaged OC was reduced from -64 to -15% and correlation (r) improved from 0.5 to 0.6. Underpredictions of OC aloft by atmospheric models may be explained, in part, by this formation mechanism (SOAcld). OC formation aloft contributes to long-range pollution transport and has implications for radiative forcing, regional air quality and climate.

  17. Including Thermal Fluctuations in Actomyosin Stable States Increases the Predicted Force per Motor and Macroscopic Efficiency in Muscle Modelling

    PubMed Central

    2016-01-01

    Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630
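
    The "simple expression for the detailed balance" discussed above ties the ratio of forward and backward rate constants to the elastic-energy change between wells. A minimal two-well kinetic Monte Carlo sketch (attempt rate, energy difference and attachment lifetime are illustrative assumptions) makes the jump-counting idea concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 4.1e-21          # J, thermal energy at ~300 K

def jump_rates(dE, k0=1.0e3):
    """Forward/backward rates between two wells obeying detailed balance:
    k_fw / k_bw = exp(-dE / kT), with dE the elastic-energy change (J).
    k0 is an assumed attempt rate (1/s)."""
    k_fw = k0 * np.exp(-dE / (2 * kT))
    k_bw = k0 * np.exp(+dE / (2 * kT))
    return k_fw, k_bw

def count_jumps(dE, t_attached=0.02):
    """Count stochastic jumps during an assumed attachment lifetime (s)."""
    t, state, jumps = 0.0, 0, 0
    k_fw, k_bw = jump_rates(dE)
    while True:
        rate = k_fw if state == 0 else k_bw
        dt = rng.exponential(1.0 / rate)
        if t + dt > t_attached:
            break
        t += dt
        state = 1 - state
        jumps += 1
    return jumps

print("jumps during one attachment:", count_jumps(dE=2 * kT))
```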

  18. A proposed model to include a residual NAPL saturation in a hysteretic capillary pressure-saturation relationship.

    PubMed

    Van Geel, P J; Roy, S D

    2002-09-01

    A residual non-aqueous phase liquid (NAPL) present in the vadose zone can act as a contaminant source for many years as the compounds of concern partition to infiltrating groundwater and air contained in the soil voids. Current pressure-saturation-relative permeability relationships do not include a residual NAPL saturation term in their formulation. This paper presents the results of a series of two- and three-phase pressure cell experiments conducted to evaluate the residual NAPL saturation and its impact on the pressure-saturation relationship. A model was proposed to incorporate a residual NAPL saturation term into an existing hysteretic three-phase parametric model developed by Parker and Lenhard [Water Resour. Res. 23(12) (1987) 2187], Lenhard and Parker [Water Resour. Res. 23(12) (1987) 2197] and Lenhard [J. Contam. Hydrol. 9 (1992) 243]. The experimental results indicated that the magnitude of the residual NAPL saturation was a function of the maximum total liquid saturation reached and the water saturation. The proposed model to incorporate a residual NAPL saturation term is similar in form to the entrapment model proposed by Parker and Lenhard, which was based on an expression presented by Land [Soc. Pet. Eng. J. (June 1968) 149]. PMID:12236556
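
    The Land-type relation underlying the Parker and Lenhard entrapment model can be sketched as follows; the functional form is the classical Land expression, and the saturation values are illustrative assumptions rather than the fitted parameters of the proposed residual-NAPL model.

```python
def land_trapped_saturation(s_initial, s_trapped_max, s_initial_max=1.0):
    """Land-type trapping relation (the form used by Parker & Lenhard for
    entrapment):

        S_t = S_i / (1 + C * S_i),  C = 1/S_t_max - 1/S_i_max

    where S_i is the maximum effective saturation of the non-wetting phase
    that was reached and S_t the saturation left behind on re-imbibition.
    Values below are illustrative assumptions.
    """
    C = 1.0 / s_trapped_max - 1.0 / s_initial_max
    return s_initial / (1.0 + C * s_initial)

# Example: NAPL reached an effective saturation of 0.6; maximum trappable 0.25.
print(round(land_trapped_saturation(0.6, 0.25), 3))
```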

  19. HPTAM, a two-dimensional Heat Pipe Transient Analysis Model, including the startup from a frozen state

    NASA Technical Reports Server (NTRS)

    Tournier, Jean-Michel; El-Genk, Mohamed S.

    1995-01-01

    A two-dimensional Heat Pipe Transient Analysis Model, 'HPTAM,' was developed to simulate the transient operation of fully-thawed heat pipes and the startup of heat pipes from a frozen state. The model incorporates: (a) sublimation and resolidification of working fluid; (b) melting and freezing of the working fluid in the porous wick; (c) evaporation of thawed working fluid and condensation as a thin liquid film on a frozen substrate; (d) free-molecule, transition, and continuum vapor flow regimes, using the Dusty Gas Model; (e) liquid flow and heat transfer in the porous wick; and (f) thermal and hydrodynamic couplings of phases at their respective interfaces. HPTAM predicts the radius of curvature of the liquid meniscus at the liquid-vapor interface and the radial location of the working fluid level (liquid or solid) in the wick. It also includes the transverse momentum jump condition (capillary relationship of Pascal) at the liquid-vapor interface and geometrically relates the radius of curvature of the liquid meniscus to the volume fraction of vapor in the wick. The present model predicts the capillary limit and partial liquid recess (dryout) in the evaporator wick, and incorporates a liquid pooling submodel, which simulates accumulation of the excess liquid in the vapor core at the condenser end.
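
    The "capillary relationship of Pascal" invoked at the liquid-vapor interface is the pressure jump across a curved meniscus; a short sketch with rough, assumed properties for a liquid-metal working fluid shows how the capillary pressure follows from the meniscus radius of curvature.

```python
def capillary_pressure(surface_tension, meniscus_radius):
    """Pressure jump across a spherical liquid-vapor meniscus: dP = 2*sigma/r."""
    return 2.0 * surface_tension / meniscus_radius

# Rough illustrative values for a liquid-metal working fluid in a fine wick pore.
sigma = 0.16      # N/m, assumed surface tension
r_men = 25e-6     # m, assumed meniscus radius of curvature
print(f"capillary pressure ~ {capillary_pressure(sigma, r_men):.0f} Pa")
```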

  20. Laboratory Studies of the Reactive Chemistry and Changing CCN Properties of Secondary Organic Aerosol, Including Model Development

    SciTech Connect

    Scot Martin

    2013-01-31

    The chemical evolution of secondary-organic-aerosol (SOA) particles and how this evolution alters their cloud-nucleating properties were studied. Simplified forms of full Koehler theory were targeted, specifically forms that contain only those aspects essential to describing the laboratory observations, because of the requirement to minimize computational burden for use in integrated climate and chemistry models. The associated data analysis and interpretation have therefore focused on model development in the framework of modified kappa-Koehler theory. Kappa is a single parameter describing effective hygroscopicity, grouping together several separate physicochemical parameters (e.g., molar volume, surface tension, and van't Hoff factor) that otherwise must be tracked and evaluated in an iterative full-Koehler equation in a large-scale model. A major finding of the project was that secondary organic materials produced by the oxidation of a range of biogenic volatile organic compounds for diverse conditions have kappa values bracketed in the range of 0.10 +/- 0.05. In these same experiments, somewhat incongruently there was significant chemical variation in the secondary organic material, especially oxidation state, as was indicated by changes in the particle mass spectra. Taken together, these findings then support the use of kappa as a simplified yet accurate general parameter to represent the CCN activation of secondary organic material in large-scale atmospheric and climate models, thereby greatly reducing the computational burden while simultaneously including the most recent mechanistic findings of laboratory studies.
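
    The single-parameter kappa representation corresponds to the kappa-Koehler equation; the sketch below evaluates the equilibrium saturation ratio for an assumed 100 nm dry particle with kappa = 0.10 and standard water properties, and reads off the critical supersaturation numerically.

```python
import numpy as np

def kappa_kohler_saturation(D, D_dry, kappa, T=298.15,
                            sigma=0.072, M_w=0.018, rho_w=1000.0, R=8.314):
    """Equilibrium saturation ratio over a wet particle of diameter D (m)
    grown from dry diameter D_dry (m), per kappa-Koehler theory:

        S = (D^3 - Dd^3) / (D^3 - Dd^3*(1 - kappa)) * exp(4*sigma*M_w/(R*T*rho_w*D))
    """
    solute = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1.0 - kappa))
    kelvin = np.exp(4.0 * sigma * M_w / (R * T * rho_w * D))
    return solute * kelvin

# Critical supersaturation for an assumed 100 nm dry particle with kappa = 0.10
D_dry = 100e-9
D_wet = np.linspace(1.001 * D_dry, 3e-6, 50000)
S = kappa_kohler_saturation(D_wet, D_dry, kappa=0.10)
print(f"critical supersaturation ~ {(S.max() - 1.0) * 100:.2f}%")
```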

  1. A non-linear mathematical model for dynamic analysis of spur gears including shaft and bearing dynamics

    NASA Technical Reports Server (NTRS)

    Ozguven, H. Nevzat

    1991-01-01

    A six-degree-of-freedom nonlinear semi-definite model with time varying mesh stiffness has been developed for the dynamic analysis of spur gears. The model includes a spur gear pair, two shafts, two inertias representing load and prime mover, and bearings. As the shaft and bearing dynamics have also been considered in the model, the effect of lateral-torsional vibration coupling on the dynamics of gears can be studied. In the nonlinear model developed several factors such as time varying mesh stiffness and damping, separation of teeth, backlash, single- and double-sided impacts, various gear errors and profile modifications have been considered. The dynamic response to internal excitation has been calculated by using the 'static transmission error method' developed. The software prepared (DYTEM) employs the digital simulation technique for the solution, and is capable of calculating dynamic tooth and mesh forces, dynamic factors for pinion and gear, dynamic transmission error, dynamic bearing forces and torsions of shafts. Numerical examples are given in order to demonstrate the effect of shaft and bearing dynamics on gear dynamics.
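
    A drastically reduced, torsional-only sketch of a gear pair with time-varying mesh stiffness and backlash illustrates the class of nonlinearity the model handles; the single degree of freedom and all numerical values are assumptions for illustration, not the six-degree-of-freedom DYTEM formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not DYTEM values)
m_eq   = 0.5          # kg, equivalent mesh mass
k_mean = 2.0e8        # N/m, mean mesh stiffness
k_amp  = 0.4e8        # N/m, stiffness fluctuation amplitude
c      = 500.0        # N s/m, mesh damping
b      = 50e-6        # m, half backlash
F_mean = 1000.0       # N, mean transmitted mesh force
w_mesh = 2.0 * np.pi * 800.0   # rad/s, mesh frequency

def backlash_force(x, k):
    """Piecewise-linear tooth contact: no force inside the backlash gap."""
    if x > b:
        return k * (x - b)
    if x < -b:
        return k * (x + b)
    return 0.0

def rhs(t, y):
    x, v = y
    k = k_mean + k_amp * np.cos(w_mesh * t)   # time-varying mesh stiffness
    a = (F_mean - c * v - backlash_force(x, k)) / m_eq
    return [v, a]

sol = solve_ivp(rhs, (0.0, 0.05), [0.0, 0.0], max_step=1e-5)
print("max dynamic mesh deflection (um):", 1e6 * sol.y[0].max())
```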

  2. HPTAM, a two-dimensional Heat Pipe Transient Analysis Model, including the startup from a frozen state

    NASA Astrophysics Data System (ADS)

    Tournier, Jean-Michel; El-Genk, Mohamed S.

    1995-09-01

    A two-dimensional Heat Pipe Transient Analysis Model, 'HPTAM,' was developed to simulate the transient operation of fully-thawed heat pipes and the startup of heat pipes from a frozen state. The model incorporates: (a) sublimation and resolidification of working fluid; (b) melting and freezing of the working fluid in the porous wick; (c) evaporation of thawed working fluid and condensation as a thin liquid film on a frozen substrate; (d) free-molecule, transition, and continuum vapor flow regimes, using the Dusty Gas Model; (e) liquid flow and heat transfer in the porous wick; and (f) thermal and hydrodynamic couplings of phases at their respective interfaces. HPTAM predicts the radius of curvature of the liquid meniscus at the liquid-vapor interface and the radial location of the working fluid level (liquid or solid) in the wick. It also includes the transverse momentum jump condition (capillary relationship of Pascal) at the liquid-vapor interface and geometrically relates the radius of curvature of the liquid meniscus to the volume fraction of vapor in the wick. The present model predicts the capillary limit and partial liquid recess (dryout) in the evaporator wick, and incorporates a liquid pooling submodel, which simulates accumulation of the excess liquid in the vapor core at the condenser end.

  3. Including Thermal Fluctuations in Actomyosin Stable States Increases the Predicted Force per Motor and Macroscopic Efficiency in Muscle Modelling.

    PubMed

    Marcucci, Lorenzo; Washio, Takumi; Yanagida, Toshio

    2016-09-01

    Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630

  4. Modeling the significance of including C redistribution when determining changes in net carbon storage along a cultivated toposequence

    NASA Astrophysics Data System (ADS)

    Chirinda, Ngonidzashe; Olesen, Jørgen E.; Heckrath, Goswin; Paradelo Pérez, Marcos; Taghizadeh-Toosi, Arezoo

    2016-04-01

    Globally, soil carbon (C) reserves are second only to those in the ocean, and account for a significant C reservoir. In the case of arable soils, the quantity of stored C is influenced by various factors (e.g. management practices). Currently, the topography-related influences on in-field soil C dynamics remain largely unknown. However, topography is known to influence a multiplicity of factors that regulate C input, storage and redistribution. To understand the patterns and untangle the complexity of soil C dynamics in arable landscapes, our study was conducted with soils from shoulderslope and footslope positions on a 7.1 ha winter wheat field in western Denmark. We first collected soil samples from shoulderslope and footslope positions with various depth intervals down to 100 cm and analyzed them for physical and chemical properties including texture and soil organic C contents. In-situ carbon dioxide (CO2) concentrations were measured at different soil profile depths at both positions for a year. Soil moisture content and temperature at 5 and 40 cm depth were measured continuously. Additionally, surface soil CO2 fluxes at shoulderslope and footslope positions were measured. We then used measurement data collected from the two landscape positions to calibrate the one-dimensional mechanistic model SOILCO2 module of the HYDRUS-1D software package and obtained soil CO2 fluxes from the soil profile at the two landscape positions. Furthermore, we tested whether the inclusion of vertical and lateral soil C movement improved the modeling of C dynamics in cultivated landscapes. For that, soil profile CO2 fluxes were compared with those obtained using a simple process-based soil whole profile C model, C-TOOL, which was modified to include vertical and lateral movement of C on the landscape. Our results highlight the need to consider vertical and lateral soil C movement in the modeling of C dynamics in cultivated landscapes, for better quantification of net carbon storage.

  5. GIS-based models for water quantity and quality assessment in the Júcar River Basin, Spain, including climate change effects.

    PubMed

    Ferrer, Javier; Pérez-Martín, Miguel A; Jiménez, Sara; Estrela, Teodoro; Andreu, Joaquín

    2012-12-01

    This paper describes two different GIS models - one stationary (GeoImpress) and the other non-stationary (Patrical) - that assess water quantity and quality in the Júcar River Basin District, a large river basin district (43,000km(2)) located in Spain. It aims to analyze the status of surface water (SW) and groundwater (GW) bodies in relation to the European Water Framework Directive (WFD) and to support measures to achieve the WFD objectives. The non-stationary model is used for quantitative analysis of water resources, including long-term water resource assessment; estimation of available GW resources; and evaluation of climate change impact on water resources. The main results obtained are the following: recent water resources have been reduced by approximately 18% compared to the reference period 1961-1990; the GW environmental volume required to accomplish the WFD objectives is approximately 30% of the GW annual resources; and the climate change impact on water resources for the short-term (2010-2040), based on a dynamic downscaling A1B scenario, implies a reduction in water resources by approximately 19% compared to 1990-2000 and a reduction of approximately 40-50% for the long-term (2070-2100), based on dynamic downscaling A2 and B2 scenarios. The model also assesses the impact of various fertilizer application scenarios on the status of future GW quality (nitrate) and if these future statuses will meet the WFD requirements. The stationary model generates data on the actual and future chemical status of SW bodies in the river basin according to the modeled scenarios and reflects the implementation of different types of measures to accomplish the Urban Waste Water Treatment Directive and the WFD. Finally, the selection and prioritization of additional measures to accomplish the WFD are based on cost-effectiveness analysis.

  6. GIS-based models for water quantity and quality assessment in the Júcar River Basin, Spain, including climate change effects.

    PubMed

    Ferrer, Javier; Pérez-Martín, Miguel A; Jiménez, Sara; Estrela, Teodoro; Andreu, Joaquín

    2012-12-01

    This paper describes two different GIS models - one stationary (GeoImpress) and the other non-stationary (Patrical) - that assess water quantity and quality in the Júcar River Basin District, a large river basin district (43,000km(2)) located in Spain. It aims to analyze the status of surface water (SW) and groundwater (GW) bodies in relation to the European Water Framework Directive (WFD) and to support measures to achieve the WFD objectives. The non-stationary model is used for quantitative analysis of water resources, including long-term water resource assessment; estimation of available GW resources; and evaluation of climate change impact on water resources. The main results obtained are the following: recent water resources have been reduced by approximately 18% compared to the reference period 1961-1990; the GW environmental volume required to accomplish the WFD objectives is approximately 30% of the GW annual resources; and the climate change impact on water resources for the short-term (2010-2040), based on a dynamic downscaling A1B scenario, implies a reduction in water resources by approximately 19% compared to 1990-2000 and a reduction of approximately 40-50% for the long-term (2070-2100), based on dynamic downscaling A2 and B2 scenarios. The model also assesses the impact of various fertilizer application scenarios on the status of future GW quality (nitrate) and if these future statuses will meet the WFD requirements. The stationary model generates data on the actual and future chemical status of SW bodies in the river basin according to the modeled scenarios and reflects the implementation of different types of measures to accomplish the Urban Waste Water Treatment Directive and the WFD. Finally, the selection and prioritization of additional measures to accomplish the WFD are based on cost-effectiveness analysis. PMID:22959072

  7. Computational Modeling of Open-Irrigated Electrodes for Radiofrequency Cardiac Ablation Including Blood Motion-Saline Flow Interaction.

    PubMed

    González-Suárez, Ana; Berjano, Enrique; Guerra, Jose M; Gerardo-Giorda, Luca

    2016-01-01

    Radiofrequency catheter ablation (RFCA) is a routine treatment for cardiac arrhythmias. During RFCA, the electrode-tissue interface temperature should be kept below 80 °C to avoid thrombus formation. Open-irrigated electrodes facilitate power delivery while keeping low temperatures around the catheter. No computational model of an open-irrigated electrode in endocardial RFCA accounting for both the saline irrigation flow and the blood motion in the cardiac chamber has been proposed yet. We present the first computational model including both effects at once. The model has been validated against existing experimental results. Computational results showed that the surface lesion width and blood temperature are affected by both the electrode design and the irrigation flow rate. Smaller surface lesion widths and blood temperatures are obtained with higher irrigation flow rate, while the lesion depth is not affected by changing the irrigation flow rate. Larger lesions are obtained with increasing power and the electrode-tissue contact. Also, larger lesions are obtained when electrode is placed horizontally. Overall, the computational findings are in close agreement with previous experimental results providing an excellent tool for future catheter research.

  8. Computational Modeling of Open-Irrigated Electrodes for Radiofrequency Cardiac Ablation Including Blood Motion-Saline Flow Interaction

    PubMed Central

    González-Suárez, Ana; Berjano, Enrique; Guerra, Jose M.; Gerardo-Giorda, Luca

    2016-01-01

    Radiofrequency catheter ablation (RFCA) is a routine treatment for cardiac arrhythmias. During RFCA, the electrode-tissue interface temperature should be kept below 80°C to avoid thrombus formation. Open-irrigated electrodes facilitate power delivery while keeping low temperatures around the catheter. No computational model of an open-irrigated electrode in endocardial RFCA accounting for both the saline irrigation flow and the blood motion in the cardiac chamber has been proposed yet. We present the first computational model including both effects at once. The model has been validated against existing experimental results. Computational results showed that the surface lesion width and blood temperature are affected by both the electrode design and the irrigation flow rate. Smaller surface lesion widths and blood temperatures are obtained with higher irrigation flow rate, while the lesion depth is not affected by changing the irrigation flow rate. Larger lesions are obtained with increasing power and the electrode-tissue contact. Also, larger lesions are obtained when electrode is placed horizontally. Overall, the computational findings are in close agreement with previous experimental results providing an excellent tool for future catheter research. PMID:26938638

  9. Computational Modeling of Open-Irrigated Electrodes for Radiofrequency Cardiac Ablation Including Blood Motion-Saline Flow Interaction.

    PubMed

    González-Suárez, Ana; Berjano, Enrique; Guerra, Jose M; Gerardo-Giorda, Luca

    2016-01-01

    Radiofrequency catheter ablation (RFCA) is a routine treatment for cardiac arrhythmias. During RFCA, the electrode-tissue interface temperature should be kept below 80 °C to avoid thrombus formation. Open-irrigated electrodes facilitate power delivery while keeping low temperatures around the catheter. No computational model of an open-irrigated electrode in endocardial RFCA accounting for both the saline irrigation flow and the blood motion in the cardiac chamber has been proposed yet. We present the first computational model including both effects at once. The model has been validated against existing experimental results. Computational results showed that the surface lesion width and blood temperature are affected by both the electrode design and the irrigation flow rate. Smaller surface lesion widths and blood temperatures are obtained with higher irrigation flow rate, while the lesion depth is not affected by changing the irrigation flow rate. Larger lesions are obtained with increasing power and the electrode-tissue contact. Also, larger lesions are obtained when electrode is placed horizontally. Overall, the computational findings are in close agreement with previous experimental results providing an excellent tool for future catheter research. PMID:26938638

  10. Modelled hydraulic redistribution by sunflower (Helianthus annuus L.) matches observed data only after including night-time transpiration.

    PubMed

    Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele

    2014-04-01

    The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR.
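
    A toy flux expression in the spirit of the formulation discussed above is sketched below: redistribution between layers is driven by the water-potential difference through a root conductance, and night-time transpiration diverts part of that root flow to the canopy. The functional form and numbers are assumptions for illustration, not the authors' calibrated model.

```python
def hr_flux(psi_wet, psi_dry, c_root=0.1, night_transpiration_frac=0.0):
    """Toy hydraulic-redistribution flux (cm/h) between two soil layers.

    Flux ~ root conductance * water-potential difference (MPa), reduced by
    the fraction of root water that night-time transpiration diverts to the
    canopy instead of the dry layer. Purely illustrative; the Ryel et al.
    formulation and the authors' night-time extension are more detailed.
    """
    gradient_driven = c_root * (psi_wet - psi_dry)
    return gradient_driven * (1.0 - night_transpiration_frac)

print("no night-time transpiration:", hr_flux(-0.1, -1.2))
print("40% diverted at night      :", hr_flux(-0.1, -1.2, night_transpiration_frac=0.4))
```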

  11. Evaluation and optimization of a micro-tubular solid oxide fuel cell stack model including an integrated cooling system

    NASA Astrophysics Data System (ADS)

    Hering, Martin; Brouwer, Jacob; Winkler, Wolfgang

    2016-01-01

    A micro-tubular solid oxide fuel cell stack model including an integrated cooling system was developed using a quasi three-dimensional, spatially resolved, transient thermodynamic, physical and electrochemical model that accounts for the complex geometrical relations between the cells and cooling-tubes. For the purpose of model evaluation, reference operating, geometrical and material properties are determined. The reference stack design is composed of 3294 cells, with a diameter of 2 mm, and 61 cooling-tubes. The stack is operated at a power density of 300 mW/cm2 and air is used as the cooling fluid inside the integrated cooling system. Regarding the performance, the reference design achieves an electrical stack efficiency of around 57% and a power output of 1.1 kW. The maximum occurring temperature of the positive electrode electrolyte negative electrode (PEN)-structure is 1369 K. As a result of a design of experiments, parameters of a best-case design are determined. The best-case design achieves a comparable power output of 1.1 kW with an electrical efficiency of 63% and a maximum occurring temperature of the PEN-structure of 1268 K. Nevertheless, the best-case design has an increased volume based on the higher diameter of 3 mm and increased spacing between the cells.

  12. Including local rainfall dynamics and uncertain boundary conditions into a 2-D regional-local flood modelling cascade

    NASA Astrophysics Data System (ADS)

    Bermúdez, María; Neal, Jeffrey C.; Bates, Paul D.; Coxon, Gemma; Freer, Jim E.; Cea, Luis; Puertas, Jerónimo

    2016-04-01

    Flood inundation models require appropriate boundary conditions to be specified at the limits of the domain, which commonly consist of upstream flow rate and downstream water level. These data are usually acquired from gauging stations on the river network where measured water levels are converted to discharge via a rating curve. Derived streamflow estimates are therefore subject to uncertainties in this rating curve, including extrapolating beyond the maximum observed ratings magnitude. In addition, the limited number of gauges in reach-scale studies often requires flow to be routed from the nearest upstream gauge to the boundary of the model domain. This introduces additional uncertainty, derived not only from the flow routing method used, but also from the additional lateral rainfall-runoff contributions downstream of the gauging point. Although generally assumed to have a minor impact on discharge in fluvial flood modeling, this local hydrological input may become important in a sparse gauge network or in events with significant local rainfall. In this study, a method to incorporate rating curve uncertainty and the local rainfall-runoff dynamics into the predictions of a reach-scale flood inundation model is proposed. Discharge uncertainty bounds are generated by applying a non-parametric local weighted regression approach to stage-discharge measurements for two gauging stations, while measured rainfall downstream from these locations is cascaded into a hydrological model to quantify additional inflows along the main channel. A regional simplified-physics hydraulic model is then applied to combine these inputs and generate an ensemble of discharge and water elevation time series at the boundaries of a local-scale high complexity hydraulic model. Finally, the effect of these rainfall dynamics and uncertain boundary conditions are evaluated on the local-scale model. Improvements in model performance when incorporating these processes are quantified using observed
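
    The non-parametric local weighted regression of stage-discharge gaugings can be prototyped with LOWESS, with crude uncertainty bounds taken from the residual spread; the gauging data, bandwidth and two-sigma bounds below are assumptions for illustration.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)

# Hypothetical stage (m) - discharge (m3/s) gaugings, for illustration only.
stage = np.sort(rng.uniform(0.2, 3.0, 80))
discharge = 12.0 * stage**1.8 * rng.lognormal(0.0, 0.08, stage.size)

# Local weighted regression of discharge on stage (rating-curve estimate).
fit = lowess(discharge, stage, frac=0.4, return_sorted=True)
resid = discharge - np.interp(stage, fit[:, 0], fit[:, 1])

# Crude uncertainty bounds from the residual spread (assumed +/- 2 sigma).
sigma = resid.std()
upper = fit[:, 1] + 2.0 * sigma
lower = fit[:, 1] - 2.0 * sigma
print(f"rated Q at stage 2.0 m: {np.interp(2.0, fit[:, 0], fit[:, 1]):.1f} m3/s "
      f"(+/- {2.0 * sigma:.1f})")
```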

  13. Including the effects of elastic compressibility and volume changes in geodynamical modeling of crust-lithosphere-mantle deformation

    NASA Astrophysics Data System (ADS)

    de Monserrat, Albert; Morgan, Jason P.

    2016-04-01

    Materials in Earth's interior are exposed to thermomechanical (e.g. variations in stress/pressure and temperature) and chemical (e.g. phase changes, serpentinization, melting) processes that are associated with volume changes. Most geodynamical codes assume the incompressible Boussinesq approximation, where changes in density due to temperature or phase change effect buoyancy, yet volumetric changes are not allowed, and mass is not locally conserved. Elastic stresses induced by volume changes due to thermal expansion, serpentinization, and melt intrusion should cause 'cold' rocks to brittlely fail at ~1% strain. When failure/yielding is an important rheological feature, we think it plausible that volume-change-linked stresses may have a significant influence on the localization of deformation. Here we discuss a new Lagrangian formulation for "elasto-compressible -visco-plastic" flow. In this formulation, the continuity equation has been generalised from a Boussinesq incompressible formulation to include recoverable, elastic, volumetric deformations linked to the local state of mean compressive stress. This formulation differs from the 'anelastic approximation' used in compressible viscous flow in that pressure- and temperature- dependent volume changes are treated as elastic deformation for a given pressure, temperature, and composition/phase. This leads to a visco-elasto-plastic formulation that can model the effects of thermal stresses, pressure-dependent volume changes, and local phase changes. We use a modified version of the (Miliman-based) FEM code M2TRI to run a set of numerical experiments for benchmarking purposes. Three benchmarks are being used to assess the accuracy of this formulation: (1) model the effects on density of a compressible mantle under the influence of gravity; (2) model the deflection of a visco-elastic beam under the influence of gravity, and its recovery when gravitational loading is artificially removed; (3) Modelling the stresses

  14. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
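
    The workflow that SIM_ADJUST automates can be prototyped in a few lines; the sketch below assumes a hypothetical file layout, column order and default-value sentinel (the real code is Fortran90 built on JUPITER API modules, not this Python).

```python
import csv

DEFAULT_FLAG = -999.0   # assumed sentinel for "model could not produce a value"

def read_expected(path):
    """Expected observation names with ordered fallback alternatives.
    Hypothetical format: name, alt1, alt2, ...  (alts are names or constants)."""
    with open(path, newline="") as f:
        return {row[0]: row[1:] for row in csv.reader(f) if row}

def read_simulated(path):
    """Process-model output: space/tab-delimited 'value name' columns."""
    sims = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                sims[parts[1]] = float(parts[0])
    return sims

def adjust(expected, simulated):
    """Return {name: value}, substituting alternatives where the simulated
    value is missing or flagged as a default."""
    out = {}
    for name, alts in expected.items():
        value = simulated.get(name, DEFAULT_FLAG)
        for alt in alts:
            if value != DEFAULT_FLAG:
                break
            if alt in simulated:
                value = simulated[alt]       # alternative simulated value
            else:
                try:
                    value = float(alt)       # constant fallback
                except ValueError:
                    pass                     # unknown name: keep looking
        out[name] = value
    return out
```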

  15. Relationship between efficiency and clinical effectiveness indicators in an adjusted model of resource consumption: a cross-sectional study

    PubMed Central

    2013-01-01

    Background Adjusted clinical groups (ACG®) have been widely used to adjust resource distribution; however, the relationship with effectiveness has been questioned. The purpose of the study was to measure the relationship between efficiency assessed by ACG® and a clinical effectiveness indicator in adults attended in Primary Health Care Centres (PHCs). Methods Research design: cross-sectional study. Subjects: 196,593 patients aged >14 years in 13 PHCs in Catalonia (Spain). Measures: Age, sex, PHC, basic care team (BCT), visits, episodes (diagnoses), and total direct costs of PHC care and co-morbidity as measured by ACG® indicators: Efficiency indices for costs, visits, and episodes (costs EI, visits EI, episodes EI); a complexity or risk index (RI); and effectiveness measured by a general synthetic index (SI). The relationship between EI, RI, and SI in each PHC and BCT was measured by multiple correlation coefficients (r). Results In total, 56 of the 106 defined ACG® were present in the study population, with five corresponding to 44.5% of the patients, 11 to 68.0% of patients, and 30 present in less than 0.5% of the sample. The RI in each PHC ranged from 0.9 to 1.1. Costs, visits, and episodes had similar trends for efficiency in six PHCs. There was moderate correlation between costs EI and visits EI (r = 0.59). SI correlation with episodes EI and costs EI was moderate (r = 0.48 and r = −0.34, respectively) and was r = −0.14 for visits EI. Correlation between RI and SI was r = 0.29. Conclusions The Efficiency and Effectiveness ACG® indicators permit a comparison of primary care processes between PHCs. Acceptable correlation exists between effectiveness and indicators of efficiency in episodes and costs. PMID:24139144

  16. Burden of Six Healthcare-Associated Infections on European Population Health: Estimating Incidence-Based Disability-Adjusted Life Years through a Population Prevalence-Based Modelling Study

    PubMed Central

    Eckmanns, Tim; Abu Sin, Muna; Ducomble, Tanja; Harder, Thomas; Sixtensson, Madlen; Velasco, Edward; Weiß, Bettina; Kramarz, Piotr; Monnet, Dominique L.; Kretzschmar, Mirjam E.; Suetens, Carl

    2016-01-01

    Background Estimating the burden of healthcare-associated infections (HAIs) compared to other communicable diseases is an ongoing challenge given the need for good quality data on the incidence of these infections and the involved comorbidities. Based on the methodology of the Burden of Communicable Diseases in Europe (BCoDE) project and 2011–2012 data from the European Centre for Disease Prevention and Control (ECDC) point prevalence survey (PPS) of HAIs and antimicrobial use in European acute care hospitals, we estimated the burden of six common HAIs. Methods and Findings The included HAIs were healthcare-associated pneumonia (HAP), healthcare-associated urinary tract infection (HA UTI), surgical site infection (SSI), healthcare-associated Clostridium difficile infection (HA CDI), healthcare-associated neonatal sepsis, and healthcare-associated primary bloodstream infection (HA primary BSI). The burden of these HAIs was measured in disability-adjusted life years (DALYs). Evidence relating to the disease progression pathway of each type of HAI was collected through systematic literature reviews, in order to estimate the risks attributable to HAIs. For each of the six HAIs, gender and age group prevalence from the ECDC PPS was converted into incidence rates by applying the Rhame and Sudderth formula. We adjusted for reduced life expectancy within the hospital population using three severity groups based on McCabe score data from the ECDC PPS. We estimated that 2,609,911 new cases of HAI occur every year in the European Union and European Economic Area (EU/EEA). The cumulative burden of the six HAIs was estimated at 501 DALYs per 100,000 general population each year in EU/EEA. HAP and HA primary BSI were associated with the highest burden and represented more than 60% of the total burden, with 169 and 145 DALYs per 100,000 total population, respectively. HA UTI, SSI, HA CDI, and HA primary BSI ranked as the third to sixth syndromes in terms of burden of disease
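
    The abstract does not spell out the Rhame and Sudderth formula; a commonly cited simple form multiplies prevalence by the ratio of mean length of stay to mean length of infection, and the sketch below uses that form with made-up numbers. Treat both the formula and the values as assumptions.

```python
def prevalence_to_incidence(prevalence, mean_length_of_stay, mean_length_of_infection):
    """Simple Rhame-Sudderth-style conversion (assumed form):

        incidence ~= prevalence * (LA / LOI)

    where LA is the mean length of hospital stay of all patients and LOI the
    mean duration of the infection. Values below are illustrative only."""
    return prevalence * mean_length_of_stay / mean_length_of_infection

# e.g. 1.2% point prevalence of an HAI, LA = 8 days, LOI = 12 days (made up)
print(f"{prevalence_to_incidence(0.012, 8.0, 12.0) * 100:.2f}% incidence per admission")
```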

  17. The FiR 1 photon beam model adjustment according to in-air spectrum measurements with the Mg(Ar) ionization chamber.

    PubMed

    Koivunoro, H; Schmitz, T; Hippeläinen, E; Liu, Y-H; Serén, T; Kotiluoto, P; Auterinen, I; Savolainen, S

    2014-06-01

    The mixed neutron-photon beam of the FiR 1 reactor is used for boron-neutron capture therapy (BNCT) in Finland. A beam model has been defined for patient treatment planning and dosimetric calculations. The neutron beam model has been validated with activation foil measurements. The photon beam model has not been thoroughly validated against measurements, because the beam photon dose rate is low, at most only 2% of the total weighted patient dose at FiR 1. However, improvement of the photon dose detection accuracy is worthwhile, since the beam photon dose is of concern in the beam dosimetry. In this study, we have performed ionization chamber measurements with multiple build-up caps of different thickness to adjust the calculated photon spectrum of the FiR 1 beam model.

  18. Development of a computational framework to adjust the pre-impact spine posture of a whole-body model based on cadaver tests data.

    PubMed

    Poulard, David; Subit, Damien; Donlon, John-Paul; Kent, Richard W

    2015-02-26

    A method was developed to adjust the posture of a human numerical model to match the pre-impact posture of a human subject. The method involves pulling cables to prescribe the position and orientation of the head, spine and pelvis during a simulation. Six postured models matching the pre-impact posture measured on subjects tested in previous studies were created from a human numerical model. Posture scalars were measured before and after applying the method to evaluate its efficiency. The lateral leaning angle θL defined between T1 and the pelvis in the coronal plane was found to be significantly improved after application with an average difference of 0.1±0.1° with the PMHS (4.6±2.7° before application). This method will be applied in further studies to analyze independently the contribution of pre-impact posture on impact response using human numerical models.

  19. Development of a computational framework to adjust the pre-impact spine posture of a whole-body model based on cadaver tests data.

    PubMed

    Poulard, David; Subit, Damien; Donlon, John-Paul; Kent, Richard W

    2015-02-26

    A method was developed to adjust the posture of a human numerical model to match the pre-impact posture of a human subject. The method involves pulling cables to prescribe the position and orientation of the head, spine and pelvis during a simulation. Six postured models matching the pre-impact posture measured on subjects tested in previous studies were created from a human numerical model. Posture scalars were measured before and after applying the method to evaluate its efficiency. The lateral leaning angle θL defined between T1 and the pelvis in the coronal plane was found to be significantly improved after application with an average difference of 0.1±0.1° with the PMHS (4.6±2.7° before application). This method will be applied in further studies to analyze independently the contribution of pre-impact posture on impact response using human numerical models. PMID:25596635

  20. Including swell-shrink dynamics in dual-permeability numerical modeling of preferential water flow and solute transport in soils

    NASA Astrophysics Data System (ADS)

    Coppola, Antonio; Comegna, Alessandro; Gerke, Horst; Basile, Angelo

    2015-04-01

    The classical dual-permeability approach introduced by Gerke and van Genuchten for modeling water flow and solute transport in porous media with preferential flow pathways, was extended to account for shrinking effects on macropore and matrix domain hydraulic properties. Conceptually, the soil is treated as a dual-permeability bulk porous medium consisting of two dynamic interacting pore domains (1) the fracture (from shrinkage) pore domain and (2) the aggregate (interparticles plus structural) or matrix pore domain, respectively. The model assumes that the swell-shrink dynamics is represented by the inversely proportional volume changes of the fracture and matrix domains, while the overall porosity of the total soil, and hence the layer thickness, remains constant. Swell-shrink dynamics was incorporated in the model by either changing the coupled domain-specific hydraulic properties according to the shrinkage characteristics of the matrix, or partly by allowing the fractional contribution of the two domains to change with the pressure head. As a first step, the hysteresis in the swell-shrink dynamics was not included. We also assumed that the aggregate behavior and its hydraulic properties depend only on the average aggregate water content and not on its internal real distribution. Compared to the rigid approach, the combined effect of the changing weight and that of the void ratio on the hydraulic properties in the shrinking approach induce much larger and deeper water and solute transfer from the fractures to the matrix during wetting processes. The analysis shows a systematic underestimation of the wetting front propagation times, as well as of the solute travel times and concentrations when the volume of the aggregate domain is assumed to remain constant. The combined and interacting effects of the dynamic weight and the evolution of matrix pressure head in the shrinking approach is responsible for a bimodal behavior of the water exchange term, which in turn
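
    The first-order fracture-matrix water exchange term of the classical dual-permeability approach, together with a crude pressure-head-dependent fracture volume fraction standing in for the swell-shrink effect, can be sketched as follows; the parameter values and the shrinkage function are assumptions for illustration.

```python
def water_exchange(h_fracture, h_matrix, K_interface, a=0.05, beta=3.0, gamma_w=0.4):
    """First-order fracture-matrix water exchange (Gerke & van Genuchten form):

        Gamma_w = (beta / a**2) * gamma_w * K_interface * (h_f - h_m)

    a: aggregate half-width (m), beta: geometry factor, gamma_w: scaling factor,
    K_interface: interface hydraulic conductivity (m/s). Values are assumptions."""
    alpha_w = (beta / a**2) * gamma_w * K_interface
    return alpha_w * (h_fracture - h_matrix)

def fracture_volume_fraction(h_matrix, w_min=0.02, w_max=0.15, h_ref=-5.0):
    """Crude stand-in for swelling: the fracture fraction shrinks toward w_min
    as the matrix wets up (h -> 0) and grows toward w_max as it dries."""
    wetness = min(max(h_matrix / h_ref, 0.0), 1.0)   # 0 = wet ... 1 = dry
    return w_min + (w_max - w_min) * wetness

print(water_exchange(-0.1, -2.0, K_interface=1e-7))
print(fracture_volume_fraction(-1.0))
```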

  1. Validation of the internalization of the Model Minority Myth Measure (IM-4) and its link to academic performance and psychological adjustment among Asian American adolescents.

    PubMed

    Yoo, Hyung Chol; Miller, Matthew J; Yip, Pansy

    2015-04-01

    There is limited research examining psychological correlates of a uniquely racialized experience of the model minority stereotype faced by Asian Americans. The present study examined the factor structure and fit of the only published measure of the internalization of the model minority myth, the Internalization of the Model Minority Myth Measure (IM-4; Yoo et al., 2010), with a sample of 155 Asian American high school adolescents. We also examined the link between internalization of the model minority myth types (i.e., myth associated with achievement and myth associated with unrestricted mobility) and psychological adjustment (i.e., affective distress, somatic distress, performance difficulty, academic expectations stress), and the potential moderating effect of academic performance (cumulative grade point average). Results suggested the 2-factor model of the IM-4 had an acceptable fit to the data and supported the factor structure using confirmatory factor analyses. Internalizing the model minority myth of achievement related positively to academic expectations stress; however, internalizing the model minority myth of unrestricted mobility related negatively to academic expectations stress, both controlling for gender and academic performance. Finally, academic performance moderated the model minority myth associated with unrestricted mobility and affective distress link and the model minority myth associated with achievement and performance difficulty link. These findings highlight the complex ways in which the model minority myth relates to psychological outcomes.

  2. Effect of Adding McKenzie Syndrome, Centralization, Directional Preference, and Psychosocial Classification Variables to a Risk-Adjusted Model Predicting Functional Status Outcomes for Patients With Lumbar Impairments.

    PubMed

    Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L

    2016-09-01

    Study Design Retrospective cohort. Background Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives To determine effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndromes and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R(2)) and beta coefficients for model variables were estimated. Results Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%), 10 (<1%), and 601 (29%) were excluded due to incomplete psychosocial data, McKenzie classification data, and missing FS at discharge, respectively. The final sample for analyses was 723 (35%). Overall R(2) for the baseline prediction FS model was 0.40. Adding classification variables to the baseline model did not result in significant increases in R(2). McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, overall model R(2) increased to 0.44. Although none of these increases in R(2) were significant, some classification variables were stronger predictors compared with some other variables included in
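
    The incremental-R2 comparison described (baseline risk-adjusted model versus baseline plus a classification variable) can be set up with ordinary least squares; the sketch below uses synthetic data and hypothetical variable names, not the study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 723                      # analysed sample size reported in the abstract

# Synthetic stand-ins: baseline risk-adjustment variables and one
# hypothetical classification variable (e.g., a pain-pattern dummy).
baseline = rng.normal(size=(n, 3))
classification = rng.integers(0, 2, size=(n, 1)).astype(float)
fs_discharge = (baseline @ np.array([3.0, 2.0, 1.0])
                + 0.5 * classification[:, 0]
                + rng.normal(scale=4.0, size=n))

m0 = sm.OLS(fs_discharge, sm.add_constant(baseline)).fit()
m1 = sm.OLS(fs_discharge, sm.add_constant(np.hstack([baseline, classification]))).fit()
print(f"baseline R2 = {m0.rsquared:.3f}, + classification R2 = {m1.rsquared:.3f}, "
      f"delta = {m1.rsquared - m0.rsquared:.3f}")
```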

  3. Effect of Adding McKenzie Syndrome, Centralization, Directional Preference, and Psychosocial Classification Variables to a Risk-Adjusted Model Predicting Functional Status Outcomes for Patients With Lumbar Impairments.

    PubMed

    Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L

    2016-09-01

    Study Design Retrospective cohort. Background Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives To determine effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndromes and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R(2)) and beta coefficients for model variables were estimated. Results Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%), 10 (<1%), and 601 (29%) were excluded due to incomplete psychosocial data, McKenzie classification data, and missing FS at discharge, respectively. The final sample for analyses was 723 (35%). Overall R(2) for the baseline prediction FS model was 0.40. Adding classification variables to the baseline model did not result in significant increases in R(2). McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, overall model R(2) increased to 0.44. Although none of these increases in R(2) were significant, some classification variables were stronger predictors compared with some other variables included in

  4. Quantifying the Earthquake Clustering that Independent Sources with Stationary Rates (as Included in Current Risk Models) Can Produce.

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Nyst, M.; Apel, E. V.; Muir-Wood, R.

    2014-12-01

    The recent Canterbury earthquake sequence (CES) renewed public and academic awareness of the clustered nature of seismicity. Multiple event occurrence in short time and space intervals is reminiscent of aftershock sequences, but aftershock is a statistical definition, not a label one can give an earthquake in real time. Aftershocks are defined collectively as what creates the Omori event rate decay after a large event, or as what is taken away as "dependent events" by a declustering method. It is noteworthy that, depending on the declustering method used on the Canterbury earthquake sequence, the number of independent events varies considerably. This lack of an unambiguous definition of aftershocks leads to the need to investigate the amount of clustering inherent in "declustered" risk models. This is the task we concentrate on in this contribution. We start from a background source model for the Canterbury region in which 1) centroids of events of a given magnitude are distributed using a Latin hypercube lattice, 2) orientations follow the range of preferential orientations determined from stress maps and focal mechanisms, 3) lengths are determined using the local scaling relationship, and 4) rates come from a and b values derived from the declustered pre-2010 catalog. We then proceed to create tens of thousands of realizations of 6 to 20 year periods, and we define criteria to identify which successions of events in the region would be perceived as a sequence. Note that the spatial clustering expected is at the lower end compared with a fully uniform distribution of events. Then we perform the same exercise with rates and b-values determined from the catalog including the CES. If the pre-2010 catalog was long (or rich) enough, then the "stationary" rates calculated from it would include the CES declustered events (by construction, regardless of the physical meaning of or relationship between those events). In regions of low seismicity rate (e.g., Canterbury before
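
    For readers unfamiliar with stationary-rate background models, the following minimal sketch draws one synthetic catalog realization from a Gutenberg-Richter magnitude-frequency law with Poissonian occurrence times. The a and b values, cutoff magnitude, and period length are hypothetical, not the paper's Canterbury parameters, and the spatial (Latin hypercube) placement step is omitted.

        import numpy as np

        rng = np.random.default_rng(1)
        a_val, b_val, m_min = 3.5, 1.0, 4.0          # hypothetical G-R a/b values and magnitude cutoff
        years = 20
        annual_rate = 10 ** (a_val - b_val * m_min)  # expected number of M >= m_min events per year
        n_events = rng.poisson(annual_rate * years)  # one stationary-rate catalog realization
        # G-R implies magnitudes above m_min are exponentially distributed with rate b*ln(10)
        mags = m_min + rng.exponential(scale=1.0 / (b_val * np.log(10)), size=n_events)
        times = np.sort(rng.uniform(0.0, years, size=n_events))  # Poissonian (unclustered) occurrence times
        print(n_events, mags.round(2))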

  5. Results of the oxygen Fick method in a closed blood circulation model including "total arteriovenous diffusive shunt of oxygen".

    PubMed

    Ozbek, Mustafa; Akay, Ahmet

    2004-09-01

    It is considered that arteriovenous diffusive shunts of oxygen may cause inaccuracy of the oxygen Fick method, in which cardiac output is computed as Q = VO(2)/[CaO(2) - CvO(2)], where VO(2) is the pulmonary oxygen uptake, Q is the cardiac output, and CaO(2) and CvO(2) are the arterial and venous oxygen contents, respectively. A simple circulation model, including the whole circulation with nine well-mixed compartments (C1, ..., C9), is constructed: the cardiac output is held constant at 6000 ml min(-1), and blood portions of 60 ml move at intervals of 600 ms. The C1 and C2 compartments, each with a volume of 60 ml, represent the blood of the pulmonary microcirculation; C3 represents the arterial blood, with a volume of 1500 ml; C4, ..., C8, each also with a volume of 60 ml, represent the blood of the peripheral microcirculation; and C9 represents the venous blood, with a volume of 3000 ml. The pulmonary oxygen uptake (related to C1 and C2), the oxygen release (related to C4, ..., C8), and a "total arteriovenous diffusive shunt of oxygen" from the arterial blood (C3) to the venous blood (C9) are calculated simultaneously. The alveolar gas has a constant oxygen partial pressure and the pulmonary diffusion capacity is also constant; similar to the modeling of pulmonary oxygen diffusion, constant oxygen partial pressures for all peripheral tissues as well as constant diffusion capacities for all peripheral oxygen diffusion are also assigned. The diffusion capacities for the shunt (between C3 and C9) are arbitrarily assigned. The Fick method gives incorrect results depending on the total arteriovenous diffusive shunt of oxygen, but the mechanism determining the magnitude of this shunt remains unclear.
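
    As a worked illustration of the Fick arithmetic discussed above (illustrative numbers only, not taken from the cited model), the snippet below shows how any effect that narrows the measured arteriovenous oxygen content difference, such as a diffusive shunt, inflates the Fick estimate of cardiac output:

        # Worked Fick-principle arithmetic with illustrative numbers (not from the cited model).
        vo2 = 250.0                 # pulmonary O2 uptake, ml/min
        cao2, cvo2 = 0.200, 0.158   # arterial and mixed-venous O2 content, ml O2 per ml blood
        q = vo2 / (cao2 - cvo2)     # Fick estimate of cardiac output -> ~5950 ml/min
        # If the measured a-v difference is narrowed by 0.010 ml O2 per ml blood,
        # the same VO2 implies a roughly 30% overestimate of cardiac output:
        q_biased = vo2 / (cao2 - cvo2 - 0.010)   # -> ~7800 ml/min
        print(round(q), round(q_biased))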

  6. Thomas Kuhn's 'Structure of Scientific Revolutions' applied to exercise science paradigm shifts: example including the Central Governor Model.

    PubMed

    Pires, Flávio de Oliveira

    2013-07-01

    According to Thomas Kuhn, the scientific progress of any discipline can be distinguished by a pre-paradigm phase, a normal science phase and a revolution phase. Science advances when a scientific revolution takes place after a silent period of normal science and the scientific community moves ahead to a paradigm shift. I suggest there has been a recent change of course in the direction of exercise science. According to the 'current paradigm', exercise would probably be limited by alterations in either central command or peripheral skeletal muscles, and fatigue would develop in a task-dependent manner. Instead, the central governor model (CGM) has proposed that all forms of exercise are centrally regulated: the central nervous system would calculate the metabolic cost required to complete a task in order to avoid catastrophic body failure. Some have criticized the CGM and supported the traditional interpretation, but recently the scientific community appears to have begun an intellectual trajectory toward accepting this theory. First, the increased number of citations of articles that have supported the CGM could indicate that the community has changed its focus. Second, relevant journals have devoted special issues to promoting the debate on subjects challenged by the CGM. Finally, scientists from different fields have recognized mechanisms included in the CGM when seeking to understand the limits of exercise. Given the importance of the scientific community in demarcating a Kuhnian paradigm shift, I suggest that these three aspects could indicate an increased acceptance of a centrally regulated effort model for understanding the limits of exercise.

  7. 42 CFR 422.310 - Risk adjustment data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Section 422.310 Risk adjustment data. (a) Definition of risk adjustment data. Risk adjustment data are all data that are used in the development and application of a risk adjustment payment model. (b)...

  8. Groundwater Flow Model Including Deeper Part On The Basis Of Field Data - Especially Determination Of Boundary Conditions And Hydraulic Parameters-

    NASA Astrophysics Data System (ADS)

    Machida, I.; Itadera, K.

    2005-12-01

    The final purpose of our study is to clarify quantitatively the groundwater flow, including its deeper part (500-1000 m depth), in a basin within a caldera. Computer simulation is one of the best methods to achieve this purpose. In such a study, however, it is generally difficult to determine the boundary conditions and the hydraulic properties of the geology at depth. For this reason, we selected the Gora basin as the study area, because many hydraulic data have been stored for more than 30 years in this basin. In addition, because volcanic thermal water is formed mainly by mixing of groundwater with a thermal component, the study of deeper groundwater flow can contribute to the protection of thermal groundwater, which is regarded as a limited resource. The Gora basin, in the Hakone area, is one of the most famous spas (resorts with thermal groundwater or hot springs) in Japan. The basin covers approximately 10 square kilometers and has more than 200 deep wells. In our study, a dataset of hydraulic heads was first created from the stored data to construct a conceptual model of the groundwater flow. The potential distribution showed that downward groundwater flow is dominant, and the geomorphology can be regarded as a hydraulic boundary even in the deeper part; that is, the ridge can be treated as a no-flow boundary in the simulation model. Next, for a quantitative understanding of the groundwater flow, we need to obtain not only the boundary conditions but also the hydraulic properties of the geology, for example the hydraulic conductivity, K, as one of the important parameters. Generally, such a parameter was not measured in past surveys, so we calculated the hydraulic conductivity from the data of thermal logging tests, which are similar to slug tests. The analysis showed a close relationship between K and well depth. This result implies that the K value depends on the overburden pressure of the geology. That is
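
    A minimal sketch of the K-versus-depth analysis mentioned above, assuming a simple log-linear relationship; the depth and conductivity values are hypothetical stand-ins, not the Gora basin data:

        import numpy as np

        # Hypothetical (depth, K) pairs standing in for values derived from the logging tests.
        depth_m = np.array([100.0, 200.0, 300.0, 500.0, 800.0])
        K_m_per_s = np.array([1e-5, 4e-6, 1e-6, 3e-7, 5e-8])
        slope, intercept = np.polyfit(depth_m, np.log10(K_m_per_s), 1)
        print(f"log10(K) ~= {intercept:.2f} {slope:+.4f} * depth")  # negative slope: K decreases with depth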

  9. Bayesian Geostatistical Model-Based Estimates of Soil-Transmitted Helminth Infection in Nigeria, Including Annual Deworming Requirements

    PubMed Central

    Oluwole, Akinola S.; Ekpo, Uwem F.; Karagiannis-Voules, Dimitrios-Alexios; Abe, Eniola M.; Olamiju, Francisca O.; Isiyaku, Sunday; Okoronkwo, Chukwu; Saka, Yisa; Nebe, Obiageli J.; Braide, Eka I.; Mafiana, Chiedu F.; Utzinger, Jürg; Vounatsou, Penelope

    2015-01-01

    Background The acceleration of the control of soil-transmitted helminth (STH) infections in Nigeria, emphasizing preventive chemotherapy, has become imperative in light of the global fight against neglected tropical diseases. Predictive risk maps are an important tool to guide and support control activities. Methodology STH infection prevalence data were obtained from surveys carried out in 2011 using standard protocols. Data were geo-referenced and collated in a nationwide, geographic information system database. Bayesian geostatistical models with remotely sensed environmental covariates and variable selection procedures were utilized to predict the spatial distribution of STH infections in Nigeria. Principal Findings We found that hookworm, Ascaris lumbricoides, and Trichuris trichiura infections are endemic in 482 (86.8%), 305 (55.0%), and 55 (9.9%) locations, respectively. Hookworm and A. lumbricoides infections co-exist in 16 states, while the three species are co-endemic in 12 states. Overall, STHs are endemic in 20 of the 36 states of Nigeria, including the Federal Capital Territory of Abuja. The observed prevalence at endemic locations ranged from 1.7% to 51.7% for hookworm, from 1.6% to 77.8% for A. lumbricoides, and from 1.0% to 25.5% for T. trichiura. Model-based predictions ranged from 0.7% to 51.0% for hookworm, from 0.1% to 82.6% for A. lumbricoides, and from 0.0% to 18.5% for T. trichiura. Our models suggest that day land surface temperature and dense vegetation are important predictors of the spatial distribution of STH infection in Nigeria. In 2011, a total of 5.7 million (13.8%) school-aged children were predicted to be infected with STHs in Nigeria. Mass treatment requirements at the local government area level for annual or bi-annual treatment of the school-aged population in Nigeria in 2011, based on World Health Organization prevalence thresholds, were estimated at 10.2 million tablets. Conclusions/Significance The predictive risk maps and estimated
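
    The tablet estimate above follows from applying prevalence thresholds to the predicted school-aged population. The sketch below assumes the commonly cited WHO preventive-chemotherapy rules (mass treatment twice yearly at or above 50% prevalence, once yearly between 20% and 50%, none below 20%) and one tablet per child per round; these assumptions and the example numbers are mine, not taken from the paper.

        def annual_tablets(children, prevalence,
                           high=0.50, moderate=0.20, tablets_per_round=1):
            """Annual tablet requirement for one implementation unit under assumed WHO-style thresholds."""
            if prevalence >= high:
                rounds = 2          # biannual mass treatment
            elif prevalence >= moderate:
                rounds = 1          # annual mass treatment
            else:
                rounds = 0          # below the mass-treatment threshold
            return children * rounds * tablets_per_round

        # Example: 120,000 school-aged children at 35% predicted prevalence -> 120,000 tablets per year
        print(annual_tablets(120_000, 0.35))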

  10. Differences among skeletal muscle mass indices derived from height-, weight-, and body mass index-adjusted models in assessing sarcopenia

    PubMed Central

    Kim, Kyoung Min; Jang, Hak Chul; Lim, Soo

    2016-01-01

    Aging processes are inevitably accompanied by structural and functional changes in vital organs. Skeletal muscle, which accounts for 40% of total body weight, deteriorates quantitatively and qualitatively with aging. Skeletal muscle is known to play diverse crucial physical and metabolic roles in humans. Sarcopenia is a condition characterized by significant loss of muscle mass and strength. It is related to subsequent frailty and instability in the elderly population. Because muscle tissue is involved in multiple functions, sarcopenia is closely related to various adverse health outcomes. Along with increasing recognition of the clinical importance of sarcopenia, several international study groups have recently released their consensus on the definition and diagnosis of sarcopenia. In practical terms, various skeletal muscle mass indices have been suggested for assessing sarcopenia: appendicular skeletal muscle mass adjusted for height squared, weight, or body mass index. A different prevalence and different clinical implications of sarcopenia are highlighted by each definition. The discordances among these indices have emerged as an issue in defining sarcopenia, and a unifying definition for sarcopenia has not yet been attained. This review aims to compare these three operational definitions and to introduce an optimal skeletal muscle mass index that reflects the clinical implications of sarcopenia from a metabolic perspective. PMID:27334763
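
    A minimal helper showing the three adjustments named in the abstract (appendicular skeletal muscle mass divided by height squared, by weight, and by BMI); the example values are hypothetical and no diagnostic cutoffs are implied.

        def sarcopenia_indices(asm_kg, height_m, weight_kg):
            """Appendicular skeletal muscle mass (ASM) adjusted the three common ways."""
            bmi = weight_kg / height_m ** 2
            return {
                "ASM/height^2 (kg/m^2)": asm_kg / height_m ** 2,
                "ASM/weight (%)": 100.0 * asm_kg / weight_kg,
                "ASM/BMI": asm_kg / bmi,
            }

        print(sarcopenia_indices(asm_kg=20.0, height_m=1.70, weight_kg=75.0))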

  11. Receptor modelling of fine particles in southern England using CMB including comparison with AMS-PMF factors

    NASA Astrophysics Data System (ADS)

    Yin, J.; Cumberland, S. A.; Harrison, R. M.; Allan, J.; Young, D. E.; Williams, P. I.; Coe, H.

    2015-02-01

    PM2.5 was collected during a winter campaign at two southern England sites, urban background North Kensington (NK) and rural Harwell (HAR), in January-February 2012. Multiple organic and inorganic source tracers were analysed and used in a Chemical Mass Balance (CMB) model, which apportioned seven separate primary sources that explained on average 53% (NK) and 56% (HAR) of the organic carbon (OC), including traffic, woodsmoke, food cooking, coal combustion, vegetative detritus, natural gas and dust/soil. With the addition of source tracers for secondary biogenic aerosol at the NK site, 79% of organic carbon was accounted for. Secondary biogenic sources were represented by oxidation products of α-pinene and isoprene, but only the former made a substantial contribution to OC. Particle source contribution estimates for PM2.5 mass were obtained by converting the OC estimates and combining them with the inorganic components ammonium nitrate, ammonium sulfate and sea salt. Good mass closure was achieved with 81% (92% with the addition of the secondary biogenic source) and 83% of the PM2.5 mass explained at NK and HAR respectively, with the remainder being secondary organic matter. While the most important sources of OC are vehicle exhaust (21 and 16%) and woodsmoke (15 and 28%) at NK and HAR respectively, food cooking emissions are also significant, particularly at the urban NK site (11% of OC), in addition to the secondary biogenic source (measured only at NK), which represented about 26%. In comparison, the major source components for PM2.5 at NK and HAR are inorganic ammonium salts (51 and 56%), vehicle exhaust emissions (8 and 6%), secondary biogenic (10% measured at NK only), woodsmoke (4 and 7%) and sea salt (7 and 8%), whereas food cooking (4 and 1%) showed relatively smaller contributions to PM2.5. Results from the CMB model were compared with source contribution estimates derived from the AMS-PMF method. The overall mass of organic matter accounted for is rather
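
    At its core, CMB apportionment solves a linear mass-balance system in which measured species concentrations are modeled as source profiles times source contributions. The sketch below is a bare-bones ordinary least-squares version with made-up profiles and concentrations; operational CMB additionally uses measured source profiles, effective-variance weighting, and many more fitting species.

        import numpy as np

        # Hypothetical source profiles (rows: marker species, columns: sources) and ambient data.
        F = np.array([[0.30, 0.01],       # traffic-dominated tracer
                      [0.02, 0.40],       # woodsmoke-dominated tracer
                      [0.10, 0.05]])      # OC emitted per unit source contribution
        c = np.array([0.45, 0.85, 0.35])  # ambient concentrations of the same species
        s, *_ = np.linalg.lstsq(F, c, rcond=None)   # source contributions (same units as c per unit profile)
        print(dict(zip(["traffic", "woodsmoke"], s.round(2))))
        print("fraction of measured OC reconstructed:", round(float(F[2] @ s / c[2]), 2))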

  12. Multivariate Models of Parent-Late Adolescent Gender Dyads: The Importance of Parenting Processes in Predicting Adjustment

    ERIC Educational Resources Information Center

    McKinney, Cliff; Renk, Kimberly

    2008-01-01

    Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…

  13. Modeling the Human Kinetic Adjustment Factor for Inhaled Volatile Organic Chemicals: Whole Population Approach versus Distinct Subpopulation Approach

    PubMed Central

    Valcke, M.; Nong, A.; Krishnan, K.

    2012-01-01

    The objective of this study was to evaluate the impact of whole- and sub-population-related variabilities on the determination of the human kinetic adjustment factor (HKAF) used in risk assessment of inhaled volatile organic chemicals (VOCs). Monte Carlo simulations were applied to a steady-state algorithm to generate population distributions for blood concentrations (CAss) and rates of metabolism (RAMs) for inhalation exposures to benzene (BZ) and 1,4-dioxane (1,4-D). The simulated population consisted of various proportions of adults, elderly, children, neonates and pregnant women as per the Canadian demography. Subgroup-specific input parameters were obtained from the literature and P3M software. Under the “whole population” approach, the HKAF was computed as the ratio of the entire population's upper percentile value (99th, 95th) of dose metrics to the median value in either the entire population or the adult population. Under the “distinct subpopulation” approach, the upper percentile values in each subpopulation were considered, and the greatest resulting HKAF was retained. CAss-based HKAFs that considered the Canadian demography varied between 1.2 (BZ) and 2.8 (1,4-D). The “distinct subpopulation” CAss-based HKAF varied between 1.6 (BZ) and 8.5 (1,4-D). RAM-based HKAFs always remained below 1.6. Overall, this study evaluated for the first time the impact of underlying assumptions with respect to the interindividual variability considered (whole population or each subpopulation taken separately) when determining the HKAF. PMID:22523487
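
    The HKAF bookkeeping described above reduces to percentile ratios over simulated dose-metric distributions. The sketch below uses arbitrary lognormal stand-ins for two subgroups (not the study's PBPK-derived distributions) to contrast the whole-population and distinct-subpopulation definitions.

        import numpy as np

        rng = np.random.default_rng(2)
        # Hypothetical distributions of a steady-state dose metric (e.g., CAss) for two subgroups.
        adults   = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)
        neonates = rng.lognormal(mean=0.4, sigma=0.4, size=100_000)
        whole_pop = np.concatenate([adults, neonates])   # demographic weighting omitted for brevity

        hkaf_whole    = np.percentile(whole_pop, 95) / np.median(adults)
        hkaf_subgroup = max(np.percentile(g, 95) / np.median(adults) for g in (adults, neonates))
        print(round(hkaf_whole, 2), round(hkaf_subgroup, 2))   # subgroup-based HKAF is never smaller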

  14. Regular smokeless tobacco use is not a reliable predictor of smoking onset when psychosocial predictors are included in the model.

    PubMed

    O'Connor, Richard J; Flaherty, Brian P; Quinio Edwards, Beth; Kozlowski, Lynn T

    2003-08-01

    Tomar analyzed the CDC's Teenage Attitudes and Practices Survey (TAPS) and reported that smokeless tobacco may act as a starter product for, or gateway to, cigarettes. Regular smokeless tobacco users at baseline were said to be 3.45 times more likely than never users of smokeless tobacco to become cigarette smokers after 4 years (95% CI=1.84-6.47). However, this analysis did not take into account well-known psychosocial predictors of smoking initiation. We reanalyzed TAPS to assess whether including psychosocial predictors of smoking affected the smokeless tobacco gateway effect. Experimenting with smoking, OR=2.09 (95% CI=1.51-2.90); below average school performance, OR=9.32 (95% CI=4.18-20.77); household members smoking, OR=1.49 (95% CI=1.13-1.95); frequent depressive symptoms, OR=2.19 (95% CI=1.25-3.84); fighting, OR=1.48 (95% CI=1.08-2.03); and motorcycle riding, OR=1.42 (95% CI=1.06-1.91) reduced the effects of both regular smokeless tobacco use, OR=1.68 (95% CI=0.83-3.41), and never regular smokeless tobacco use, OR=1.41 (95% CI=0.96-2.05), to statistical unreliability. Analyzing results from a sample of true never smokers (never a single puff) showed a similar pattern of results. Our results indicate that complex multivariate models are needed to evaluate recruitment to smoking and single factors that are important in that process. Tomar's analysis should not be used as reliable evidence that smokeless tobacco may be a starter product for cigarettes.
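
    The study above fit multivariable models; as a simpler reminder of where an odds ratio and its 95% confidence interval come from, the snippet below computes a crude OR with a Woolf (log-OR) interval from a hypothetical 2x2 table (counts are invented, not TAPS data).

        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """OR and Woolf 95% CI from a 2x2 table: a, b = outcome yes/no in exposed; c, d in unexposed."""
            or_ = (a * d) / (b * c)
            se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lo = math.exp(math.log(or_) - z * se)
            hi = math.exp(math.log(or_) + z * se)
            return or_, lo, hi

        # Hypothetical counts only:
        print(odds_ratio_ci(a=30, b=70, c=40, d=260))   # OR ~2.8 with a CI of roughly 1.6-4.8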

  15. Probing the structural and dynamical properties of liquid water with models including non-local electron correlation.

    PubMed

    Del Ben, Mauro; Hutter, Jürg; VandeVondele, Joost

    2015-08-01

    Water is a ubiquitous liquid that displays a wide range of anomalous properties and has a delicate structure that challenges experiment and simulation alike. The various intermolecular interactions that play an important role, such as repulsion, polarization, hydrogen bonding, and van der Waals interactions, are often difficult to reproduce faithfully in atomistic models. Here, electronic structure theories including all these interactions at equal footing, which requires the inclusion of non-local electron correlation, are used to describe structure and dynamics of bulk liquid water. Isobaric-isothermal (NpT) ensemble simulations based on the Random Phase Approximation (RPA) yield excellent density (0.994 g/ml) and fair radial distribution functions, while various other density functional approximations produce scattered results (0.8-1.2 g/ml). Molecular dynamics simulation in the microcanonical (NVE) ensemble based on Møller-Plesset perturbation theory (MP2) yields dynamical properties in the condensed phase, namely, the infrared spectrum and diffusion constant. At the MP2 and RPA levels of theory, ice is correctly predicted to float on water, resolving one of the anomalies as resulting from a delicate balance between van der Waals and hydrogen bonding interactions. For several properties, obtaining quantitative agreement with experiment requires correction for nuclear quantum effects (NQEs), highlighting their importance, for structure, dynamics, and electronic properties. A computed NQE shift of 0.6 eV for the band gap and absorption spectrum illustrates the latter. Giving access to both structure and dynamics of condensed phase systems, non-local electron correlation will increasingly be used to study systems where weak interactions are of paramount importance. PMID:26254660

  16. Probing the structural and dynamical properties of liquid water with models including non-local electron correlation.

    PubMed

    Del Ben, Mauro; Hutter, Jürg; VandeVondele, Joost

    2015-08-01

    Water is a ubiquitous liquid that displays a wide range of anomalous properties and has a delicate structure that challenges experiment and simulation alike. The various intermolecular interactions that play an important role, such as repulsion, polarization, hydrogen bonding, and van der Waals interactions, are often difficult to reproduce faithfully in atomistic models. Here, electronic structure theories including all these interactions at equal footing, which requires the inclusion of non-local electron correlation, are used to describe structure and dynamics of bulk liquid water. Isobaric-isothermal (NpT) ensemble simulations based on the Random Phase Approximation (RPA) yield excellent density (0.994 g/ml) and fair radial distribution functions, while various other density functional approximations produce scattered results (0.8-1.2 g/ml). Molecular dynamics simulation in the microcanonical (NVE) ensemble based on Møller-Plesset perturbation theory (MP2) yields dynamical properties in the condensed phase, namely, the infrared spectrum and diffusion constant. At the MP2 and RPA levels of theory, ice is correctly predicted to float on water, resolving one of the anomalies as resulting from a delicate balance between van der Waals and hydrogen bonding interactions. For several properties, obtaining quantitative agreement with experiment requires correction for nuclear quantum effects (NQEs), highlighting their importance, for structure, dynamics, and electronic properties. A computed NQE shift of 0.6 eV for the band gap and absorption spectrum illustrates the latter. Giving access to both structure and dynamics of condensed phase systems, non-local electron correlation will increasingly be used to study systems where weak interactions are of paramount importance.

  17. Probing the structural and dynamical properties of liquid water with models including non-local electron correlation

    SciTech Connect

    Del Ben, Mauro; Hutter, Jürg; VandeVondele, Joost

    2015-08-07

    Water is a ubiquitous liquid that displays a wide range of anomalous properties and has a delicate structure that challenges experiment and simulation alike. The various intermolecular interactions that play an important role, such as repulsion, polarization, hydrogen bonding, and van der Waals interactions, are often difficult to reproduce faithfully in atomistic models. Here, electronic structure theories including all these interactions at equal footing, which requires the inclusion of non-local electron correlation, are used to describe structure and dynamics of bulk liquid water. Isobaric-isothermal (NpT) ensemble simulations based on the Random Phase Approximation (RPA) yield excellent density (0.994 g/ml) and fair radial distribution functions, while various other density functional approximations produce scattered results (0.8-1.2 g/ml). Molecular dynamics simulation in the microcanonical (NVE) ensemble based on Møller-Plesset perturbation theory (MP2) yields dynamical properties in the condensed phase, namely, the infrared spectrum and diffusion constant. At the MP2 and RPA levels of theory, ice is correctly predicted to float on water, resolving one of the anomalies as resulting from a delicate balance between van der Waals and hydrogen bonding interactions. For several properties, obtaining quantitative agreement with experiment requires correction for nuclear quantum effects (NQEs), highlighting their importance, for structure, dynamics, and electronic properties. A computed NQE shift of 0.6 eV for the band gap and absorption spectrum illustrates the latter. Giving access to both structure and dynamics of condensed phase systems, non-local electron correlation will increasingly be used to study systems where weak interactions are of paramount importance.

  18. ADJUSTABLE DOUBLE PULSE GENERATOR

    DOEpatents

    Gratian, J.W.; Gratian, A.C.

    1961-08-01

    A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross-coupled multivibrator having adjustable time constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)

  19. Subsea adjustable choke valves

    SciTech Connect

    Cyvas, M.K.

    1989-08-01

    With emphasis on deepwater wells and marginal offshore fields growing, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remote-operated-vehicle (ROV) interfaces. These five facets are overviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. Major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.

  20. Assimilation of surface data in a one-dimensional physical-biogeochemical model of the surface ocean: 2. Adjusting a simple trophic model to chlorophyll, temperature, nitrate, and pCO{sub 2} data

    SciTech Connect

    Prunet, P.; Minster, J.F.; Echevin, V.

    1996-03-01

    This paper builds on previous work that produced a constrained physical-biogeochemical model of the carbon cycle in the surface ocean. Three issues are addressed: (1) the results of chlorophyll assimilation using a simpler trophic model, (2) adjustment of parameters using the simpler model and data other than surface chlorophyll concentrations, and (3) consistency of the main carbon fluxes derived by the simplified model with values from the more complex model. A one-dimensional vertical model coupling the physics of the ocean mixed layer and a description of biogeochemical processes with a simple trophic model was used to address these issues. Chlorophyll concentration, nitrate concentration, and temperature were used to constrain the model. The surface chlorophyll information was shown to be sufficient to constrain primary production within the photic layer. The simultaneous assimilation of chlorophyll, nitrate, and temperature resulted in a significant improvement of the model simulation for the data used. Of the nine biological and physical parameters which resulted in significant variations of the simulated chlorophyll concentration, seven linear combinations of the model parameters were constrained. The model fit was also an improvement when compared against independent surface chlorophyll and nitrate data. This work indicates that a relatively simple biological model is sufficient to describe carbon fluxes. Assimilation of satellite or climatological data could be used to adjust the parameters of the model for three-dimensional models. It also suggests that the main carbon fluxes driving the carbon cycle within surface waters could be derived regionally from surface information. 38 refs., 16 figs., 7 tabs.
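
    The parameter-adjustment idea above amounts to minimizing a weighted misfit between model output and observations. The following toy sketch tunes a single growth-rate parameter of a stand-in exponential model by grid search over a least-squares cost; the model, parameter, and data are hypothetical and far simpler than the coupled trophic model used in the paper.

        import numpy as np

        obs_t = np.arange(0, 10.0)                    # observation times (days)
        chl_obs = 0.5 * np.exp(0.25 * obs_t)          # synthetic "observed" chlorophyll
        sigma = 0.05 * chl_obs + 0.01                 # assumed observation errors

        def model(growth_rate):                       # toy exponential-growth stand-in
            return 0.5 * np.exp(growth_rate * obs_t)

        def cost(growth_rate):                        # weighted least-squares misfit
            return float(np.sum(((model(growth_rate) - chl_obs) / sigma) ** 2))

        grid = np.linspace(0.05, 0.5, 451)
        best = grid[np.argmin([cost(g) for g in grid])]
        print(best)                                   # recovers ~0.25 per day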

  1. From skin to bulk: An adjustment technique for assimilation of satellite-derived temperature observations in numerical models of small inland water bodies

    NASA Astrophysics Data System (ADS)

    Javaheri, Amir; Babbar-Sebens, Meghna; Miller, Robert N.

    2016-06-01

    Data Assimilation (DA) has been proposed for multiple water resources studies that require rapid use of incoming observations to update and improve the accuracy of operational prediction models. The usefulness of DA approaches in assimilating water temperature observations from different types of monitoring technologies (e.g., remote sensing and in-situ sensors) into numerical models of inland water bodies (e.g., lakes and reservoirs) has, however, received limited attention. In contrast to in-situ temperature sensors, remote sensing technologies (e.g., satellites) provide the benefit of collecting measurements with better X-Y spatial coverage. However, assimilating water temperature measurements from satellites can introduce biases in the updated numerical model of water bodies because the physical region represented by these measurements does not directly correspond to the numerical model's representation of the water column. This study proposes a novel approach to address this representation challenge by coupling a skin temperature adjustment technique, based on available air and in-situ water temperature observations, with an ensemble Kalman filter based data assimilation technique. Additionally, the proposed approach used in this study for four-dimensional analysis of a reservoir provides reasonably accurate surface layer and water column temperature forecasts, in spite of the use of a fairly small ensemble. Application of the methodology on a test site - Eagle Creek Reservoir - in Central Indiana demonstrated that assimilation of remotely sensed skin temperature data using the proposed approach improved the overall root mean square difference between modeled surface layer temperatures and the adjusted remotely sensed skin temperature observations from 5.6°C to 0.51°C (i.e., 91% improvement). In addition, the overall error in the water column temperature predictions when compared with in-situ observations also decreased from 1.95°C (before assimilation
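
    A minimal sketch of the two ingredients described above: a simple bias-style adjustment of the satellite skin temperature (the paper's actual skin-to-bulk technique is more involved) followed by a stochastic ensemble Kalman filter update of a temperature state vector from that single surface observation. All numbers and dimensions are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n_ens, n_state = 20, 30
        ens = 20.0 + rng.normal(0.0, 1.0, size=(n_ens, n_state))   # forecast temperatures (deg C)
        H = np.zeros(n_state); H[0] = 1.0                          # observe the surface layer only
        r = 0.25 ** 2                                              # observation-error variance (deg C^2)

        # Illustrative skin-to-bulk style adjustment: bias-correct the satellite skin value
        # with an offset estimated from collocated in-situ and air temperature data.
        skin_obs, bias = 22.0, -0.8
        y = skin_obs + bias

        X = ens - ens.mean(axis=0)                                 # ensemble anomalies
        P_H = X.T @ (X @ H) / (n_ens - 1)                          # P_f H^T (cross-covariance vector)
        S = H @ P_H + r                                            # innovation variance
        K = P_H / S                                                # Kalman gain
        y_pert = y + rng.normal(0.0, np.sqrt(r), size=n_ens)       # perturbed observations
        analysis = ens + np.outer(y_pert - ens @ H, K)             # EnKF analysis update
        print(round(ens[:, 0].mean(), 2), round(analysis[:, 0].mean(), 2))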

  2. Adjustment of Sonar and Laser Acquisition Data for Building the 3D Reference Model of a Canal Tunnel.

    PubMed

    Moisan, Emmanuel; Charbonnier, Pierre; Foucher, Philippe; Grussenmeyer, Pierre; Guillemin, Samuel; Koehl, Mathieu

    2015-12-11

    In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology.
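
    Co-registering the laser and sonar clouds from corresponding targets is, at a minimum, a rigid-body fit. The sketch below uses the standard Kabsch/SVD solution on hypothetical target coordinates; the paper's robust algorithm additionally exploits geometrical entities and handles the lack of overlap between the two models.

        import numpy as np

        def rigid_fit(P, Q):
            """Least-squares rotation R and translation t such that R @ P_i + t ~ Q_i (Kabsch)."""
            p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
            H = (P - p_bar).T @ (Q - q_bar)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, q_bar - R @ p_bar

        # Hypothetical coordinates of partially immersed targets seen in both point clouds.
        P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.]])   # sonar frame
        true_R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])           # 90 deg about z
        Q = P @ true_R.T + np.array([10., -2., 0.5])                             # laser frame
        R, t = rigid_fit(P, Q)
        print(np.allclose(P @ R.T + t, Q))   # True: the transform is recovered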

  3. Adjustment of Sonar and Laser Acquisition Data for Building the 3D Reference Model of a Canal Tunnel.

    PubMed

    Moisan, Emmanuel; Charbonnier, Pierre; Foucher, Philippe; Grussenmeyer, Pierre; Guillemin, Samuel; Koehl, Mathieu

    2015-01-01

    In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology. PMID:26690444

  4. Adjustment of Sonar and Laser Acquisition Data for Building the 3D Reference Model of a Canal Tunnel †

    PubMed Central

    Moisan, Emmanuel; Charbonnier, Pierre; Foucher, Philippe; Grussenmeyer, Pierre; Guillemin, Samuel; Koehl, Mathieu

    2015-01-01

    In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology. PMID:26690444

  5. Measuring demand for flat water recreation using a two-stage/disequilibrium travel cost model with adjustment for overdispersion and self-selection

    NASA Astrophysics Data System (ADS)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2003-04-01

    An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.
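
    For context on the endogenous-stratification adjustment mentioned above, the Poisson special case has a well-known closed form (often credited to Shaw, 1988): sampling visitors on site in proportion to their trip counts, with zero-trip individuals never observed, changes the sampling probability to

        P(y_i \mid \text{on-site sample}) \;=\; \frac{e^{-\lambda_i}\,\lambda_i^{\,y_i-1}}{(y_i-1)!}, \qquad y_i = 1, 2, \dots

    so the model can be estimated as a standard Poisson regression on y_i - 1. The paper's truncated negative binomial version additionally accounts for overdispersion, which this simplified Poisson form cannot.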

  6. A glacial isostatic adjustment model for the central and northern Laurentide Ice Sheet based on relative sea level and GPS measurements

    NASA Astrophysics Data System (ADS)

    Simon, K. M.; James, T. S.; Henton, J. A.; Dyke, A. S.

    2016-06-01

    The thickness and equivalent global sea level contribution of an improved model of the central and northern Laurentide Ice Sheet is constrained by 24 relative sea level histories and 18 present-day GPS-measured vertical land motion rates. The final model, termed Laur16, is derived from the ICE-5G model by holding the timing history constant and iteratively adjusting the thickness history, in four regions of northern Canada. In the final model, the last glacial maximum (LGM) thickness of the Laurentide Ice Sheet west of Hudson Bay was ˜3.4-3.6 km. Conversely, east of Hudson Bay, peak ice thicknesses reached ˜4 km. The ice model thicknesses inferred for these two regions represent, respectively, a ˜30 per cent decrease and an average ˜20-25 per cent increase to the load thickness relative to the ICE-5G reconstruction, which is generally consistent with other recent studies that have focussed on Laurentide Ice Sheet history. The final model also features peak ice thicknesses of 1.2-1.3 km in the Baffin Island region, a modest reduction relative to ICE-5G and unchanged thicknesses for a region in the central Canadian Arctic Archipelago west of Baffin Island. Vertical land motion predictions of the final model fit observed crustal uplift rates well, after an adjustment is made for the elastic crustal response to present-day ice mass changes of regional ice cover. The new Laur16 model provides more than a factor of two improvement of the fit to the RSL data (χ2 measure of misfit) and a factor of nine improvement to the fit of the GPS data (mean squared error measure of fit), compared to the ICE-5G starting model. Laur16 also fits the regional RSL data better by a factor of two and gives a slightly better fit to GPS uplift rates than the recent ICE-6G model. The volume history of the Laur16 reconstruction corresponds to an up to 8 m reduction in global sea level equivalent compared to ICE-5G at LGM.
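
    The chi-square measure referred to above is a variance-weighted sum of squared residuals between observed and predicted relative sea level; the ratio of two such misfits gives the quoted "factor of improvement". A tiny illustration with invented numbers (not the Laur16 or ICE-5G values):

        import numpy as np

        def chi2(obs, pred, sigma):
            """Variance-weighted misfit between observations and model predictions."""
            obs, pred, sigma = map(np.asarray, (obs, pred, sigma))
            return float(np.sum(((obs - pred) / sigma) ** 2))

        # Hypothetical relative sea-level observations (m), uncertainties, and two candidate predictions.
        obs, sigma = [42.0, 18.0, 5.0], [3.0, 2.0, 1.0]
        improvement = chi2(obs, [50.0, 22.0, 7.0], sigma) / chi2(obs, [44.0, 19.0, 5.5], sigma)
        print(round(improvement, 1))   # factor by which the second model fits better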

  7. Modeling, Simulation, and Control of a Solar Electric Propulsion Vehicle in Near-Earth Vicinity Including Solar Array Degradation

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin (Inventor); Hojnicki, Jeffery (Inventor); Manzella, David (Inventor)

    2016-01-01

    Modeling and control software that integrates the complexities of solar array models, a s