Kleinman, Lawrence C; Norton, Edward C
2009-01-01
Objective To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common. Study Design Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR. Data Collection Data sets produced using Monte Carlo simulations. Principal Findings Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases. Conclusions Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large. PMID:18793213
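The core computation behind regression risk analysis can be sketched as marginal standardization: average the fitted model's predicted risks with exposure set to 1 and to 0 for every subject, then take the ratio and difference. The coefficients and confounder values below are hypothetical, and this sketch omits the paper's standard-error estimation:

```python
import math

def predict_risk(beta0, beta_x, beta_c, x, c):
    """Predicted probability from a fitted logistic model (hypothetical coefficients)."""
    z = beta0 + beta_x * x + beta_c * c
    return 1.0 / (1.0 + math.exp(-z))

def adjusted_risk_ratio(beta0, beta_x, beta_c, confounders):
    """Marginal standardization: average predicted risk with exposure set
    to 1 vs. 0 for every subject, holding observed confounder values fixed."""
    n = len(confounders)
    r1 = sum(predict_risk(beta0, beta_x, beta_c, 1, c) for c in confounders) / n
    r0 = sum(predict_risk(beta0, beta_x, beta_c, 0, c) for c in confounders) / n
    return r1 / r0, r1 - r0  # adjusted risk ratio and adjusted risk difference

# Hypothetical fitted coefficients and observed confounder values
arr, ard = adjusted_risk_ratio(-1.0, 0.8, 0.5, [0.0, 0.5, 1.0, 1.5, 2.0])
```

For a common outcome like this one, the adjusted risk ratio falls between 1 and the adjusted odds ratio (here exp(0.8)), illustrating the divergence the abstract warns about.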
Carburetion system including an adjustable throttle linkage
Du Bois, C.G.; Falig, J.D.
1986-03-25
A throttle linkage assembly is described comprising a throttle shaft rotatable about a throttle shaft axis between an idle position and a wide open throttle position, a throttle plate fixed on the throttle shaft, a driven lever pivotable about the throttle shaft axis between various angles relative to the throttle plate, and means for fixing the driven lever at a selected angle relative to the throttle plate, including an adjustment lever fixedly connected to the throttle shaft adjacent the driven lever, and means for releasably securing the driven lever to the adjustment lever.
Kautter, John; Pope, Gregory C.
2004-01-01
The authors document the development of the CMS frailty adjustment model, a Medicare payment approach that adjusts payments to a Medicare managed care organization (MCO) according to the functional impairment of its community-residing enrollees. Beginning in 2004, this approach is being applied to certain organizations, such as Program of All-Inclusive Care for the Elderly (PACE), that specialize in providing care to the community-residing frail elderly. In the future, frailty adjustment could be extended to more Medicare managed care organizations. PMID:25372243
Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed
2010-07-01
Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed. PMID:20604605
Overpaying morbidity adjusters in risk equalization models.
van Kleef, R C; van Vliet, R C J A; van de Ven, W P M M
2016-09-01
Most competitive social health insurance markets include risk equalization to compensate insurers for predictable variation in healthcare expenses. Empirical literature shows that even the most sophisticated risk equalization models-with advanced morbidity adjusters-substantially undercompensate insurers for selected groups of high-risk individuals. In the presence of premium regulation, these undercompensations confront consumers and insurers with incentives for risk selection. An important reason for the undercompensations is that not all information with predictive value regarding healthcare expenses is appropriate for use as a morbidity adjuster. To reduce incentives for selection regarding specific groups we propose overpaying morbidity adjusters that are already included in the risk equalization model. This paper illustrates the idea of overpaying by merging data on morbidity adjusters and healthcare expenses with health survey information, and derives three preconditions for meaningful application. Given these preconditions, we think overpaying may be particularly useful for pharmacy-based cost groups. PMID:26420555
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-06
... Conditions Including Health Care-Acquired Conditions; Final Rule Federal Register / Vol. 76, No. 108... Adjustment for Provider-Preventable Conditions Including Health Care-Acquired Conditions AGENCY: Centers for... section 2702 of the Patient Protection and Affordable Care Act which directs the Secretary of Health...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-17
... Medicaid Program; Payment Adjustment for Provider-Preventable Conditions Including Health Care-Acquired... amounts expended for providing medical assistance for health care-acquired conditions. It would also... Federal financial participation FY Fiscal year HAC Hospital-acquired condition HCAC Health...
Coercively Adjusted Auto Regression Model for Forecasting in Epilepsy EEG
Kim, Sun-Hee; Faloutsos, Christos; Yang, Hyung-Jeong
2013-01-01
Recently, data with complex characteristics, such as epilepsy electroencephalography (EEG) time series, have emerged. Epilepsy EEG data have special characteristics, including nonlinearity, nonnormality, and nonperiodicity. Therefore, it is important to find a suitable forecasting method that accommodates these special characteristics. In this paper, we propose a coercively adjusted autoregression (CA-AR) method that forecasts future values from a multivariable epilepsy EEG time series. We use the technique of random coefficients, which forcefully adjusts the coefficients to lie between −1 and 1. The fractal dimension is used to determine the order of the CA-AR model. We applied the CA-AR method, which reflects the special characteristics of the data, to forecast future values of epilepsy EEG data. Experimental results show that, compared to previous methods, the proposed method forecasts faster and more accurately. PMID:23710252
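A minimal sketch of the coercive-adjustment idea: fit autoregressive coefficients, then clip them into [−1, 1]. The least-squares fit below is an assumption for illustration; the paper's actual adjustment scheme and its fractal-dimension order selection are not reproduced here.

```python
import numpy as np

def fit_ca_ar(series, order):
    """Fit an AR(order) model by least squares, then coercively clip the
    coefficients to [-1, 1] (sketch of the CA-AR idea)."""
    # Each row of X holds `order` consecutive lagged values predicting y.
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.clip(coeffs, -1.0, 1.0)

def forecast_next(series, coeffs):
    """One-step-ahead forecast from the most recent `order` observations."""
    order = len(coeffs)
    return float(np.dot(series[-order:], coeffs))
```

On a decaying geometric series the fit recovers the true coefficient 0.5; on an exploding series the fitted coefficient 2.0 is clipped to 1.0, which is the coercive adjustment at work.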
Bailit, Jennifer L.; Grobman, William A.; Rice, Madeline Murguia; Spong, Catherine Y.; Wapner, Ronald J.; Varner, Michael W.; Thorp, John M.; Leveno, Kenneth J.; Caritis, Steve N.; Shubert, Phillip J.; Tita, Alan T. N.; Saade, George; Sorokin, Yoram; Rouse, Dwight J.; Blackwell, Sean C.; Tolosa, Jorge E.; Van Dorsten, J. Peter
2014-01-01
Objective Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take pre-existing patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for five obstetric outcomes and assess hospital performance across these outcomes. Study Design A cohort study of 115,502 women and their neonates born in 25 hospitals in the United States between March 2008 and February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Results Venous thromboembolism occurred too infrequently (0.03%, 95% CI 0.02%–0.04%) for meaningful assessment. Other outcomes occurred frequently enough for assessment (postpartum hemorrhage 2.29% (95% CI 2.20–2.38), peripartum infection 5.06% (95% CI 4.93–5.19), severe perineal laceration at spontaneous vaginal delivery 2.16% (95% CI 2.06–2.27), neonatal composite 2.73% (95% CI 2.63–2.84)). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (by as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Conclusions Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. PMID:23891630
Dynamic stall simulation including turbulence modeling
Allet, A.; Halle, S.; Paraschivoiu, I.
1995-09-01
The objective of this study is to investigate the two-dimensional unsteady flow around an airfoil undergoing a Darrieus motion in dynamic stall conditions. For this purpose, a numerical solver based on the solution of the Reynolds-averaged Navier-Stokes equations expressed in a streamfunction-vorticity formulation in a non-inertial frame of reference was developed. The governing equations are solved by the streamline upwind Petrov-Galerkin finite element method (FEM). Temporal discretization is achieved by second-order-accurate finite differences. The resulting global matrix system is linearized by the Newton method and solved by the generalized minimum residual method (GMRES) with an incomplete triangular factorization preconditioning (ILU). Turbulence effects are introduced in the solver by an eddy viscosity model. The investigation centers on an evaluation of the possibilities of several turbulence models, including the algebraic Cebeci-Smith model (CSM) and the nonequilibrium Johnson-King model (JKM). In an effort to predict dynamic stall features on rotating airfoils, first the authors present some testing results concerning the performance of both turbulence models for the flat plate case. Then, computed flow structure together with aerodynamic coefficients for a NACA 0015 airfoil in Darrieus motion under stall conditions are presented.
An interface model for dosage adjustment connects hematotoxicity to pharmacokinetics.
Meille, C; Iliadis, A; Barbolosi, D; Frances, N; Freyer, G
2008-12-01
When modeling is required to describe pharmacokinetics and pharmacodynamics simultaneously, it is difficult to link time-concentration profiles and drug effects. When patients are under chemotherapy, despite the huge amount of blood monitoring numerations, there is a lack of exposure variables to describe hematotoxicity linked with the circulating drug blood levels. We developed an interface model that transforms circulating pharmacokinetic concentrations to adequate exposures, destined to be inputs of the pharmacodynamic process. The model is materialized by a nonlinear differential equation involving three parameters. The relevance of the interface model for dosage adjustment is illustrated by numerous simulations. In particular, the interface model is incorporated into a complex system including pharmacokinetics and neutropenia induced by docetaxel and by cisplatin. Emphasis is placed on the sensitivity of neutropenia with respect to the variations of the drug amount. This complex system including pharmacokinetic, interface, and pharmacodynamic hematotoxicity models is an interesting tool for analysis of hematotoxicity induced by anticancer agents. The model could be a new basis for further improvements aimed at incorporating new experimental features. PMID:19107581
SEEPAGE MODEL FOR PA INCLUDING DRIFT COLLAPSE
C. Tsang
2004-09-22
The purpose of this report is to document the predictions and analyses performed using the seepage model for performance assessment (SMPA) for both the Topopah Spring middle nonlithophysal (Tptpmn) and lower lithophysal (Tptpll) lithostratigraphic units at Yucca Mountain, Nevada. Look-up tables of seepage flow rates into a drift (and their uncertainty) are generated by performing numerical simulations with the seepage model for many combinations of the three most important seepage-relevant parameters: the fracture permeability, the capillary-strength parameter 1/α, and the percolation flux. The percolation flux values chosen take into account flow focusing effects, which are evaluated based on a flow-focusing model. Moreover, simulations are conducted for multiple realizations of the underlying stochastic permeability field. Selected sensitivity studies are performed, including the effects of an alternative drift geometry representing a partially collapsed drift from an independent drift-degradation analysis (BSC 2004 [DIRS 166107]). The intended purpose of the seepage model is to provide results of drift-scale seepage rates under a series of parameters and scenarios in support of the Total System Performance Assessment for License Application (TSPA-LA). The SMPA is intended for the evaluation of drift-scale seepage rates under the full range of values for the three parameters found to be key (fracture permeability, the van Genuchten 1/α parameter, and percolation flux) and drift-degradation shape scenarios in support of the TSPA-LA during the period of compliance for postclosure performance [Technical Work Plan for: Performance Assessment Unsaturated Zone (BSC 2002 [DIRS 160819], Section I-4-2-1)]. The flow-focusing model in the Topopah Spring welded (TSw) unit is intended to provide an estimate of flow focusing factors (FFFs) that (1) bridge the gap between the mountain-scale and drift-scale models, and (2) account for variability in local percolation flux due to
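For illustration, querying such a look-up table amounts to interpolating between tabulated parameter combinations. The one-dimensional slice below is invented (the flux and seepage numbers are not from the report, and the real tables span three parameters plus multiple permeability-field realizations), but it shows the mechanics:

```python
import bisect

# Hypothetical 1-D slice of a seepage look-up table:
# percolation flux -> mean seepage rate (values invented for illustration)
FLUX = [1.0, 5.0, 10.0, 50.0]
SEEPAGE = [0.0, 2.0, 10.0, 90.0]

def lookup_seepage(flux):
    """Piecewise-linear interpolation in the table, clamped at the ends."""
    if flux <= FLUX[0]:
        return SEEPAGE[0]
    if flux >= FLUX[-1]:
        return SEEPAGE[-1]
    j = bisect.bisect_right(FLUX, flux)  # first tabulated flux above the query
    t = (flux - FLUX[j - 1]) / (FLUX[j] - FLUX[j - 1])
    return SEEPAGE[j - 1] + t * (SEEPAGE[j] - SEEPAGE[j - 1])
```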
An Integrated Biochemistry Laboratory, Including Molecular Modeling
NASA Astrophysics Data System (ADS)
Wolfson, Adele J.; Hall, Mona L.; Branham, Thomas R.
1996-11-01
) experience with methods of protein purification; (iii) incorporation of appropriate controls into experiments; (iv) use of basic statistics in data analysis; (v) writing papers and grant proposals in accepted scientific style; (vi) peer review; (vii) oral presentation of results and proposals; and (viii) introduction to molecular modeling. Figure 1 illustrates the modular nature of the lab curriculum. Elements from each of the exercises can be separated and treated as stand-alone exercises, or combined into short or long projects. We have been able to offer the opportunity to use sophisticated molecular modeling in the final module through funding from an NSF-ILI grant. However, many of the benefits of the research proposal can be achieved with other computer programs, or even by literature survey alone. Figure 1. Design of project-based biochemistry laboratory. Modules (projects, or portions of projects) are indicated as boxes. Each of these can be treated independently, or used as part of a larger project. Solid lines indicate some suggested paths from one module to the next. The skills and knowledge required for protein purification and design are developed in three units: (i) an introduction to critical assays needed to monitor degree of purification, including an evaluation of assay parameters; (ii) partial purification by ion-exchange techniques; and (iii) preparation of a grant proposal on protein design by mutagenesis. Brief descriptions of each of these units follow, with experimental details of each project at the end of this paper. Assays for Lysozyme Activity and Protein Concentration (4 weeks) The assays mastered during the first unit are a necessary tool for determining the purity of the enzyme during the second unit on purification by ion exchange. These assays allow an introduction to the concept of specific activity (units of enzyme activity per milligram of total protein) as a measure of purity.
In this first sequence, students learn a turbidimetric assay
Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)
The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...
Unified Model for Academic Competence, Social Adjustment, and Psychopathology.
ERIC Educational Resources Information Center
Schaefer, Earl S.; And Others
A unified conceptual model is needed to integrate the extensive research on (1) social competence and adaptive behavior, (2) converging conceptualizations of social adjustment and psychopathology, and (3) emerging concepts and measures of academic competence. To develop such a model, a study was conducted in which teacher ratings were collected on…
Seepage Model for PA Including Drift Collapse
G. Li; C. Tsang
2000-12-20
The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long-term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in niches and in the cross drift to
Including eddies in global ocean models
NASA Astrophysics Data System (ADS)
Semtner, Albert J.; Chervin, Robert M.
The ocean is a turbulent fluid that is driven by winds and by surface exchanges of heat and moisture. It is as important as the atmosphere in governing climate through heat distribution, but so little is known about the ocean that it remains a “final frontier” on the face of the Earth. Many ocean currents are truly global in extent, such as the Antarctic Circumpolar Current and the “conveyor belt” that connects the North Atlantic and North Pacific oceans by flows around the southern tips of Africa and South America. It has long been a dream of some oceanographers to supplement the very limited observational knowledge by reconstructing the currents of the world ocean from the first principles of physics on a computer. However, until very recently, the prospect of doing this was thwarted by the fact that fluctuating currents known as “mesoscale eddies” could not be explicitly included in the calculation.
Using Bibliotherapy to Help Children Adjust to Changing Role Models.
ERIC Educational Resources Information Center
Pardeck, John T.; Pardeck, Jean A.
One technique for helping children adjust to changing role models is bibliotherapy--the use of children's books to facilitate identification with and exploration of sex role behavior. Confronted with change in various social systems, particularly the family, children are faced with conflicts concerning their sex role development. The process…
Catastrophe, Chaos, and Complexity Models and Psychosocial Adjustment to Disability.
ERIC Educational Resources Information Center
Parker, Randall M.; Schaller, James; Hansmann, Sandra
2003-01-01
Rehabilitation professionals may unknowingly rely on stereotypes and specious beliefs when dealing with people with disabilities, despite the formulation of theories that suggest new models of the adjustment process. Suggests that Catastrophe, Chaos, and Complexity Theories hold considerable promise in this regard. This article reviews these…
Seven challenges for metapopulation models of epidemics, including households models.
Ball, Frank; Britton, Tom; House, Thomas; Isham, Valerie; Mollison, Denis; Pellis, Lorenzo; Scalia Tomba, Gianpaolo
2015-03-01
This paper considers metapopulation models in the general sense, i.e. where the population is partitioned into sub-populations (groups, patches,...), irrespective of the biological interpretation they have, e.g. spatially segregated large sub-populations, small households or hosts themselves modelled as populations of pathogens. This framework has traditionally provided an attractive approach to incorporating more realistic contact structure into epidemic models, since it often preserves analytic tractability (in stochastic as well as deterministic models) but also captures the most salient structural inhomogeneity in contact patterns in many applied contexts. Despite the progress that has been made in both the theory and application of such metapopulation models, we present here several major challenges that remain for future work, focusing on models that, in contrast to agent-based ones, are amenable to mathematical analysis. The challenges range from clarifying the usefulness of systems of weakly-coupled large sub-populations in modelling the spread of specific diseases to developing a theory for endemic models with household structure. They include also developing inferential methods for data on the emerging phase of epidemics, extending metapopulation models to more complex forms of human social structure, developing metapopulation models to reflect spatial population structure, developing computationally efficient methods for calculating key epidemiological model quantities, and integrating within- and between-host dynamics in models. PMID:25843386
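As a generic illustration of the weakly-coupled sub-population idea (not a model taken from the paper), a deterministic two-patch SIR system can be integrated with forward Euler steps; the transmission rate, recovery rate, and coupling fraction below are hypothetical:

```python
def two_patch_sir(beta, gamma, coupling, s, i, r, dt=0.1, steps=1000):
    """Euler integration of a deterministic two-patch SIR metapopulation
    model with symmetric cross-patch transmission (generic illustration).
    s, i, r are per-patch fractions; `coupling` is the fraction of an
    individual's contacts made in the other patch."""
    for _ in range(steps):
        new_s, new_i, new_r = [], [], []
        for p in (0, 1):
            q = 1 - p
            # Force of infection mixes within-patch and cross-patch contacts.
            foi = beta * ((1 - coupling) * i[p] + coupling * i[q])
            ds = -foi * s[p]
            di = foi * s[p] - gamma * i[p]
            dr = gamma * i[p]
            new_s.append(s[p] + dt * ds)
            new_i.append(i[p] + dt * di)
            new_r.append(r[p] + dt * dr)
        s, i, r = new_s, new_i, new_r
    return s, i, r
```

Seeding patch 0 with a small infective fraction lets the epidemic leak into the initially susceptible patch 1 through the coupling term, while each patch's population fractions remain conserved.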
ERIC Educational Resources Information Center
Pakenham, Kenneth I.; Samios, Christina; Sofronoff, Kate
2005-01-01
The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between…
Milly, P.C.D.; Dunne, K.A.
2011-01-01
Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.
Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia
2014-01-01
Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387
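The additive structure of such a model's output can be illustrated as follows; the factor names and values below are invented for the sketch and are not actual HHS-HCC coefficients:

```python
# Hypothetical coefficient table: demographic cells and hierarchical
# condition categories map to additive risk factors (values invented).
FACTORS = {
    "age_45_49_M": 0.30,
    "HCC_diabetes": 0.40,
    "HCC_chf": 1.20,
}

def risk_score(enrollee_attrs):
    """Additive risk score: sum the model factors for each demographic
    cell and condition category attributed to the enrollee."""
    return sum(FACTORS[a] for a in enrollee_attrs)
```

Scores computed this way per enrollee would then feed a plan-average risk measure, which in the HHS methodology is an input to the separate risk transfer formula described in the companion paper.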
Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species
Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...
Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina
ERIC Educational Resources Information Center
Peek, Lori; Morrissey, Bridget; Marlatt, Holly
2011-01-01
The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…
ERIC Educational Resources Information Center
Tay, Louis; Drasgow, Fritz
2012-01-01
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
Dynamic hysteresis modeling including skin effect using diffusion equation model
NASA Astrophysics Data System (ADS)
Hamada, Souad; Louai, Fatima Zohra; Nait-Said, Nasreddine; Benabou, Abdelkader
2016-07-01
An improved dynamic hysteresis model is proposed for the prediction of hysteresis loop of electrical steel up to mean frequencies, taking into account the skin effect. In previous works, the analytical solution of the diffusion equation for low frequency (DELF) was coupled with the inverse static Jiles-Atherton (JA) model in order to represent the hysteresis behavior for a lamination. In the present paper, this approach is improved to ensure the reproducibility of measured hysteresis loops at mean frequency. The results of simulation are compared with the experimental ones. The selected results for frequencies 50 Hz, 100 Hz, 200 Hz and 400 Hz are presented and discussed.
A General Linear Model Approach to Adjusting the Cumulative GPA.
ERIC Educational Resources Information Center
Young, John W.
A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…
A model for heterogeneous materials including phase transformations
Addessio, F.L.; Clements, B.E.; Williams, T.O.
2005-04-15
A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.
Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar
2016-08-15
Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892025
Modeling heart rate variability including the effect of sleep stages
NASA Astrophysics Data System (ADS)
Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan
2016-02-01
We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that—in comparison with real data—the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow to model heart rate variability in sleep disorders. This possibility is briefly discussed.
Development of a charge adjustment model for cardiac catheterization.
Brennan, Andrew; Gauvreau, Kimberlee; Connor, Jean; O'Connell, Cheryl; David, Sthuthi; Almodovar, Melvin; DiNardo, James; Banka, Puja; Mayer, John E; Marshall, Audrey C; Bergersen, Lisa
2015-02-01
A methodology that would allow for comparison of charges across institutions has not been developed for catheterization in congenital heart disease. A single institution catheterization database with prospectively collected case characteristics was linked to hospital charges related and limited to an episode of care in the catheterization laboratory for fiscal years 2008-2010. Catheterization charge categories (CCC) were developed to group types of catheterization procedures using a combination of empiric data and expert consensus. A multivariable model with outcome charges was created using CCC and additional patient and procedural characteristics. In 3 fiscal years, 3,839 cases were available for analysis. Forty catheterization procedure types were categorized into 7 CCC yielding a grouper variable with an R² explanatory value of 72.6%. In the final CCC, the largest proportion of cases was in CCC 2 (34%), which included diagnostic cases without intervention. Biopsy cases were isolated in CCC 1 (12%), and percutaneous pulmonary valve placement alone made up CCC 7 (2%). The final model included CCC, number of interventions, and cardiac diagnosis (R² = 74.2%). Additionally, current financial metrics such as APR-DRG severity of illness and case mix index demonstrated a lack of correlation with CCC. We have developed a catheterization procedure type financial grouper that accounts for the diverse case population encountered in catheterization for congenital heart disease. CCC and our multivariable model could be used to understand financial characteristics of a population at a single point in time, longitudinally, and to compare populations. PMID:25113520
A coke oven model including thermal decomposition kinetics of tar
Munekane, Fuminori; Yamaguchi, Yukio; Tanioka, Seiichi
1997-12-31
A new one-dimensional coke oven model has been developed for simulating the amount and the characteristics of by-products such as tar and gas as well as coke. This model consists of both heat transfer and chemical kinetics including thermal decomposition of coal and tar. The chemical kinetics constants are obtained by estimation based on the results of experiments conducted to investigate the thermal decomposition of both coal and tar. The calculation results using the new model are in good agreement with experimental ones.
NASA Astrophysics Data System (ADS)
Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.
2015-11-01
The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
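The Green's Functions step described above reduces to a small linear least-squares problem: find the combination of sensitivity-experiment responses that best explains the model-data misfit. A minimal sketch with made-up numbers (the real constraints are pCO2 observations and flux estimates, and the columns come from perturbed model runs):

```python
import numpy as np

# Each column holds one sensitivity experiment's response sampled at the
# observation points (hypothetical values standing in for perturbed model runs).
G = np.array([[1.0, 0.5],
              [0.2, 1.1],
              [0.9, 0.3]])
d = np.array([1.5, 1.3, 1.2])   # baseline model-data misfit to explain

# Optimal linear combination of the sensitivity experiments.
eta, *_ = np.linalg.lstsq(G, d, rcond=None)

# The remaining misfit can only shrink relative to the baseline.
residual = d - G @ eta
```

The fitted weights play the role of the adjusted initial conditions and gas exchange coefficients used to re-integrate the model forward.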
A sonic boom propagation model including mean flow atmospheric effects
NASA Astrophysics Data System (ADS)
Salamone, Joe; Sparrow, Victor W.
2012-09-01
This paper presents a time domain formulation of nonlinear lossy propagation in one dimension that also includes the effects of non-collinear mean flow in the acoustic medium. The model equation utilized is an augmented Burgers equation that includes the effects of nonlinearity, geometric spreading, atmospheric stratification, and also absorption and dispersion due to thermoviscous and molecular relaxation effects. All elements of the propagation are implemented in the time domain and the effects of non-collinear mean flow are accounted for in each term of the model equation. Previous authors have presented methods limited to showing the effects of wind on ray tracing and/or using an effective speed of sound in their model equation. The present work includes the effects of mean flow for all terms included in the augmented Burgers equation with all of the calculations performed in the time domain. The capability to include the effects of mean flow in the acoustic medium allows one to make predictions more representative of real-world atmospheric conditions. Examples are presented for nonlinear propagation of N-waves and shaped sonic booms. [Work supported by Gulfstream Aerospace Corporation.]
Models of Spectral Galaxy Evolution including the effects of Dust
NASA Astrophysics Data System (ADS)
Möller, C. S.; Fritze-v. Alvensleben, U.; Fricke, K. J.
To analyse the effects of dust on the UV emission in various galaxy types, we present our evolutionary synthesis models, which include dust absorption in a chemically consistent way. The time and redshift evolution of the extinction is based on the evolution of the gas content and metallicity. Comparing our model SEDs with templates from Kennicutt's and Kinney et al.'s atlases, we show the detailed agreement with integrated spectra of galaxies and point out the importance of aperture effects. We are able to predict the UV fluxes for different galaxy types. Combined with a cosmological model, we show the differences in the evolutionary and k-corrections between models with and without dust.
Estimation of nonlinear pilot model parameters including time delay.
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.; Wells, W. R.
1972-01-01
Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.
Comprehensive modeling of electrostatically actuated MEMS beams including uncertainty quantification
NASA Astrophysics Data System (ADS)
Snow, Michael G.
MEMS switches have offered dramatic improvements in the performance of RF systems. However, difficulties with reliability have slowed the adoption of MEMS switches in RF systems. These reliability issues are partly due to the poor manufacturing tolerances endemic to MEMS manufacturing processes. These manufacturing tolerances may cause significant variations in performance characteristics. This work focuses on electrostatically actuated MEMS beam capacitive shunt switches. A non-linear dynamic model for these switches was developed. The model accounts for a variety of physical effects, including beam stretching, residual stress, non-rigid boundary conditions, initial curvature, electrostatic fringing field, finite electrodes, squeeze film damping, and distributed contact. The effects of uncertain parameters on the outputs of the model are discovered through response surface based uncertainty quantification techniques. The model accurately predicts the actuation voltages and switching times of these MEMS switches as well as the effects of uncertain parameters. The derived model is widely applicable and accurately reproduces the results of other models in the literature. Future researchers will be able to rapidly iterate designs and accurately understand the behavior of these switches.
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Hanford, Amanda D.; Shepherd, Micah R.; Campbell, Robert L.; Smith, Edward C.
2010-01-01
A computational approach for simulating the effects of rolling element and journal bearings on the vibration and sound transmission through gearboxes has been demonstrated. The approach, using ARL/Penn State's CHAMP methodology, uses Component Mode Synthesis of housing and shafting modes computed using Finite Element (FE) models to allow for rapid adjustment of bearing impedances in gearbox models. The approach has been demonstrated on NASA GRC's test gearbox with three different bearing configurations: in the first condition, traditional rolling element (ball and roller) bearings were installed, and in the second and third conditions, the traditional bearings were replaced with journal and wave bearings (wave bearings are journal bearings with a multi-lobed wave pattern on the bearing surface). A methodology for computing the stiffnesses and damping in journal and wave bearings has been presented, and demonstrated for the journal and wave bearings used in the NASA GRC test gearbox. The FE model of the gearbox, along with the rolling element bearing coupling impedances, was analyzed to compute dynamic transfer functions between forces applied to the meshing gears and accelerations on the gearbox housing, including several locations near the bearings. A Boundary Element (BE) acoustic model was used to compute the sound radiated by the gearbox. Measurements of the Gear Mesh Frequency (GMF) tones were made by NASA GRC at several operational speeds for the rolling element and journal bearing gearbox configurations. Both the measurements and the CHAMP numerical model indicate that the journal bearings reduce vibration and noise for the second harmonic of the gear meshing tones, but show no clear benefit to using journal bearings to reduce the amplitudes of the fundamental gear meshing tones. Also, the numerical model shows that the gearbox vibrations and radiated sound are similar for journal and wave bearing configurations.
A model of Barchan dunes including lateral shear stress.
Schwämmle, V; Herrmann, H J
2005-01-01
Barchan dunes are found where sand availability is low and wind direction is quite constant. The two-dimensional shear stress of the wind field and the sand movement by saltation and avalanches over a barchan dune are simulated. The model with one-dimensional shear stress is extended to include surface diffusion and lateral shear stress. The resulting final shape is compared to the results of the model with a one-dimensional shear stress and confirmed by comparison to measurements. We found agreement and improvements with respect to the model with one-dimensional shear stress. Additionally, a characteristic edge at the center of the windward side is discovered, which is also observed for big barchans. Diffusion effects reduce this effect for small dunes. PMID:15688141
Punamäki, R L; Qouta, S; el Sarraj, E
1997-08-01
The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via 2 mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced. And, the poorer they perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting. PMID:9306648
A comparison of models for supernova remnants including cosmic rays
NASA Astrophysics Data System (ADS)
Kang, Hyesung; Drury, L. O'C.
1992-11-01
A simplified model which can follow the dynamical evolution of a supernova remnant including the acceleration of cosmic rays without carrying out full numerical simulations has been proposed by Drury, Markiewicz, & Voelk in 1989. To explore the accuracy and the merits of using such a model, we have recalculated with the simplified code the evolution of the supernova remnants considered in Jones & Kang, in which more detailed and accurate numerical simulations were done using a full hydrodynamic code based on the two-fluid approximation. For the total energy transferred to cosmic rays the two codes are in good agreement, the acceleration efficiency being the same within a factor of 2 or so. The dependence of the results of the two codes on the closure parameters for the two-fluid approximation is also qualitatively similar. The agreement is somewhat degraded in those cases where the shock is smoothed out by the cosmic rays.
A Prediction Model for Chronic Kidney Disease Includes Periodontal Disease
Fisher, Monica A.; Taylor, George W.
2009-01-01
Background An estimated 75% of the seven million Americans with moderate-to-severe chronic kidney disease are undiagnosed. Improved prediction models to identify high-risk subgroups for chronic kidney disease enhance the ability of health care providers to prevent or delay serious sequelae, including kidney failure, cardiovascular disease, and premature death. Methods We identified 11,955 adults ≥18 years of age in the Third National Health and Nutrition Examination Survey. Chronic kidney disease was defined as an estimated glomerular filtration rate of 15 to 59 mL/minute/1.73 m². High-risk subgroups for chronic kidney disease were identified by estimating the individual probability using β coefficients from the model of traditional and non-traditional risk factors. To evaluate this model, we performed standard diagnostic analyses of sensitivity, specificity, positive predictive value, and negative predictive value using 5%, 10%, 15%, and 20% probability cutoff points. Results The estimated probability of chronic kidney disease ranged from virtually no probability (0%) for an individual with none of the 12 risk factors to very high probability (98%) for an older, non-Hispanic white edentulous former smoker, with diabetes ≥10 years, hypertension, macroalbuminuria, high cholesterol, low high-density lipoprotein, high C-reactive protein, lower income, and who was hospitalized in the past year. Evaluation of this model using an estimated 5% probability cutoff point resulted in 86% sensitivity, 85% specificity, 18% positive predictive value, and 99% negative predictive value. Conclusion This United States population–based study suggested the importance of considering multiple risk factors, including periodontal status, because this improves the identification of individuals at high risk for chronic kidney disease and may ultimately reduce its burden. PMID:19228085
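The two computations the abstract describes, an individual probability from logistic-model β coefficients and sensitivity/specificity at a probability cutoff, can be sketched generically as follows (the coefficients and toy data here are hypothetical, not the study's fitted values):

```python
import math

def predicted_probability(beta0, betas, x):
    """Logistic-model probability from an intercept and risk-factor coefficients."""
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))

def sens_spec(probs, truth, cutoff):
    """Sensitivity and specificity when flagging p >= cutoff as high risk."""
    tp = sum(p >= cutoff and t for p, t in zip(probs, truth))
    fn = sum(p < cutoff and t for p, t in zip(probs, truth))
    tn = sum(p < cutoff and not t for p, t in zip(probs, truth))
    fp = sum(p >= cutoff and not t for p, t in zip(probs, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Toy evaluation: predicted probabilities vs. true disease status.
probs = [0.9, 0.04, 0.6, 0.02]
truth = [True, False, True, False]
sens, spec = sens_spec(probs, truth, cutoff=0.05)
```

Lowering the cutoff, as in the study's 5% threshold, trades specificity for sensitivity; the diagnostic metrics quantify that trade-off.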
NASA Astrophysics Data System (ADS)
Stravs, L.; Brilly, M.
2009-04-01
Good and accurate long-term low flow forecasting is important in the fields of sustainable water management, water rights, water supply management, industrial use of freshwater, optimization of the reservoir operations for the production of electric energy and other water-related disciplines. Today, low flow forecasting is usually performed as an integrated part of calibrated rainfall-runoff models, but in our research we developed two types of simple empirical 7-day ahead low flow forecasting models by using the M5 machine learning method for the generation of regression and model trees. Development of the first type of models was based solely on the application of the M5 machine learning method (1-, 2-, 3-, 4-, 5-, 6-, and 7-day lead time low flow forecasting model trees were developed from using only past flow data and then combined to produce a 7-day ahead forecast curve), while the development of the other type of models included the conceptual knowledge of linear reservoir recession functions and application of the M5 machine learning method (we modelled the streamflow recession coefficient k as a function of the flow rate at which the 7-day low flow forecast is made and the decrease in the flow rate from the previous day). Both types of 7-day ahead low flow forecasting models were developed by using the same type and amount of data and were built for the Podhom gauging station on the Radovna River and the Medvode gauging station on the Sora River (both are Slovenian tributaries of the Sava River, which itself is a Danube River tributary). The results were compared and tested both visually and numerically.
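The conceptual core of the second model type, the linear reservoir recession, gives a closed-form lead-time forecast once the recession coefficient k is known. A minimal sketch assuming a constant k (in the study, k itself is modelled with M5 trees from the current flow and the previous day's decrease):

```python
import math

def low_flow_forecast(q_today, k, lead_days=7):
    """Linear reservoir recession: Q(t) = Q0 * exp(-k * t)."""
    return q_today * math.exp(-k * lead_days)

# Hypothetical example: current flow 2.0 m^3/s, recession coefficient 0.05 per day.
q7 = low_flow_forecast(2.0, 0.05)
```

Larger k means faster recession and a lower 7-day-ahead forecast; k = 0 reproduces today's flow unchanged.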
Kinetic models of gene expression including non-coding RNAs
NASA Astrophysics Data System (ADS)
Zhdanov, Vladimir P.
2011-03-01
In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data
Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.
2009-01-01
Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053
Progress Towards an LES Wall Model Including Unresolved Roughness
NASA Astrophysics Data System (ADS)
Craft, Kyle; Redman, Andrew; Aikens, Kurt
2015-11-01
Wall models used in large eddy simulations (LES) are often based on theories for hydraulically smooth walls. While this is reasonable for many applications, there are also many where the impact of surface roughness is important. A previously developed wall model has been used primarily for jet engine aeroacoustics. However, jet simulations have not accurately captured thick initial shear layers found in some experimental data. This may partly be due to nozzle wall roughness used in the experiments to promote turbulent boundary layers. As a result, the wall model is extended to include the effects of unresolved wall roughness through appropriate alterations to the log-law. The methodology is tested for incompressible flat plate boundary layers with different surface roughness. Correct trends are noted for the impact of surface roughness on the velocity profile. However, velocity deficit profiles and the Reynolds stresses do not collapse as well as expected. Possible reasons for the discrepancies as well as future work will be presented. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
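The log-law alteration the abstract describes is commonly expressed as a downward shift of the smooth-wall profile by a roughness function ΔU⁺. A sketch with the standard constants, offered as a generic form; the specific alteration used in the paper's wall model may differ:

```python
import math

KAPPA, B = 0.41, 5.0  # standard log-law constants

def u_plus(y_plus, delta_u_plus=0.0):
    """Log-law velocity u+ = ln(y+)/kappa + B, shifted down by a roughness
    function Delta U+ to account for unresolved wall roughness."""
    return math.log(y_plus) / KAPPA + B - delta_u_plus

smooth = u_plus(1000.0)
rough = u_plus(1000.0, delta_u_plus=3.0)  # rough wall: lower u+ at the same y+
```

The velocity deficit between the two curves is constant in y⁺, which is why roughness is often characterized by the single shift ΔU⁺.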
Development of an Aeroelastic Analysis Including a Viscous Flow Model
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Bakhle, Milind A.
2001-01-01
Under this grant, Version 4 of the three-dimensional Navier-Stokes aeroelastic code (TURBO-AE) has been developed and verified. The TURBO-AE Version 4 aeroelastic code allows flutter calculations for a fan, compressor, or turbine blade row. This code models a vibrating three-dimensional bladed disk configuration and the associated unsteady flow (including shocks, and viscous effects) to calculate the aeroelastic instability using a work-per-cycle approach. Phase-lagged (time-shift) periodic boundary conditions are used to model the phase lag between adjacent vibrating blades. The direct-store approach is used for this purpose to reduce the computational domain to a single interblade passage. A disk storage option, implemented using direct access files, is available to reduce the large memory requirements of the direct-store approach. Other researchers have implemented 3D inlet/exit boundary conditions based on eigen-analysis. Appendix A: Aeroelastic calculations based on three-dimensional euler analysis. Appendix B: Unsteady aerodynamic modeling of blade vibration using the turbo-V3.1 code.
Polarimetric Models of Circumstellar Discs Including Aggregate Dust Grains
NASA Astrophysics Data System (ADS)
Mohan, Mahesh
output files and to apply a size distribution to the data. The second circumstellar disc investigated is the debris disc of the M dwarf star AU Mic. The disc was modelled, using the radiative transfer code Hyperion, based on F606W (HST) and JHK′-band (Keck II) scattered light observations and F606W-band polarized light observations. Initially, the disc is modelled as a two-component structure using two grain types: compact silicate grains and porous dirty water ice. Both models are able to reproduce the observed SED and the F606W and H-band surface brightness profiles, but are unable to fit the observed F606W degree of polarization. Therefore, a more complex/realistic grain model (ballistic aggregate particles) was examined. In addition, recent millimetre observations suggest the existence of a planetesimal belt < 3 AU from the central star. Including this belt in the BAM2 model proved successful in fitting the observed SED, the F606W and H-band surface brightness, and the F606W polarization. These results demonstrate the limitations of spherical grain models and indicate the importance of modelling more realistic dust grains.
A model for including thermal conduction in molecular dynamics simulations
NASA Technical Reports Server (NTRS)
Wu, Yue; Friauf, Robert J.
1989-01-01
A technique is introduced for including thermal conduction in molecular dynamics simulations for solids. A model is developed to allow energy flow between the computational cell and the bulk of the solid when periodic boundary conditions cannot be used. Thermal conduction is achieved by scaling the velocities of atoms in a transitional boundary layer. The scaling factor is obtained from the thermal diffusivity, and the results show good agreement with the solution for a continuous medium at long times. The effects of different system temperatures and sizes, and of variations in the strength parameter, atomic mass, and thermal diffusivity, were investigated; in all cases, no significant change in the simulation results was found.
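The velocity-scaling step described above can be sketched as follows (a minimal illustration; the array names and the uniform scaling factor are assumptions, not the paper's code):

```python
import numpy as np

def rescale_boundary_velocities(v, in_boundary, t_current, t_target):
    # Kinetic temperature scales with v^2, so multiplying velocities by
    # sqrt(T_target / T_current) moves the boundary layer toward the target
    # temperature dictated by the continuum conduction solution.
    factor = np.sqrt(t_target / t_current)
    v = v.copy()
    v[in_boundary] *= factor
    return v

rng = np.random.default_rng(0)
v = rng.normal(size=(100, 3))          # velocities of 100 atoms
mask = np.zeros(100, dtype=bool)
mask[:20] = True                       # atoms in the transitional layer
v_new = rescale_boundary_velocities(v, mask, t_current=300.0, t_target=330.0)
```

Only the boundary-layer atoms are rescaled; atoms in the interior of the computational cell evolve by ordinary Newtonian dynamics.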
Configuration based Collisional-Radiative Model including configuration interaction
NASA Astrophysics Data System (ADS)
Busquet, Michel
2007-11-01
Atomic level mixing through Configuration Interaction (CI) yields important effects. It transfers oscillator strengths from allowed lines to forbidden lines, and produces strong shifts and broadening of line arrays, although the total emissivity is almost insensitive to CI, being proportional to the average wave number. However, for high-Z materials like Xe or Sn (a potential XUV source for micro-lithography), a non-LTE calculation accounting for all relevant levels would be intractable, with billions of states. The model we constructed, CAVCRM (café-crème), is a non-LTE collisional-radiative model whose states are configurations, but it includes CI to give the full richness of spectral quantities, using the latest version of the HULLAC-v9 suite of codes and our newly developed algorithm for large sets of states, with as many as 50,000 states [1]. [1] M. Klapisch et al., this conference
Modeling potentiometric measurements in topological insulators including parallel channels
NASA Astrophysics Data System (ADS)
Hong, Seokmin; Diep, Vinh; Datta, Supriyo; Chen, Yong P.
2012-08-01
The discovery of spin-polarized states at the surface of three-dimensional topological insulators (TI) like Bi2Te3 and Bi2Se3 motivates intense interests in possible electrical measurements demonstrating unique signatures of these unusual states. Here we show that a three-terminal potentiometric set-up can be used to probe them by measuring the voltage change of a detecting magnet upon reversing its magnetization. We present numerical results using a nonequilibrium Green's function (NEGF)-based model to show the corresponding signal quantitatively in various transport regimes. We then provide an analytical expression for the resistance (the measured voltage difference divided by an applied current) that agrees with NEGF results well in both ballistic and diffusive limits. This expression is applicable to TI surface states, two-dimensional electrons with Rashba spin-split bands, and any combination of multiple channels, including bulk parallel states in TI, which makes it useful in analyzing experimental results.
A New Climate Adjustment Tool: An update to EPA’s Storm Water Management Model
The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations.
A new solar cycle model including meridional circulation
NASA Technical Reports Server (NTRS)
Wang, Y.-M.; Sheeley, N. R., Jr.; Nash, A. G.
1991-01-01
A kinematic model is presented for the solar cycle which includes not only the transport of magnetic flux by supergranular diffusion and a poleward bulk flow at the sun's surface, but also the effects of turbulent diffusion and an equatorward 'return flow' beneath the surface. As in the earlier models of Babcock and Leighton, the rotational shearing of a subsurface poloidal field generates toroidal flux that erupts at the surface in the form of bipolar magnetic regions. However, such eruptions do not result in any net loss of toroidal flux from the sun (as assumed by Babcock and Leighton); instead, the large-scale toroidal field is destroyed both by 'unwinding' as the local poloidal field reverses its polarity, and by diffusion as the toroidal flux is transported equatorward by the subsurface flow and merged with its opposite-hemisphere counterpart. The inclusion of meridional circulation allows stable oscillations of the magnetic field, accompanied by the equatorward progression of flux eruptions, to be achieved even in the absence of a radial gradient in the angular velocity. An illustrative case is presented in which a subsurface flow speed of order 1 m/s and a subsurface diffusion rate of order 10 sq km/s yield 22-yr oscillations in qualitative agreement with observations.
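The transport ingredients (diffusion plus a bulk flow) can be caricatured with one explicit timestep of a 1-D advection-diffusion update (a toy, periodic discretization with illustrative grid spacing, timestep, and parameter values; not the paper's axisymmetric model with a subsurface return flow):

```python
import numpy as np

n = 180
dx = 2.4e7                 # grid spacing along the surface, m (assumed)
dt = 8.64e4                # one day, s
v, kappa = 10.0, 6.0e8     # flow speed m/s; diffusivity m^2/s (600 km^2/s)

x = np.arange(n) * dx
B = np.exp(-((x - x.mean()) ** 2) / (20 * dx) ** 2)  # a flux concentration

# Centered diffusion plus first-order upwind advection (stable here since
# kappa*dt/dx^2 ~ 0.09 and v*dt/dx ~ 0.04 satisfy the CFL limits).
diff = kappa * (np.roll(B, -1) - 2 * B + np.roll(B, 1)) / dx**2
adv = -v * (B - np.roll(B, 1)) / dx
B_new = B + dt * (diff + adv)
```

With periodic boundaries the update conserves total flux exactly and, being a convex combination of neighbors, cannot create new extrema.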
Modeling fluvial incision and transient landscape evolution: Influence of dynamic channel adjustment
NASA Astrophysics Data System (ADS)
Attal, M.; Tucker, G. E.; Whittaker, A. C.; Cowie, P. A.; Roberts, G. P.
2008-09-01
Channel geometry exerts a fundamental control on fluvial processes. Recent work has shown that bedrock channel width depends on a number of parameters, including channel slope, and is not solely a function of drainage area as is commonly assumed. The present work represents the first attempt to investigate the consequences of dynamic, gradient-sensitive channel adjustment for drainage-basin evolution. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to analyze the response of a catchment to a given tectonic perturbation, using, as a template, the topography of a well-documented catchment in the footwall of an active normal fault in the Apennines (Italy) that is known to be undergoing a transient response to tectonic forcing. We show that the observed transient response can be reproduced to first order with a simple detachment-limited fluvial incision law. The transient landscape is characterized by gentler gradients and a shorter response time when dynamic channel adjustment is allowed. The differences in predicted channel geometry between the static case (width dependent solely on upstream area) and the dynamic case (width dependent on both drainage area and channel slope) lead to contrasting landscape morphologies when integrated at the scale of a whole catchment, particularly in the presence of strong tilting and/or pronounced slip-rate acceleration. Our results emphasize the importance of channel width in controlling fluvial processes and landscape evolution. They stress the need for using a dynamic hydraulic scaling law when modeling landscape evolution, particularly when the relative uplift field is nonuniform.
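The contrast between the static and gradient-sensitive width closures can be illustrated with generic hydraulic scaling laws (the coefficients and exponents below are common literature values, not those of the CHILD runs in the paper):

```python
def width_static(area_m2, k=0.005, b=0.5):
    # Static closure: width depends on drainage area only, W = k * A^b.
    return k * area_m2 ** b

def width_dynamic(area_m2, slope, k=0.02, b=0.5, c=0.25):
    # Gradient-sensitive closure: steeper channels narrow, W = k * A^b * S^-c.
    return k * area_m2 ** b * slope ** -c
```

In the dynamic case, a knickzone steepened by a slip-rate acceleration narrows, concentrating unit stream power and shortening the response time, which the static closure cannot capture.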
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, Anne B.; Lizarraga, Joy S.
1996-01-01
Statistical operations termed model-adjustment procedures can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each procedure is a form of regression analysis in which the local data base is used as a calibration data set; the resulting adjusted regression models can then be used to predict storm-runoff quality at unmonitored sites. Statistical tests of the calibration data set guide selection among proposed procedures.
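In the simplest such procedure, the local calibration observations are regressed on the regional-model prediction, and the fitted line is then applied at unmonitored sites. A minimal sketch with made-up calibration data:

```python
import numpy as np

# Hypothetical local calibration data: observed storm loads at monitored
# sites and the corresponding regional-model predictions P.
P_regional = np.array([1.2, 2.5, 0.8, 3.1, 1.9])   # regional predictions
observed   = np.array([1.5, 2.9, 1.1, 3.8, 2.2])   # local observations

# Single-factor adjustment: least-squares fit of observed on P.
A = np.vstack([np.ones_like(P_regional), P_regional]).T
(b0, b1), *_ = np.linalg.lstsq(A, observed, rcond=None)

def adjusted_prediction(p):
    # Adjusted estimate for an unmonitored site with regional prediction p.
    return b0 + b1 * p
```

Because the fit minimizes squared error over all affine corrections, the adjusted model can do no worse than the raw regional prediction on the calibration set.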
Goldilocks models of higher-dimensional inflation (including modulus stabilization)
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Enns, Jared J. H.; Hayman, Peter; Patil, Subodh P.
2016-08-01
We explore the mechanics of inflation within simplified extra-dimensional models involving an inflaton interacting with the Einstein-Maxwell system in two extra dimensions. The models are Goldilocks-like inasmuch as they are just complicated enough to include a mechanism to stabilize the extra-dimensional size (or modulus), yet simple enough to solve explicitly the full extra-dimensional field equations using only simple tools. The solutions are not restricted to the effective 4D regime with H ≪ m_KK (the latter referring to the characteristic mass splitting of the Kaluza-Klein excitations) because the full extra-dimensional Einstein equations are solved. This allows an exploration of inflationary physics in a controlled calculational regime away from the usual four-dimensional lamp-post. The inclusion of modulus stabilization is important because experience with string models teaches that this is usually what makes models fail: stabilization energies easily dominate the shallow potentials required by slow roll and so open up directions to evolve that are steeper than those of the putative inflationary direction. We explore (numerically and analytically) three representative kinds of inflationary scenarios within this simple setup. In one the radion is trapped in an inflaton-dependent local minimum whose non-zero energy drives inflation. Inflation ends as this energy relaxes to zero when the inflaton finds its own minimum. The other two involve power-law scaling solutions during inflation. One of these is a dynamical attractor whose features are relatively insensitive to initial conditions but whose slow-roll parameters cannot be arbitrarily small; the other is not an attractor but can roll much more slowly, until eventually transitioning to the attractor. The scaling solutions can satisfy H > m_KK, but when they do standard 4D fluctuation calculations need not apply. When in a 4D regime the solutions predict η ≃ 0 and so r ≃ 0.11 when n_s ≃ 0.96 and so
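The quoted numbers are consistent with the standard single-field slow-roll relations n_s - 1 = -6ε + 2η and r = 16ε (a generic textbook check, not the paper's full extra-dimensional calculation):

```python
# With eta = 0, n_s = 0.96 forces eps = (1 - n_s) / 6, and hence r ~ 0.11.
n_s, eta = 0.96, 0.0
eps = (2 * eta - (n_s - 1)) / 6
r = 16 * eps
print(round(r, 2))  # -> 0.11
```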
Modeling of an Adjustable Beam Solid State Light Project
NASA Technical Reports Server (NTRS)
Clark, Toni
2015-01-01
This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.
Energy loss in a partonic transport model including bremsstrahlung processes
Fochler, Oliver; Greiner, Carsten; Xu, Zhe
2010-08-15
A detailed investigation of the energy loss of gluons that traverse a thermal gluonic medium simulated within the perturbative QCD-based transport model BAMPS (a Boltzmann approach to multiparton scatterings) is presented in the first part of this work. For simplicity the medium response is neglected in these calculations. The energy loss from purely elastic interactions is compared with the case where radiative processes are consistently included based on the matrix element by Gunion and Bertsch. From this comparison, gluon multiplication processes gg → ggg are found to be the dominant source of energy loss within the approach employed here. The consequences for the quenching of gluons with high transverse momentum in fully dynamic simulations of Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energy of √s = 200A GeV are discussed in the second major part of this work. The results for central collisions as discussed in a previous publication are revisited, and first results on the nuclear modification factor R_AA for noncentral Au+Au collisions are presented. They show a decreased quenching compared to central collisions while retaining the same shape. The investigation of the elliptic flow v_2 is extended up to nonthermal transverse momenta of 10 GeV, exhibiting a maximum v_2 at roughly 4 to 5 GeV and a subsequent decrease. Finally the sensitivity of the aforementioned results on the specific implementation of the effective modeling of the Landau-Pomeranchuk-Migdal (LPM) effect via a formation-time-based cutoff is explored.
Circumplex and Spherical Models for Child School Adjustment and Competence.
ERIC Educational Resources Information Center
Schaefer, Earl S.; Edgerton, Marianna
The goal of this study is to broaden the scope of a conceptual model for child behavior by analyzing constructs relevant to cognition, conation, and affect. Two samples were drawn from school populations. For the first sample, 28 teachers from 8 rural, suburban, and urban schools rated 193 kindergarten children. Each teacher rated up to eight…
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
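The weighted-combination idea (MAP-W) can be sketched with an inverse-error-variance weight on the two predictions (an illustrative weighting scheme with made-up data; the report's exact weight formula may differ):

```python
import numpy as np

# Calibration data: observations plus regional and local-regression predictions.
obs      = np.array([1.5, 2.9, 1.1, 3.8, 2.2])
P_region = np.array([1.2, 2.5, 0.8, 3.1, 1.9])
P_local  = np.array([1.6, 3.0, 1.0, 3.5, 2.4])

# Weight each predictor by the inverse of its calibration error variance.
var_r = np.var(obs - P_region)
var_l = np.var(obs - P_local)
w = (1 / var_r) / (1 / var_r + 1 / var_l)   # weight on the regional model

P_w = w * P_region + (1 - w) * P_local      # MAP-W style combined prediction
```

The less accurate predictor on the calibration set automatically receives the smaller weight.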
A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment
NASA Astrophysics Data System (ADS)
Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon
2016-05-01
We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with peak thickness of 4000 m, and surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.
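The sea-level-equivalent figure can be sanity-checked with the standard conversion from ice volume (the constants are textbook values, and the example volume is chosen for illustration, not taken from NAICE):

```python
RHO_ICE, RHO_SEAWATER = 917.0, 1028.0   # kg/m^3
OCEAN_AREA = 3.62e14                     # m^2, present-day ocean area

def sle_metres(ice_volume_m3):
    # Metres of global mean sea-level rise if the given ice volume melts.
    return ice_volume_m3 * (RHO_ICE / RHO_SEAWATER) / OCEAN_AREA

# Roughly 3 million cubic kilometres of ice corresponds to ~7.5 m SLE:
print(round(sle_metres(3.05e15), 1))  # -> 7.5
```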
Comparison of the Properties of Regression and Categorical Risk-Adjustment Models
Averill, Richard F.; Muldoon, John H.; Hughes, John S.
2016-01-01
Clinical risk-adjustment, the ability to standardize the comparison of individuals with different health needs, is based upon 2 main alternative approaches: regression models and clinical categorical models. In this article, we examine the impact of the differences in the way these models are constructed on end user applications. PMID:26945302
ERIC Educational Resources Information Center
Olejnik, Stephen; Mills, Jamie; Keselman, Harvey
2000-01-01
Evaluated the use of Mallows' C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
Evaluation of annual, global seismicity forecasts, including ensemble models
NASA Astrophysics Data System (ADS)
Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner
2013-04-01
In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010, and 2011; each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature characterizing the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
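A minimal version of likelihood-weighted ensembling for gridded Poisson forecasts looks like this (the likelihood-ratio weighting is a simple stand-in for the paper's gambling-score scheme, and the forecast numbers are invented):

```python
import math
import numpy as np

def poisson_loglike(rates, counts):
    # Joint log-likelihood of observed cell counts under a gridded forecast.
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
               for lam, k in zip(rates, counts))

# Two hypothetical forecasts over four cells, and the observed counts.
model_a = np.array([0.5, 1.0, 0.2, 0.3])
model_b = np.array([0.1, 2.0, 0.1, 0.8])
observed = np.array([1, 1, 0, 0])

# Normalized likelihood weights, then a weighted ensemble rate per cell.
ll = np.array([poisson_loglike(model_a, observed),
               poisson_loglike(model_b, observed)])
w = np.exp(ll - ll.max())
w /= w.sum()
ensemble = w[0] * model_a + w[1] * model_b
```

Because the ensemble is a convex combination, each cell's rate lies between the two models' rates, and the better-scoring model dominates the mixture.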
Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.
2014-12-01
This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly into flood simulations. We apply a recently developed retrieval-error and resolution-effect correction method (Zhang et al. 2013*) to the NOAA Climate Prediction Center morphing technique (CMORPH) product, based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates for tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the southern Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate the propagation of the bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
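The core idea of a rain-rate-dependent correction derived from a reference rainfall field can be sketched as a per-intensity-bin multiplicative factor (a strong simplification with synthetic data; not the actual Zhang et al. 2013 algorithm):

```python
import numpy as np

def bin_correction(sat, ref, edges):
    # One multiplicative factor per intensity bin: mean(reference) / mean(satellite).
    factors = np.ones(len(edges) - 1)
    for i in range(len(edges) - 1):
        m = (sat >= edges[i]) & (sat < edges[i + 1])
        if m.any() and sat[m].mean() > 0:
            factors[i] = ref[m].mean() / sat[m].mean()
    return factors

def apply_correction(sat, edges, factors):
    # Map each satellite rate to its bin and scale it.
    idx = np.clip(np.digitize(sat, edges) - 1, 0, len(factors) - 1)
    return sat * factors[idx]

# Synthetic example: the satellite underestimates all rates by half.
sat = np.array([0.5, 2.0, 5.0, 12.0, 30.0])   # mm/h
ref = 2.0 * sat                                # e.g., NWP rainfall
edges = np.array([0.0, 1.0, 10.0, 100.0])
corrected = apply_correction(sat, edges, bin_correction(sat, ref, edges))
```

Binning by intensity lets the correction grow with rain rate, which is exactly the regime where CMORPH biases matter most for flood peaks.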
Block adjustment of Chang'E-1 images based on rational function model
NASA Astrophysics Data System (ADS)
Liu, Bin; Liu, Yiliang; Di, Kaichang; Sun, Xiliang
2014-05-01
Chang'E-1 (CE-1) is the first lunar orbiter of China's lunar exploration program. The CCD camera carried by CE-1 has acquired stereo images covering the entire lunar surface. Block adjustment and 3D mapping using CE-1 images are of great importance for morphological and other scientific research of the Moon. Traditional block adjustment based on a rigorous sensor model is complicated due to the large number of parameters and possible correlations among them. To tackle this problem, this paper presents a block adjustment method using the Rational Function Model (RFM). The RFM parameters are generated from the rigorous sensor model using a virtual grid of control points. Afterwards, the RFM-based block adjustment solves for the refinement parameters through a least-squares solution. Experimental results using CE-1 images located in Sinus Iridum show that the RFM can fit the rigorous sensor model with a high precision, at the 1%-pixel level. Through the RFM-based block adjustment, the back-projection residuals in image space can be reduced from around 1.5 pixels to sub-pixel, indicating that the RFM can replace the rigorous sensor model for geometric processing of lunar images.
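The refinement step can be sketched as fitting an affine bias correction in image space to the residuals at control points by least squares (the coordinates and the constant bias below are invented; this is not the CE-1 pipeline):

```python
import numpy as np

# RFM-projected image coordinates of four control points (pixels), and the
# "measured" coordinates offset by a simulated constant bias.
proj = np.array([[100.2, 200.5], [300.1, 150.9], [250.7, 400.3], [50.4, 90.8]])
meas = proj + np.array([1.5, -0.8])

# Affine correction a0 + a1*x + a2*y per image axis, fitted by least squares.
A = np.hstack([np.ones((len(proj), 1)), proj])
coef_x, *_ = np.linalg.lstsq(A, meas[:, 0] - proj[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, meas[:, 1] - proj[:, 1], rcond=None)

corrected = proj + np.column_stack([A @ coef_x, A @ coef_y])
residual = np.abs(corrected - meas).max()   # sub-pixel after adjustment
```

A constant bias is absorbed entirely by the affine terms, which mirrors how such refinement drives pixel-level residuals down to sub-pixel.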
A numerical model including PID control of a multizone crystal growth furnace
NASA Technical Reports Server (NTRS)
Panzarella, Charles H.; Kassemi, Mohammad
1992-01-01
This paper presents a 2D axisymmetric combined conduction and radiation model of a multizone crystal growth furnace. The model is based on a programmable multizone furnace (PMZF) designed and built at NASA Lewis Research Center for growing high quality semiconductor crystals. A novel feature of this model is a control algorithm which automatically adjusts the power in any number of independently controlled heaters to establish the desired crystal temperatures in the furnace model. The control algorithm eliminates the need for numerous trial-and-error runs previously required to obtain the same results. The finite element code, FIDAP, used to develop the furnace model, was modified to directly incorporate the control algorithm. This algorithm, which presently uses PID control, and the associated heat transfer model are briefly discussed. Together, they have been used to predict the heater power distributions for a variety of furnace configurations and desired temperature profiles. Examples are included to demonstrate the effectiveness of the PID controlled model in establishing isothermal, Bridgman, and other complicated temperature profiles in the sample. Finally, an example is given to show how the algorithm can be used to change the desired profile with time according to a prescribed temperature-time evolution.
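A discrete PID update of the kind described, driving one heater zone toward its setpoint, can be sketched as follows (the gains and the first-order thermal response are toy values, not the FIDAP implementation):

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        # Standard discrete PID: proportional + integral + derivative terms.
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order thermal zone toward a 1200 K setpoint.
pid, temp = PID(kp=0.5, ki=0.05, kd=0.01, dt=1.0), 300.0
for _ in range(500):
    power = pid.update(1200.0, temp)
    temp += 0.1 * (power - 0.01 * (temp - 300.0))  # toy heating/loss model
```

The integral term is what removes the steady-state offset: at equilibrium it holds exactly the power needed to balance heat loss at the setpoint.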
Constitutive modelling of evolving flow anisotropy including distortional hardening
Pietryga, Michael P.; Vladimirov, Ivaylo N.; Reese, Stefanie
2011-05-04
The paper presents a new constitutive model for anisotropic metal plasticity that takes into account the expansion or contraction (isotropic hardening), translation (kinematic hardening) and change of shape (distortional hardening) of the yield surface. The experimentally observed region of high curvature ('nose') on the yield surface in the loading direction and flattened shape in the reverse loading direction are modelled here by means of the concept of directional distortional hardening. The modelling of directional distortional hardening is accomplished by means of an evolving fourth-order tensor. The applicability of the model is illustrated by fitting experimental subsequent yield surfaces at finite plastic deformation. Comparisons with test data for low- and high-work-hardening aluminium alloys display a good agreement between the simulation results and the experimental data.
Richardson, David B.; Laurier, Dominique; Schubauer-Berigan, Mary K.; Tchetgen, Eric Tchetgen; Cole, Stephen R.
2014-01-01
Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950–2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950–2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer—a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented. PMID:25245043
Attar-Schwartz, Shalhevet
2015-09-01
Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the association between emotional closeness to parents and adolescent adjustment was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes. PMID:26237053
Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States
ERIC Educational Resources Information Center
Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.
2007-01-01
Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…
Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure
ERIC Educational Resources Information Center
Vandsburger, Etty; Biggerstaff, Marilyn A.
2004-01-01
This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model) examining the effects resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…
A Model of Divorce Adjustment for Use in Family Service Agencies.
ERIC Educational Resources Information Center
Faust, Ruth Griffith
1987-01-01
Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…
ERIC Educational Resources Information Center
Nettles, Saundra Murray; Caughy, Margaret O'Brien; O'Campo, Patricia J.
2008-01-01
Examining recent research on neighborhood influences on child development, this review focuses on social influences on school adjustment in the early elementary years. A model to guide community research and intervention is presented. The components of the model of integrated processes are neighborhoods and their effects on academic outcomes and…
Risk adjustment of Medicare capitation payments using the CMS-HCC model.
Pope, Gregory C; Kautter, John; Ellis, Randall P; Ash, Arlene S; Ayanian, John Z; Iezzoni, Lisa I; Ingber, Melvin J; Levy, Jesse M; Robst, John
2004-01-01
This article describes the CMS hierarchical condition categories (HCC) model implemented in 2004 to adjust Medicare capitation payments to private health care plans for the health expenditure risk of their enrollees. We explain the model's principles, elements, organization, calibration, and performance. Modifications to reduce plan data reporting burden and adaptations for disabled, institutionalized, newly enrolled, and secondary payer subpopulations are discussed. PMID:15493448
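The additive structure of an HCC-style score can be illustrated with made-up coefficients (the real CMS-HCC calibration values differ and are published separately):

```python
# Hypothetical relative-risk coefficients: one demographic cell plus
# hierarchical condition categories that sum into a single risk score.
DEMOG = {("F", "70-74"): 0.40, ("M", "70-74"): 0.45}
HCC_COEF = {"diabetes_no_compl": 0.16, "chf": 0.37, "copd": 0.33}

def risk_score(sex, age_band, hccs):
    # Additive model: demographic base rate plus one coefficient per HCC.
    return DEMOG[(sex, age_band)] + sum(HCC_COEF[h] for h in hccs)

score = risk_score("F", "70-74", ["chf", "copd"])  # 0.40 + 0.37 + 0.33
payment = 800.0 * score   # base capitation rate scaled by relative risk
```

A score of 1.0 represents average expected expenditure, so this enrollee would draw 1.10 times the base capitation payment.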
Community Influences on Adjustment in First Grade: An Examination of an Integrated Process Model
ERIC Educational Resources Information Center
Caughy, Margaret O'Brien; Nettles, Saundra M.; O'Campo, Patricia J.
2007-01-01
We examined the impact of neighborhood characteristics both directly and indirectly as mediated by parent coaching and the parent/child affective relationship on behavioral and school adjustment in a sample of urban dwelling first graders. We used structural equations modeling to assess model fit and estimate direct, indirect, and total effects of…
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Lee-Rausch, E. M.
2012-01-01
Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.
Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming
2013-05-01
Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events: an approximate power-law distribution x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power-law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists; however, an interesting new phase transition emerges whereby the cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
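The heavy-tailed severity statistics described above can be sketched numerically: draw event severities from a continuous power law p(x) ∝ x^(-α) with α = 2.5 by inverse-transform sampling, then recover the exponent with the standard maximum-likelihood (Hill) estimator. This is an illustrative sketch of the empirical pattern, not the authors' coalescence-fragmentation model; all names and parameter values are assumptions.

```python
import math
import random

def sample_power_law(n, alpha=2.5, x_min=1.0, seed=0):
    """Inverse-transform samples from p(x) ~ x^-alpha for x >= x_min."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(xs, x_min=1.0):
    """Hill / maximum-likelihood estimator: alpha_hat = 1 + n / sum(ln(x/x_min))."""
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

severities = sample_power_law(20000)
alpha_hat = mle_exponent(severities)  # should recover ~2.5
```

With 20,000 samples the estimator's standard error is roughly (α-1)/√n ≈ 0.01, so the recovered exponent sits very close to 2.5.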
Cement-aggregate compatibility and structure property relationships including modelling
Jennings, H.M.; Xi, Y.
1993-07-15
The role of aggregate, and its interface with cement paste, is discussed with a view toward establishing models that relate structure to properties. Both short (nm) and long (mm) range structure must be considered. The short range structure of the interface depends not only on the physical distribution of the various phases, but also on moisture content and reactivity of aggregate. Changes that occur on drying, i.e. shrinkage, may alter the structure which, in turn, feeds back to alter further drying and shrinkage. The interaction is dynamic, even without further hydration of cement paste, and the dynamic characteristic must be considered in order to fully understand and model its contribution to properties. Microstructure and properties are two subjects which have been pursued somewhat separately. This review discusses both disciplines with a view toward finding common research goals in the future. Finally, comment is made on possible chemical reactions which may occur between aggregate and cement paste.
Digital elevation model visibility including Earth's curvature and atmosphere refraction
NASA Astrophysics Data System (ADS)
Santossilva, Ewerton; Vieiradias, Luiz Alberto
1990-03-01
There are some instances in which the Earth's curvature and the atmospheric refraction, optical or electronic, are important factors when digital elevation models are used for visibility calculations. This work deals with this subject, suggesting a practical approach to solve this problem. Some examples, from real terrain data, are presented. The equipment used was an IBM-PC-like computer with a SITIM graphics card.
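A standard way to fold both effects into DEM line-of-sight calculations is the effective-Earth-radius (k-factor) approximation, in which refraction is modeled by inflating the Earth's radius; the abstract does not specify the authors' exact formulation, so this is a generic sketch.

```python
def horizon_drop_m(distance_m, k=4.0 / 3.0, earth_radius_m=6371000.0):
    """Apparent drop of the terrain below a horizontal sightline at a given
    range, using the effective-Earth-radius approximation:
        drop ~= d^2 / (2 * k * R)
    k = 4/3 is the conventional factor for standard atmospheric refraction;
    k = 1 ignores refraction and models curvature alone."""
    return distance_m ** 2 / (2.0 * k * earth_radius_m)

drop_refracted = horizon_drop_m(10000.0)         # ~5.9 m at 10 km
drop_vacuum = horizon_drop_m(10000.0, k=1.0)     # ~7.8 m without refraction
```

Refraction bends rays toward the ground, so the effective drop shrinks and visibility extends slightly beyond the geometric horizon.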
Contact angle adjustment in equation-of-state-based pseudopotential model
NASA Astrophysics Data System (ADS)
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
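A geometry-based contact-angle determination, in the spirit of the method described here, can be illustrated with the spherical-cap relation tan(θ/2) = h/a, where h is the apex height of a sessile drop and a its contact-line radius. This is a generic sketch for extracting the angle from interface geometry, not the paper's specific pseudopotential adjustment scheme.

```python
import math

def contact_angle_deg(base_radius, height):
    """Contact angle of a sessile drop from spherical-cap geometry:
    tan(theta/2) = h / a, with a the contact-line radius and h the apex
    height of the drop measured from the wall."""
    return math.degrees(2.0 * math.atan2(height, base_radius))

theta_hemisphere = contact_angle_deg(1.0, 1.0)  # hemispherical cap -> 90 deg
theta_flat = contact_angle_deg(1.0, 0.2)        # flattened (wetting) drop
```

Measuring the angle from the simulated interface shape, rather than imposing it through near-wall density, is what decouples the prescribed contact angle from the surface tension.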
Huang, Lam O.; Infante-Rivard, Claire; Labbe, Aurélie
2016-01-01
Transmission of the two parental alleles to offspring deviating from the Mendelian ratio is termed Transmission Ratio Distortion (TRD), which occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unknown in humans. The Transmission Disequilibrium Test (TDT) was first proposed to test for association and linkage in case-trios (affected offspring and parents); adjusting for TRD using control-trios was recommended. However, the TDT does not provide risk parameter estimates for different genetic models. A loglinear model was later proposed to provide child and maternal relative risk (RR) estimates of disease, assuming Mendelian transmission. Results from our simulation study showed that case-trio RR estimates using this model are biased in the presence of TRD; power and Type 1 error are compromised. We propose an extended loglinear model adjusting for TRD. Under this extended model, RR estimates, power and Type 1 error are correctly restored. We applied this model to an intrauterine growth restriction dataset, and showed results consistent with a previous approach that adjusted for TRD using control-trios. Our findings suggest the need to adjust for TRD to avoid spurious results. Documenting TRD in the population is therefore essential for the correct interpretation of genetic association studies.
Modeling shelter-in-place including sorption on indoor surfaces
Chan, Wanyu R.; Price, Phillip N.; Gadgil, Ashok J.; Nazaroff, William W.; Loosmore, Gwen A.; Sugiyama, Gayle A.
2003-11-01
Intentional or accidental large-scale airborne toxic releases (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. As part of an emergency response plan, shelter-in-place (SIP) can be an effective response option, especially when evacuation is infeasible. Reasonably tight building envelopes provide some protection against exposure to peak concentrations when a toxic release passes over an area. They also provide some protection in terms of cumulative exposure, if SIP is terminated promptly after the outdoor plume has passed. The purpose of this work is to quantify the level of protection offered by existing houses, and the importance of sorption/desorption to and from surfaces for the effectiveness of SIP. We examined a hypothetical chlorine gas release scenario simulated by the National Atmospheric Release Advisory Center (NARAC). We used a standard infiltration model to calculate the distribution of time-dependent infiltration rates within each census tract. Large variation in the air tightness of dwellings makes some houses more protective than others. Considering only the median air tightness, model results showed that if the population sheltered indoors, the total population intake of a non-sorbing toxic gas 4 hours from the start of the release is only 50% of the outdoor level. Based on a sorption/desorption model by Karlsson and Huber (1996), we calculated that the sorption process would lower the total intake of the population by an additional 50%. The potential benefit of SIP can be considerably higher if the comparison is made in terms of health effects, because of the non-linear acute-effect dose-response curves of many chemical warfare agents and toxic industrial substances.
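The core mechanism can be sketched with a well-mixed indoor box model: infiltration exchanges air at rate λ, and sorption acts as a first-order sink. The rate constants and the square outdoor plume below are illustrative placeholders, not the study's NARAC-driven inputs.

```python
def shelter_in_place(c_out, lam=0.5, k_sorb=0.35, dt=0.01):
    """Well-mixed indoor box model integrated by explicit Euler:
        dC_in/dt = lam * (C_out(t) - C_in) - k_sorb * C_in
    lam:    air-exchange (infiltration) rate, 1/h
    k_sorb: first-order sorption sink on indoor surfaces, 1/h
    c_out:  outdoor concentrations sampled every dt hours
    Returns the indoor concentration time series."""
    c_in, series = 0.0, []
    for c in c_out:
        c_in += dt * (lam * (c - c_in) - k_sorb * c_in)
        series.append(c_in)
    return series

# square outdoor plume: unit concentration for 1 h, clean air afterwards
dt, hours = 0.01, 4.0
plume = [1.0 if i * dt < 1.0 else 0.0 for i in range(int(hours / dt))]
indoor = shelter_in_place(plume)
```

The indoor peak stays well below the outdoor peak, and with a sorption sink the cumulative indoor intake over the sheltering period is a fraction of the outdoor exposure.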
Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces
Lomov, I; Antoun, T; Vorobiev, O
2009-12-16
Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints become comparable to the zone size, Lagrangian (even non-conforming) meshes could suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L which uses an explicit treatment of joints via common plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. In the
A Model for Axial Magnetic Bearings Including Eddy Currents
NASA Technical Reports Server (NTRS)
Kucera, Ladislav; Ahrens, Markus
1996-01-01
This paper presents an analytical method of modelling eddy currents inside axial bearings. The problem is solved by dividing an axial bearing into elementary geometric forms, solving the Maxwell equations for these simplified geometries, defining boundary conditions and combining the geometries. The final result is an analytical solution for the flux, from which the impedance and the force of an axial bearing can be derived. Several impedance measurements have shown that the analytical solution can fit the measured data with a precision of approximately 5%.
Finite difference modeling of rotor flows including wake effects
NASA Technical Reports Server (NTRS)
Caradonna, F. X.; Desopper, A.; Tung, C.
1982-01-01
Rotary wing finite difference methods are investigated. The main concern is the specification of boundary conditions to properly account for the effect of the wake on the blade. Examples are given of an approach where wake effects are introduced by specifying an equivalent angle of attack. An alternate approach is also given where discrete vortices are introduced into the finite difference grid. The resulting computations of hovering and high advance ratio cases compare well with experiment. Some consideration is also given to the modeling of low to moderate advance ratio flows.
A new glacial isostatic adjustment model of the Innuitian Ice Sheet, Arctic Canada
NASA Astrophysics Data System (ADS)
Simon, K. M.; James, T. S.; Dyke, A. S.
2015-07-01
A reconstruction of the Innuitian Ice Sheet (IIS) is developed that incorporates first-order constraints on its spatial extent and history as suggested by regional glacial geology studies. Glacial isostatic adjustment modelling of this ice sheet provides relative sea-level predictions that are in good agreement with measurements of post-glacial sea-level change at 18 locations. The results indicate peak thicknesses of the Innuitian Ice Sheet of approximately 1600 m, up to 400 m thicker than the minimum peak thicknesses estimated from glacial geology studies, but between approximately 1000 to 1500 m thinner than the peak thicknesses present in previous GIA models. The thickness history of the best-fit Innuitian Ice Sheet model developed here, termed SJD15, differs from the ICE-5G reconstruction and provides an improved fit to sea-level measurements from the lowland sector of the ice sheet. Both models provide a similar fit to relative sea-level measurements from the alpine sector. The vertical crustal motion predictions of the best-fit IIS model are in general agreement with limited GPS observations, after correction for a significant elastic crustal response to present-day ice mass change. The new model provides approximately 2.7 m equivalent contribution to global sea-level rise, an increase of +0.6 m compared to the Innuitian portion of ICE-5G. SJD15 is qualitatively more similar to the recent ICE-6G ice sheet reconstruction, which appears to also include more spatially extensive ice cover in the Innuitian region than ICE-5G.
Cummings, E. Mark; Merrilees, Christine E.; Schermerhorn, Alice C.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed
2013-01-01
Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family and child psychological processes in child adjustment, supporting study of inter-relations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods, based on objective records (i.e., politically motivated deaths), was related to family members' reports of current sectarian and non-sectarian antisocial behavior. Interparental conflict, parental monitoring, and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models of relations between political violence and child adjustment and its implications for understanding such relations in other parts of the world. PMID:20423550
Global model including multistep ionizations in helium plasmas
NASA Astrophysics Data System (ADS)
Oh, Seungju; Lee, Hyo-Chang; Chung, Chin-Wook
2015-09-01
Particle and power balance equations including stepwise ionization are derived and solved for a helium plasma. Two metastable states (the 2^3S_1 triplet and the 2^1S_0 singlet) are considered in the balance equations, and the following results are obtained. The plasma density increases linearly with absorbed power while the electron temperature remains nearly constant. It is also found that the contribution of multistep ionization relative to single-step ionization is in the range of 8%-23% as the gas pressure increases from 10 mTorr to 100 mTorr. There is little variation in the collisional energy loss per electron-ion pair created (Ec). These results indicate that stepwise ionization is a minor effect in helium plasma compared to argon plasma. This is because helium has very small collisional cross sections and a higher inelastic collision threshold energy, resulting in little variation in the collisional energy loss per electron-ion pair created.
ERIC Educational Resources Information Center
Bernard, Lori L.; Guarnaccia, Charles A.
2003-01-01
Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…
Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L
2016-07-01
The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60% female; 74% US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. The two experiences were not associated with each other, suggesting that they represent independent forms of social interaction. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed. PMID:26251100
ERIC Educational Resources Information Center
Ray, Corey E.; Elliott, Stephen N.
2006-01-01
This study examined the hypothesized relationship between social adjustment, as measured by perceived social support, self-concept, and social skills, and performance on academic achievement tests. Participants included 27 teachers and 77 fourth- and eighth-grade students with diverse academic and behavior competencies. Teachers were asked to…
A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment
ERIC Educational Resources Information Center
Lamborn, Susie D.; Groh, Kelly
2009-01-01
We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…
A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment
ERIC Educational Resources Information Center
Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul
2012-01-01
This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…
A Class of Elementary Particle Models Without Any Adjustable Real Parameters
NASA Astrophysics Data System (ADS)
't Hooft, Gerard
2011-12-01
Conventional particle theories such as the Standard Model have a number of freely adjustable coupling constants and mass parameters, depending on the symmetry algebra of the local gauge group and the representations chosen for the spinor and scalar fields. There seems to be no physical principle to determine these parameters as long as they stay within certain domains dictated by the renormalization group. Here however, reasons are given to demand that, when gravity is coupled to the system, local conformal invariance should be a spontaneously broken exact symmetry. The argument has to do with the requirement that black holes obey a complementarity principle relating ingoing observers to outside observers, or equivalently, initial states to final states. This condition fixes all parameters, including masses and the cosmological constant. We suspect that only examples can be found where these are all of order one in Planck units, but the values depend on the algebra chosen. This paper combines findings reported in two previous preprints (G. 't Hooft in arXiv:1009.0669 [gr-qc], 2010; arXiv:1011.0061 [gr-qc], 2010) and puts these in a clearer perspective by shifting the emphasis towards the implications for particle models.
Modeling Fluvial Incision and Transient Landscape Evolution: Influence of Dynamic Channel Adjustment
NASA Astrophysics Data System (ADS)
Attal, M.; Tucker, G. E.; Cowie, P. A.; Whittaker, A. C.; Roberts, G. P.
2007-12-01
Channel geometry exerts a fundamental control on fluvial processes. Recent work has shown that bedrock channel width (W) depends on a number of parameters, including channel slope, and is not only a function of drainage area (A) as is commonly assumed. The present work represents the first attempt to investigate the consequences, for landscape evolution, of using a static expression of channel width (W ~ A^0.5) versus a relationship that allows channels to dynamically adjust to changes in slope. We consider different models for the evolution of the channel geometry, including constant width-to-depth ratio (after Finnegan et al., Geology, v. 33, no. 3, 2005), and width-to-depth ratio varying as a function of slope (after Whittaker et al., Geology, v. 35, no. 2, 2007). We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to analyze the response of a catchment to a given tectonic disturbance. The topography of a catchment in the footwall of an active normal fault in the Apennines (Italy) is used as a template for the study. We show that, for this catchment, the transient response can be fairly well reproduced using a simple detachment-limited fluvial incision law. We also show that, depending on the relationship used to express channel width, initial steady-state topographies differ, as do transient channel width, slope, and the response time of the fluvial system. These differences lead to contrasting landscape morphologies when integrated at the scale of a whole catchment. Our results emphasize the importance of channel width in controlling fluvial processes and landscape evolution. They stress the need for using a dynamic hydraulic scaling law when modeling landscape evolution, particularly when the uplift field is non-uniform.
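The static-versus-dynamic width contrast can be sketched with a unit-stream-power incision law E ∝ A·S/W: under static hydraulic geometry W scales only with area, while a dynamic channel also narrows as it steepens, amplifying incision on steep transient reaches. The coefficients and the slope exponent b below are illustrative assumptions, not values from the paper.

```python
def incision_static(area, slope, k_e=1e-4, k_w=0.005):
    """Detachment-limited incision E = k_e * A * S / W with the static
    hydraulic-geometry closure W = k_w * A^0.5 (discharge proxied by area)."""
    return k_e * area * slope / (k_w * area ** 0.5)

def incision_dynamic(area, slope, k_e=1e-4, k_w=0.005, b=0.2):
    """Same incision law, but the channel narrows as it steepens:
    W = k_w * A^0.5 * S^-b (b > 0 is an assumed sensitivity)."""
    return k_e * area * slope / (k_w * area ** 0.5 * slope ** (-b))

# response to a ten-fold steepening of the same reach (A = 1 km^2 in m^2)
r_static = incision_static(1e6, 0.1) / incision_static(1e6, 0.01)
r_dynamic = incision_dynamic(1e6, 0.1) / incision_dynamic(1e6, 0.01)
```

Because incision scales as S^(1+b) rather than S under the dynamic closure, the same base-level fall is consumed faster, shortening the transient response time of the fluvial system.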
NASA Astrophysics Data System (ADS)
Naipal, V.; Reick, C.; Pongratz, J.; Van Oost, K.
2015-03-01
Large uncertainties exist in estimated rates and the extent of soil erosion by surface runoff on a global scale, and this limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool in estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale at which the model is parameterized. This study aimed at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting soil erosion rates to extensive empirical databases on soil erosion from the USA and Europe. Adjusting the topographical factor required scaling of slope according to the fractal method, which resulted in improved topographical detail in a coarse resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compare well with high resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale, we find that the adjusted RUSLE model not only shows a higher global mean soil erosion rate but also more variability in the soil erosion rates. Comparison to empirical datasets of the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still some regional
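The RUSLE structure the study adjusts is a simple product of factors, A = R · K · LS · C · P, with the topographic LS term computed from slope length and steepness. The sketch below uses the classic Wischmeier-Smith form of the LS factor to show why coarsening a DEM (which flattens slopes) depresses predicted erosion; factor values are illustrative, not the study's calibrated inputs.

```python
import math

def ls_factor(slope_len_m, slope_deg, m=0.5):
    """Topographic LS factor (Wischmeier & Smith form):
    LS = (L / 22.13)^m * (65.41 sin^2 b + 4.56 sin b + 0.065),
    with L the slope length in metres and b the slope angle."""
    s = math.sin(math.radians(slope_deg))
    return (slope_len_m / 22.13) ** m * (65.41 * s * s + 4.56 * s + 0.065)

def rusle(R, K, LS, C, P):
    """Average annual soil loss A = R * K * LS * C * P
    (erosivity, erodibility, topography, cover, support practice)."""
    return R * K * LS * C * P

ls_steep = ls_factor(50.0, 15.0)  # hillslope resolved at fine resolution
ls_flat = ls_factor(50.0, 5.0)    # same hillslope smoothed by a coarse DEM
```

Because LS grows rapidly with slope, restoring sub-grid steepness (e.g. via the fractal slope scaling used in the study) raises erosion estimates in hilly terrain far more than in plains.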
An improved bundle adjustment model and algorithm with novel block matrix partition method
NASA Astrophysics Data System (ADS)
Xia, Zemin; Li, Zhongwei; Zhong, Kai
2014-11-01
Sparse bundle adjustment is widely applied in computer vision and photogrammetry. However, existing implementations are based on a model of n 3D points projecting onto m different camera imaging planes at m positions, which does not apply to common monocular, binocular, or trinocular imaging systems. A novel design and implementation of a bundle adjustment algorithm is proposed in this paper, based on n 3D points projecting onto the same camera imaging plane at m positions. To improve the performance of the algorithm, a novel sparse block matrix partition method is proposed. Experiments show that the improved bundle adjustment is effective and robust, and has better tolerance to pixel coordinate errors.
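The sparsity that block-partition schemes exploit comes from the bundle-adjustment normal equations [[U, W], [W^T, V]], where the point block V is block-diagonal (one small block per 3D point) and can be inverted block by block before solving the reduced camera system S = U - W V^-1 W^T (the Schur complement). This is a generic sketch of that standard reduction, not the paper's specific partition method; the tiny synthetic system is an assumption for illustration.

```python
import numpy as np

def schur_reduce(U, W, V_blocks):
    """Reduced camera system S = U - W V^-1 W^T, inverting the block-diagonal
    point block V one small block at a time instead of as a dense matrix."""
    k = V_blocks[0].shape[0]
    V_inv = np.zeros((k * len(V_blocks), k * len(V_blocks)))
    for i, B in enumerate(V_blocks):
        V_inv[k * i:k * (i + 1), k * i:k * (i + 1)] = np.linalg.inv(B)
    return U - W @ V_inv @ W.T

# tiny synthetic system: one camera (6 params), two points (3 params each)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
U = A @ A.T + 6.0 * np.eye(6)                    # SPD camera block
W = rng.standard_normal((6, 6))                  # camera-point coupling
V_blocks = []
for _ in range(2):
    M = rng.standard_normal((3, 3))
    V_blocks.append(M @ M.T + 3.0 * np.eye(3))   # SPD 3x3 point blocks
S = schur_reduce(U, W, V_blocks)

# dense reference: the same Schur complement without the block shortcut
V = np.block([[V_blocks[0], np.zeros((3, 3))],
              [np.zeros((3, 3)), V_blocks[1]]])
S_dense = U - W @ np.linalg.inv(V) @ W.T
```

Inverting V per-block costs O(n) small inversions rather than one O(n^3) dense one, which is what makes bundle adjustment tractable for large point counts.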
NASA Astrophysics Data System (ADS)
Naipal, V.; Reick, C.; Pongratz, J.; Van Oost, K.
2015-09-01
Large uncertainties exist in estimated rates and the extent of soil erosion by surface runoff on a global scale. This limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool in estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale on which the model is parameterized. Our study aims at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting erosion rates to extensive empirical databases from the USA and Europe. By scaling the slope according to the fractal method to adjust the topographical factor, we managed to improve the topographical detail in a coarse resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compared well to high resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale we find that the adjusted version shows a global higher mean erosion rate and more variability in the erosion rates. Comparison to empirical data sets of the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still some regional differences with the empirical databases, the results indicate that the
Including operational data in QMRA model: development and impact of model inputs.
Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle
2009-03-01
A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk). PMID:18957777
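The QMRA chain described above (mixed source-water distribution, treatment log removal, dose-response, annualized risk) can be sketched in a few lines of Monte Carlo. Every parameter value below (detection limit, lognormal parameters, log removal, dose-response coefficient r) is an illustrative placeholder, not a value from the study.

```python
import math
import random

def annual_risk(n=20000, seed=1, dl=0.1, p_detect=0.6,
                mu=-1.0, sigma=1.0, log_removal=3.0,
                r=0.018, volume_l=1.0, days=365):
    """Toy QMRA sketch: raw-water (oo)cyst concentration follows a mixed
    distribution (lognormal above the detection limit, uniform below it),
    treatment applies a fixed log removal, daily infection probability uses
    an exponential dose-response P = 1 - exp(-r * dose), and daily risks are
    compounded to an annual risk. Returns the mean annual risk over n draws."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        if rng.random() < p_detect:
            c_raw = math.exp(rng.gauss(mu, sigma))  # lognormal branch (> DL)
        else:
            c_raw = rng.uniform(0.0, dl)            # below detection limit
        dose = c_raw * 10.0 ** (-log_removal) * volume_l
        p_day = 1.0 - math.exp(-r * dose)
        risks.append(1.0 - (1.0 - p_day) ** days)
    return sum(risks) / n

risk_baseline = annual_risk()
```

Because the output risk is roughly linear in the treatment credit 10^-log_removal, the choice of performance distribution for filtration/ozonation dominates the estimate, which is the sensitivity the study highlights.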
Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James
2015-03-25
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
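The class-based quantile adjustment can be sketched as follows. The class labels and error values are illustrative; in the paper the offsets come from lidar-minus-survey errors within each biomass-density class.

```python
def quantile(xs, q):
    """Linear-interpolation quantile of a list (0 <= q <= 1)."""
    xs = sorted(xs)
    i = q * (len(xs) - 1)
    lo = int(i)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (i - lo) * (xs[hi] - xs[lo])

def class_offsets(errors_by_class, q=0.25):
    """Per-class vertical correction: a quantile of the lidar elevation
    error within each biomass-density class (q is an assumed choice)."""
    return {cls: quantile(errs, q) for cls, errs in errors_by_class.items()}

def adjust_dem(elevations_m, classes, offsets):
    """Lower each lidar elevation by its class offset, toward bare earth."""
    return [z - offsets[c] for z, c in zip(elevations_m, classes)]
```

High-biomass cells, where the laser is stopped higher in the canopy, receive a larger downward correction than low-biomass cells.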
Assessment of an adjustment factor to model radar range dependent error
NASA Astrophysics Data System (ADS)
Sebastianelli, S.; Russo, F.; Napolitano, F.; Baldini, L.
2012-09-01
Quantitative radar precipitation estimates are affected by errors from many causes, such as radar miscalibration, range degradation, attenuation, ground clutter, variability of the Z-R relation, variability of drop size distribution, vertical air motion, anomalous propagation and beam blocking. Range degradation (including beam broadening and sampling of precipitation at an increasing altitude) and signal attenuation determine a range-dependent behavior of the error. The aim of this work is to model the range-dependent error through an adjustment factor derived from the trend of the G/R ratio against range, where G and R are the corresponding rain gauge and radar rainfall amounts computed at each rain gauge location. Since range degradation and signal attenuation effects are negligible close to the radar, results show that within 40 km of the radar the overall range error is independent of the distance from Polar 55C and no range correction is needed. Nevertheless, up to this distance the G/R ratio can show a concave trend with range, which is due to interception of the melting layer by the radar beam during stratiform events.
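A minimal sketch of such an adjustment factor: fit ln(G/R) linearly against range for gauges beyond 40 km and apply no correction closer in, matching the abstract's finding. The log-linear form of the range trend is an assumption for illustration.

```python
import math

def gr_adjustment_factor(ranges_km, gauge_mm, radar_mm, r0_km=40.0):
    """Fit ln(G/R) against range for gauges beyond r0_km (hand-rolled
    least squares) and return an adjustment-factor function. Inside
    r0_km the factor is 1 (no correction)."""
    pts = [(r, math.log(g / rr))
           for r, g, rr in zip(ranges_km, gauge_mm, radar_mm)
           if r > r0_km and g > 0.0 and rr > 0.0]
    n = len(pts)
    sx = sum(r for r, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(r * r for r, _ in pts)
    sxy = sum(r * y for r, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    def factor(range_km):
        if range_km <= r0_km:
            return 1.0
        return math.exp(intercept + slope * range_km)
    return factor
```

Multiplying the radar rainfall field by `factor(r)` at each pixel's range then compensates the distance-dependent underestimation.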
An appraisal-based coping model of attachment and adjustment to arthritis.
Sirois, Fuschia M; Gick, Mary L
2016-05-01
Guided by pain-related attachment models and coping theory, we used structural equation modeling to test an appraisal-based coping model of how insecure attachment was linked to arthritis adjustment in a sample of 365 people with arthritis. The structural equation modeling analyses revealed indirect and direct associations of anxious and avoidant attachment with greater appraisals of disease-related threat, less perceived social support to deal with this threat, and less coping efficacy. There was evidence of reappraisal processes for avoidant but not anxious attachment. Findings highlight the importance of considering attachment style when assessing how people cope with the daily challenges of arthritis. PMID:24984717
An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model
NASA Astrophysics Data System (ADS)
Purcell, A.; Tregoning, P.; Dehecq, A.
2016-05-01
The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Furthermore, the published spherical harmonic coefficients—which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA)—contain excessive power for degree ≥90, do not agree with physical expectations, and do not accurately represent the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.
Cassidy, Adam R
2016-01-01
The objective of this study was to establish latent executive function (EF) and psychosocial adjustment factor structure, to examine associations between EF and psychosocial adjustment, and to explore potential developmental differences in EF-psychosocial adjustment associations in healthy children and adolescents. Using data from the multisite National Institutes of Health (NIH) magnetic resonance imaging (MRI) Study of Normal Brain Development, the current investigation examined latent associations between theoretically and empirically derived EF factors and emotional and behavioral adjustment measures in a large, nationally representative sample of children and adolescents (7-18 years old; N = 352). Confirmatory factor analysis (CFA) was the primary method of data analysis. CFA results revealed that, in the whole sample, the proposed five-factor model (Working Memory, Shifting, Verbal Fluency, Externalizing, and Internalizing) provided a close fit to the data, χ²(66) = 114.48, p < .001; RMSEA = .046; NNFI = .973; CFI = .980. Significant negative associations were demonstrated between Externalizing and both the Working Memory and Verbal Fluency (p < .01) factors. A series of increasingly restrictive tests led to the rejection of the hypothesis of invariance, thereby precluding formal statistical examination of age-related differences in latent EF-psychosocial adjustment associations. Findings indicate that childhood EF skills are best conceptualized as a constellation of interconnected yet distinguishable cognitive self-regulatory skills. Individual differences in certain domains of EF track meaningfully and in expected directions with emotional and behavioral adjustment indices. Externalizing behaviors, in particular, are associated with latent Working Memory and Verbal Fluency factors. PMID:25569593
Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment
NASA Astrophysics Data System (ADS)
Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.
2016-08-01
This paper is a contribution to lithium-ion batteries modelling taking into account aging effects. It first analyses the impact of aging on electrode stoichiometry and then on lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, but also the whole cell equilibrium potential can be modelled by a polynomial that requires only one adjustment parameter during aging. An adjustment algorithm, based on the idea that for two fixed OCVs, the state of charge between these two equilibrium states is unique for a given aging level, is then proposed. Its efficiency is evaluated on a battery pack constituted of four cells.
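A toy sketch of the single-parameter aging adjustment: model OCV as a polynomial in a shifted state of charge, then recover the shift from a small number of OCV measurements. The polynomial coefficients, shift range, and brute-force search below are illustrative stand-ins for the paper's model and adjustment algorithm.

```python
def ocv(soc, coeffs, shift=0.0):
    """Cell OCV as a polynomial in an aging-shifted state of charge.
    `shift` is the single adjustment parameter updated as the cell ages
    (a stand-in for the electrode stoichiometry drift in the paper)."""
    z = soc + shift
    return sum(c * z ** i for i, c in enumerate(coeffs))

def fit_shift(soc_points, ocv_measured, coeffs, lo=-0.1, hi=0.1, steps=2000):
    """Grid-search the shift that best reproduces measured OCV values at
    known states of charge (brute force replaces the paper's algorithm)."""
    best_s, best_err = lo, float("inf")
    for k in range(steps + 1):
        s = lo + (hi - lo) * k / steps
        err = sum((ocv(z, coeffs, s) - v) ** 2
                  for z, v in zip(soc_points, ocv_measured))
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```

Once the shift is re-estimated, the same polynomial tracks the aged cell's OCV curve without refitting all coefficients.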
NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia
NASA Astrophysics Data System (ADS)
Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas
2016-04-01
Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.
Modeling and controller design of a wind energy conversion system including a matrix converter
NASA Astrophysics Data System (ADS)
Barakati, S. Masoud
In this thesis, a grid-connected wind-energy converter system including a matrix converter is proposed. The matrix converter, as a power electronic converter, is used to interface the induction generator with the grid and to control the wind turbine shaft speed. At a given wind velocity, the mechanical power available from a wind turbine is a function of its shaft speed. Through the matrix converter, the terminal voltage and frequency of the induction generator are controlled, based on a constant V/f strategy, to adjust the turbine shaft speed and, accordingly, to control the active power injected into the grid to track maximum power at all wind velocities. The power factor at the interface with the grid is also controlled by the matrix converter, either to ensure purely active power injection into the grid for optimal utilization of the installed wind turbine capacity or to assist in regulating voltage at the point of connection. Furthermore, the reactive power requirements of the induction generator are satisfied by the matrix converter to avoid the use of self-excitation capacitors. The thesis addresses two dynamic models: a comprehensive dynamic model of the matrix converter and an overall dynamic model of the proposed wind turbine system. The developed matrix converter dynamic model is valid for both steady-state and transient analyses and includes all required functions, i.e., control of the output voltage, output frequency, and input displacement power factor. The model is in the qdo reference frame for the matrix converter input and output voltage and current fundamental components. The validity of this model is confirmed by comparing the results obtained from the developed model with those from a simplified fundamental-frequency equivalent-circuit-based model. In developing the overall dynamic model of the proposed wind turbine system, individual models of the mechanical aerodynamic conversion, drive train, matrix converter, and squirrel-cage induction generator are developed
ERIC Educational Resources Information Center
Siman-Tov, Ayelet; Kaniel, Shlomo
2011-01-01
The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. 176 parents of children aged between 6 to 16 diagnosed with PDD answered several questionnaires…
An assessment of the ICE6G_C (VM5A) glacial isostatic adjustment model
NASA Astrophysics Data System (ADS)
Purcell, Anthony; Tregoning, Paul; Dehecq, Amaury
2016-04-01
The recent release of the next-generation global ice history model, ICE6G_C(VM5a) [Peltier et al., 2015, Argus et al. 2014] is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology and, of course, geodynamics (Earth rheology studies). In this presentation I will assess some aspects of the ICE6G_C(VM5a) model and the accompanying published data sets. I will demonstrate that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Further, the published spherical harmonic coefficients - which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA) - will be shown to contain excessive power for degree ≥ 90, to be physically implausible and to not represent accurately the ICE6G_C(VM5a) model. The excessive power in the high degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. [2011] is applied but, when correct Stokes coefficients are used, the empirical relationship will be shown to produce excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. [2011]. Finally, a global radial velocity field for the present-day GIA signal, and corresponding Stokes coefficients, will be presented for the ICE6G_C ice model history using the VM5a rheology model. These results have been obtained using the ANU group's CALSEA software package and can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals without any of the shortcomings of the previously published data sets. We denote the new data sets ICE6G_ANU.
NASA Astrophysics Data System (ADS)
Müller, Marc F.; Thompson, Sally E.
2013-10-01
Estimating precipitation over large spatial areas remains a challenging problem for hydrologists. Sparse ground-based gauge networks do not provide a robust basis for interpolation, and the reliability of remote sensing products, although improving, is still imperfect. Current techniques to estimate precipitation rely on combining these different kinds of measurements to correct the bias in the satellite observations. We propose a novel procedure that, unlike existing techniques, (i) allows correction of the possibly confounding effects of different sources of error in satellite estimates, (ii) explicitly accounts for the spatial heterogeneity of the biases, and (iii) allows the use of non-overlapping historical observations. The proposed method spatially aggregates and interpolates gauge data at the satellite grid resolution by focusing on parameters that describe the frequency and intensity of the rainfall observed at the gauges. The resulting gridded parameters can then be used to adjust the probability density function of satellite rainfall observations at each grid cell, accounting for spatial heterogeneity. Unlike alternative methods, we explicitly adjust biases in rainfall frequency in addition to intensity. Adjusted rainfall distributions can then readily be applied as input to stochastic rainfall generators or frequency-domain hydrological models. Finally, we also provide a procedure for using them to correct remotely sensed rainfall time series. We apply the method to adjust the distributions of daily rainfall observed by the TRMM satellite in Nepal, which exemplifies the challenges associated with a sparse gauge network and large biases due to complex topography. In a cross-validation analysis on daily rainfall from TRMM 3B42 v6, we find that using a small subset of the available gauges, the proposed method outperforms local rainfall estimations using the complete network of available gauges to directly interpolate local rainfall or correct TRMM by adjusting
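The frequency-then-intensity correction can be sketched in its simplest form: re-threshold the satellite series so its wet-day frequency matches the gauge-derived value, then rescale wet-day amounts to match the gauge-derived mean intensity. A real implementation would match full per-cell distributions (e.g. fitted gamma parameters) rather than just the mean; the inputs here are assumed to come from the gridded gauge interpolation described in the text.

```python
def adjust_satellite_rain(sat_mm, p_wet_gauge, mean_wet_gauge_mm):
    """Two-step bias adjustment of a daily satellite rainfall series:
    (1) frequency: keep only the n_wet largest satellite values as wet days;
    (2) intensity: rescale wet-day amounts to the gauge mean intensity."""
    n = len(sat_mm)
    n_wet = round(p_wet_gauge * n)
    if n_wet == 0:
        return [0.0] * n
    thresh = sorted(sat_mm, reverse=True)[n_wet - 1]
    wet = [x for x in sat_mm if x >= thresh]
    scale = mean_wet_gauge_mm * len(wet) / sum(wet)
    return [x * scale if x >= thresh else 0.0 for x in sat_mm]
```

Adjusting frequency before intensity matters when the satellite drizzles too often: scaling alone would preserve the spurious wet days.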
Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.
Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu
2015-11-01
Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational
A finite element model updating technique for adjustment of parameters near boundaries
NASA Astrophysics Data System (ADS)
Gwinn, Allen Fort, Jr.
Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged
Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.
Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao
2016-05-18
To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal model (IPFC) was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model can effectively adjust the magnitude of the interfacial free energy without affecting the equilibrium phase diagram or the interfacial energy anisotropy. The IPFC model overcomes the limitation that the interfacial free energy of the original PFC model is much lower than theoretical results. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the viewpoint of simulation, we carried out an in situ observation of the process of cluster fluctuation and obtained snapshots quite similar to those of colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time, and the nucleation rate remains constant after a period of relaxation, consistent with experimental observations. The linear relation between the logarithmic nucleation rate and the reciprocal driving force also conforms to steady-state nucleation theory. PMID:27117814
Adjusting for Network Size and Composition Effects in Exponential-Family Random Graph Models.
Krivitsky, Pavel N; Handcock, Mark S; Morris, Martina
2011-07-01
Exponential-family random graph models (ERGMs) provide a principled way to model and simulate features common in human social networks, such as propensities for homophily and friend-of-a-friend triad closure. We show that, without adjustment, ERGMs preserve density as network size increases. Density invariance is often not appropriate for social networks. We suggest a simple modification based on an offset which instead preserves the mean degree and accommodates changes in network composition asymptotically. We demonstrate that this approach allows ERGMs to be applied to the important situation of egocentrically sampled data. We analyze data from the National Health and Social Life Survey (NHSLS). PMID:21691424
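The offset idea is easiest to see in the degenerate edges-only (Bernoulli) ERGM, where the tie probability is logistic in the edge coefficient. The sketch below, with illustrative numbers, shows that subtracting log(n) from the edge coefficient keeps the expected mean degree roughly constant as the network grows, whereas the unadjusted model keeps density constant and lets mean degree grow with n.

```python
import math

def tie_probability(theta):
    """Edge probability in an edges-only ERGM: logistic in theta."""
    return 1.0 / (1.0 + math.exp(-theta))

def mean_degree(theta_edge, n, offset=True):
    """Expected degree under the edges-only model. With the -log(n)
    offset, mean degree (not density) is approximately preserved as
    network size n increases."""
    theta = theta_edge - math.log(n) if offset else theta_edge
    return (n - 1) * tie_probability(theta)
```

For large n the offset model's tie probability behaves like exp(theta_edge)/n, so the expected degree tends to a constant, mirroring how real social networks grow.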
Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction
NASA Astrophysics Data System (ADS)
Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.
2015-12-01
Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.
A model of mother-child adjustment in Arab Muslim immigrants to the US
Hough, Edythe s; Templin, Thomas N; Kulwicki, Anahid; Ramaswamy, Vidya; Katz, Anne
2009-01-01
We examined mother-child adjustment and child behavior problems in Arab Muslim immigrant families residing in the USA. The sample of 635 mother-child dyads comprised mothers who had emigrated in 1989 or later and had at least one early adolescent child between 11 and 15 years old who was also willing to participate. Arabic-speaking research assistants collected the data from the mothers and children using established measures of maternal and child stressors, coping, and social support; maternal distress; parent-child relationship; and child behavior problems. A structural equation model (SEM) was specified a priori with 17 predicted pathways. With a few exceptions, the final SEM model was highly consistent with the proposed model and had a good fit to the data. The model accounted for 67% of the variance in child behavior problems. Child stressors, the mother-child relationship, and maternal stressors were the causal variables that contributed most to child behavior problems. The model also accounted for 27% of the variance in the mother-child relationship. Child active coping, child gender, mother's education, and maternal distress were all predictive of the mother-child relationship. The mother-child relationship also mediated the effects of maternal distress and child active coping on child behavior problems. These findings indicate that immigrant mothers contribute greatly to adolescent adjustment, both as a source of risk and of protection. They also suggest that intervening with immigrant mothers to reduce their stress and strengthening the parent-child relationship are two important avenues for promoting adolescent adjustment. PMID:19758737
A spatial model of bird abundance as adjusted for detection probability
Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.
2009-01-01
Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
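The second step of the approach, correcting predicted counts for imperfect detection and effective sampled area, reduces in its simplest form to the sketch below. The detection probability and area values are illustrative, not the paper's species-specific estimates.

```python
def adjusted_density(raw_count, p_detect, area_sampled_ha):
    """Density corrected for imperfect detection and effective sampled
    area: birds per hectare = count / (detection probability * area)."""
    if not (0.0 < p_detect <= 1.0):
        raise ValueError("detection probability must be in (0, 1]")
    return raw_count / (p_detect * area_sampled_ha)

def adjust_grid(counts, p_detect, area_ha):
    """Apply the correction to the predicted count in each grid cell."""
    return [adjusted_density(c, p_detect, area_ha) for c in counts]
```

For example, 12 birds counted with a 60% detection probability over 4 ha effectively sampled corresponds to a density of 5 birds per hectare.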
Observational Constraint of Aerosol Effects on the CMIP5 Inter-model Spread of Adjusted Forcings
NASA Astrophysics Data System (ADS)
Chen, J.; Wennberg, P. O.; Jiang, J. H.; Su, H.; Bordoni, S.
2013-12-01
The simulated global-mean temperature (GMT) change over the past 150 years is quite consistent across CMIP5 climate models and also consistent with observations. However, the predicted future GMT under identical CO2 forcing is divergent. This paradox arises partly because errors in the GMT produced by historical greenhouse gas (GHG) forcing are compensated by the parameterization of aerosol cloud radiative forcing. Historical increases in anthropogenic aerosols exert an overall (but highly uncertain) cooling effect on the climate system, which partially offsets the warming due to well-mixed greenhouse gases (WMGHGs). Because aerosol concentrations are predicted to eventually decrease in future scenarios, climate change becomes dominated by warming due to WMGHGs. This change in the relative importance of aerosol versus WMGHG forcing makes apparent the substantial inter-model differences in the climate predicted under WMGHG forcing. Here we investigate the role of aerosols in the context of adjusted forcing changes in the historical runs and the effect of aerosols on the cloud feedback. Our preliminary results suggest that models that are more sensitive to increases in CO2 concentration have a larger aerosol radiative cooling effect. By comparing the historicalMisc and historicalGHG runs, we find that aerosols exert a potential impact on the cloud adjusted forcings, especially shortwave cloud adjusted forcings. We use CALIPSO, MISR and CERES data as benchmarks to evaluate the present aerosol simulations. Using satellite observations to assess the relative reliability of the different model responses and to constrain the simulated aerosol radiative forcing will contribute significantly to reducing the across-model spread in future climate simulations and to identifying missing physical processes.
Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools
NASA Astrophysics Data System (ADS)
Kollo, Karin; Spada, Giorgio; Vermeer, Martin
2013-04-01
Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: the BIFROST dataset (Lidberg, 2010) for Fennoscandia and the dataset of Sella (2007) for North America. We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007), varying ice-model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time-series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and ANU05 ((Fleming and Lambeck, 2004) and references therein) were used. The velocity field from GNSS permanent-station time series served as the reference for both target areas. First, the sensitivity to the maximum harmonic degree was tested in order to reduce computation time, using nominal viscosity values and pre-defined lithosphere-thickness models while varying the maximum harmonic degree. The criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by less than 10%, the lower harmonic degree may be used instead. From this test a maximum harmonic degree of 72 was chosen, since larger values did not significantly modify the results and the computation time remained reasonable. Second, the GIA computations were performed to find the model most likely to fit the GNSS-based velocity field in the target areas. To find the best-fitting Earth viscosity parameters, different viscosity profiles were tested and their impact on the horizontal and vertical velocity rates from the GIA modelling was studied. For every
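The 10% chi-square criterion for choosing the maximum harmonic degree can be sketched as follows. This is a schematic illustration in our own notation, not SELEN's code; the function names and synthetic misfit values are assumptions.

```python
import numpy as np

def chi_square(obs, model, sigma):
    """Chi-square misfit between observed and modelled uplift rates."""
    return float(np.sum(((obs - model) / sigma) ** 2))

def select_degree(obs, sigma, models_by_degree, tol=0.10):
    """Pick the lowest maximum harmonic degree whose misfit lies within
    `tol` (10% by default) of the best misfit among the candidates."""
    chi2 = {d: chi_square(obs, m, sigma) for d, m in models_by_degree.items()}
    best = min(chi2.values())
    for d in sorted(chi2):          # try the lowest degree first
        if chi2[d] <= best * (1.0 + tol):
            return d, chi2[d]
```

With this rule, a lower degree is accepted whenever refining the spherical-harmonic truncation buys less than a 10% improvement in fit, which is the trade-off the abstract describes.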
Elizur, Y; Ziv, M
2001-01-01
While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men. PMID:11444052
Principal Component Analysis of breast DCE-MRI Adjusted with a Model Based Method
Eyal, Erez; Badikhi, Daria; Furman-Haran, Edna; Kelcz, Fredrick; Kirshenbaum, Kevin J.; Degani, Hadassa
2010-01-01
Purpose To investigate a fast, objective, and standardized method for analyzing breast DCE-MRI by applying principal component analysis (PCA) adjusted with a model-based method. Materials and Methods 3D gradient-echo dynamic contrast-enhanced breast images of 31 malignant and 38 benign lesions, recorded on a 1.5 Tesla scanner, were retrospectively analyzed by PCA and by the model-based three-time-point (3TP) method. Results Intensity-scaled (IS) and enhancement-scaled (ES) datasets were reduced by PCA, yielding a 1st IS-eigenvector that captured the signal variation between fat and fibroglandular tissue; two further IS-eigenvectors and the first two ES-eigenvectors captured contrast-enhanced changes, whereas the remaining eigenvectors captured predominantly noise. Rotation of the two contrast-related eigenvectors led to high congruence between the projection coefficients and the 3TP parameters. The ES-eigenvectors and the rotation angle were highly reproducible across malignant lesions, enabling calculation of a general rotated eigenvector base. ROC curve analysis of the projection coefficients of the two eigenvectors indicated high sensitivity of the 1st rotated eigenvector for detecting lesions (AUC>0.97) and of the 2nd rotated eigenvector for differentiating malignant from benign lesions (AUC=0.87). Conclusion PCA adjusted with a model-based method provides a fast and objective computer-aided diagnostic tool for breast DCE-MRI. PMID:19856419
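The two core operations, PCA of the enhancement curves followed by a rotation of the two contrast-related eigenvectors, can be sketched generically as below. This is our own sketch; the paper's scaling conventions and the choice of rotation angle are not reproduced.

```python
import numpy as np

def pca_eigvecs(curves, k=2):
    """PCA of enhancement curves (n_voxels x n_timepoints): return the
    first k right singular vectors (eigenvectors in the time domain)."""
    X = curves - curves.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

def rotate_pair(v1, v2, theta):
    """Rotate two eigenvectors by angle theta (radians) in their plane,
    as done to align projection coefficients with model parameters."""
    c, s = np.cos(theta), np.sin(theta)
    return c * v1 - s * v2, s * v1 + c * v2
```

Projection coefficients for a voxel are then inner products of its curve with the rotated eigenvectors; the rotation preserves orthonormality, so no information is lost relative to the unrotated basis.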
Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani
2016-01-01
We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss-to-follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R² = 0.20). This model, designed specifically for safety-net patients, may prove useful for panel adjustment in other public health settings. PMID:27576054
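A minimal numpy-only sketch of the two-part idea follows: a logistic model for whether a patient visits at all, times a linear model for visit counts among users, with predictions converted to panel weights. This is a simplified stand-in for illustration, not the department's fitted model; the fitting routines and variable names are ours.

```python
import numpy as np

def fit_logistic(X, y, iters=500, lr=0.1):
    """Minimal logistic regression by gradient ascent (part 1: any visit)."""
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def two_part_expected(X, y):
    """Two-part model: P(any visit) * E[visits | visit > 0]."""
    w = fit_logistic(X, (y > 0).astype(float))
    Xb = np.c_[np.ones(len(X)), X]
    p_any = 1.0 / (1.0 + np.exp(-Xb @ w))
    pos = y > 0                                  # part 2: OLS on users only
    beta, *_ = np.linalg.lstsq(Xb[pos], y[pos], rcond=None)
    return p_any * (Xb @ beta)

def panel_weights(expected):
    """Patients expected to visit more count for more of a panel slot."""
    return expected / expected.mean()
```

The weights average to 1, so the adjusted panel size is comparable to the raw count while giving high-utilization patients proportionally more weight.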
Goeritz, Marie L.; Marder, Eve
2014-01-01
We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. PMID:25008414
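The observer idea can be illustrated on a toy one-compartment model: during fitting, a weak feedback term proportional to the voltage mismatch keeps the model aligned with the recording, avoiding the spike-timing mismatch problem. This is a Luenberger-style correction on a deliberately simple passive model; the conductances, gain, and units are illustrative assumptions, not the paper's multicompartmental model.

```python
import numpy as np

def simulate(v0, I, params, v_data=None, gain=0.0, dt=0.1):
    """Integrate dV/dt = (-g_l*(V - E_l) + I)/C by forward Euler.
    When v_data is given, add the observer correction gain*(v_data - V),
    which nudges the model toward the recording during fitting."""
    g_l, E_l, C = params
    v = np.empty(len(I))
    v[0] = v0
    for t in range(1, len(I)):
        dv = (-g_l * (v[t - 1] - E_l) + I[t - 1]) / C
        if v_data is not None:
            dv += gain * (v_data[t - 1] - v[t - 1])
        v[t] = v[t - 1] + dt * dv
    return v
```

In a fitting loop one would minimize the residual of the observer-corrected trace over the model parameters, then validate the fitted parameters with the gain set back to zero.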
Obtaining diverse behaviors in a climate model without the use of flux adjustment
NASA Astrophysics Data System (ADS)
Yamazaki, K.; Rowlands, D. J.; Williamson, D.; Allen, M.
2011-12-01
Efforts have been made in past research to attain a wide range of atmosphere and ocean model behaviors by perturbing the physics of global climate models; however, obtaining a large spread of ocean-model behaviors has so far been unsuccessful. In an ongoing project within RAPID-WATCH, physical parameters of HadCM3 have been perturbed within plausible ranges using a Latin hypercube design to generate a 10,000-member ensemble, run on the distributed computing platform of climateprediction.net. In this work we resample and run a second, 20,000-member ensemble of model variants identified as unlikely to drift significantly away from a realistic initial base state, a key step since we are not using flux adjustment. To this end, the variants are conditioned statistically on the fluxes diagnosed from the first ensemble, so as to sample regions of parameter space predicted to exhibit a low top-of-atmosphere (TOA) flux imbalance. Specifically, we constrain the distribution of outgoing longwave radiation (OLR) and reflected shortwave radiation (RSR) by laying an uncertainty ellipse at the 99% significance level, using the error analysis from Tett et al. (2011), over the standard configuration. In addition, parameters are sampled to generate a wide spread in estimated climate sensitivities, informed by results from a separate coupled atmosphere-thermodynamic-ocean model ensemble. The results show that members of the conditioned ensemble attain OLR and RSR distributions very similar to those predicted, while exhibiting a wide range of behaviors in both the atmosphere and the ocean. The spread of estimated effective climate sensitivity with balanced TOA fluxes shows that the range of sensitivities in the conditioned ensemble is substantially smaller than that obtained with flux adjustment, but still as large as or larger than the range in an ensemble of opportunity. This confirms that flux adjustment
Cummings, E. Mark; Cheung, Rebecca Y. M.; Koss, Kalsea; Davies, Patrick T.
2014-01-01
Despite calls for process-oriented models for child maladjustment due to heightened marital conflict in the context of parental depressive symptoms, few longitudinal tests of the mechanisms underlying these relations have been conducted. Addressing this gap, the present study examined multiple factors longitudinally that link parental depressive symptoms to adolescent adjustment problems, building on a conceptual model informed by emotional security theory (EST). Participants were 320 families (158 boys, 162 girls), including mothers and fathers, who took part when their children were in kindergarten (T1), second (T2), seventh (T3), eighth (T4) and ninth (T5) grades. Parental depressive symptoms (T1) were related to changes in adolescents’ externalizing and internalizing symptoms (T5), as mediated by parents’ negative emotional expressiveness (T2), marital conflict (T3), and emotional insecurity (T4). Evidence was thus advanced for emotional insecurity as an explanatory process in the context of parental depressive symptoms. PMID:24652484
NASA Astrophysics Data System (ADS)
Mayes, M. A.; Wang, G.; Tang, G.; Xu, X.; Jagadamma, S.
2013-12-01
Carbon cycle models are traditionally parameterized with ad hoc soil pools, empirical decay constants, and first-order decomposition as a function of substrate supply. Decomposition of vegetative and faunal inputs, however, involves enzymatically facilitated depolymerization by the microbial community. Traditional soil models are calibrated to match the existing distribution of soil carbon, but they are not parameterized to predict the response of soil carbon to climate change arising from microbial community shifts or physiological changes, i.e., acclimation. As an example, we will show how the temperature sensitivity of carbon use efficiency can influence the decomposition of different substrates and affect the release of CO2 from soil organic matter. Acclimation to warmer conditions could also involve shifts in microbial community composition or function, e.g., a shift in the fungi:bacteria ratio. Experimental data are needed to decide how to parameterize models to accommodate functional or compositional changes. We will explore documented cases of microbial acclimation to warming, discuss methods to include microbial acclimation in carbon cycle models, and explore the need for additional experimental data to validate the next generation of microbially facilitated carbon cycle models.
NASA Astrophysics Data System (ADS)
Nonato, Fábio; Cavalca, Katia L.
2014-12-01
This work presents a methodology for including elastohydrodynamic (EHD) film effects in a lateral vibration model of a deep groove ball bearing, using a novel approximation of the EHD contacts by a set of equivalent nonlinear springs and viscous dampers. The fitting of the equivalent contact model used the results of a transient multi-level finite difference EHD algorithm to adjust the dynamic parameters. Comparison between the approximated model and the finite difference results showed a suitable representation of the stationary and dynamic contact behaviors. The linear damping hypothesis was shown to be a rough representation of the actual hysteretic behavior of the EHD contact; nevertheless, the overall accuracy of the model was not impaired by this approximation. Further on, the equivalent EHD contact model is formulated for both the restoring and the dissipative components of the bearing's lateral dynamics. The derived model was used to investigate the effects of rolling element bearing lubrication on the vibration response of a lumped parameter rotor model. The fluid film stiffening effect, previously observable only by experimentation, could be quantified using the proposed model, as well as the portion of the bearing damping provided by the EHD fluid film. Results from a laboratory rotor-bearing test rig were used to indirectly validate the proposed contact approximation. A finite element model of the rotor accounting for the lubricated bearing formulation adequately portrayed the frequency content of the bearing orbits observed on the test rig.
NASA Astrophysics Data System (ADS)
Wu, Bo; Hu, Han; Guo, Jian
2014-04-01
Lunar topographic information is essential for lunar scientific investigations and exploration missions. Lunar orbiter imagery and laser altimeter data are two major data sources for lunar topographic modeling. Most previous studies have processed the imagery and laser altimeter data separately, and there are usually inconsistencies between the derived lunar topographic models. This paper presents a novel combined block adjustment approach to integrate multiple strips of Chinese Chang'E-2 imagery and NASA's Lunar Orbiter Laser Altimeter (LOLA) data from the Lunar Reconnaissance Orbiter (LRO) for precision lunar topographic modeling. The combined block adjustment incorporates the orientation parameters of the Chang'E-2 images, intra-strip tie points derived from Chang'E-2 stereo images of the same orbit, inter-strip tie points derived from the overlapping areas of neighboring Chang'E-2 image strips, and the LOLA points. Two constraints are incorporated into the adjustment, a local surface constraint and an orbit height constraint, specifically designed to remedy the large inconsistencies between the Chang'E-2 and LOLA data sets. The output of the combined block adjustment is the improved orientation parameters of the Chang'E-2 images and ground coordinates of the LOLA points, from which precision lunar topographic models can be generated. The performance of the developed approach was evaluated using Chang'E-2 imagery and LOLA data in the Sinus Iridum area and the Apollo 15 landing area. The experimental results revealed that the mean absolute image residuals between the Chang'E-2 image strips were drastically reduced from tens of pixels before the adjustment to sub-pixel level after adjustment. Digital elevation models (DEMs) with 20 m resolution were generated from the Chang'E-2 imagery after the combined block adjustment. Comparison of the Chang'E-2 DEM with the LOLA DEM showed a good
Lehmann, E. D.; Deutsch, T.
1992-01-01
Joe Daniels is a 41-year-old, 76 kg, insulin-treated diabetic patient who was diagnosed in 1972, at the age of 22. Joe recently found that he was having hypoglycaemic symptoms. Using self-monitoring blood glucose equipment, glycaemic levels below 3.0 mmol/l were recorded at least once a week, while hyperglycaemic readings (> 16 mmol/l) were observed 2-3 times per week. Joe came into hospital to have his glycaemic control improved, as doctors were concerned about the risk of him suffering a serious hypoglycaemic attack. Using some of the data collected by Joe while in hospital, we demonstrate how a computer model of glucose-insulin interaction in type I diabetes can be used interactively to teach diabetic patients about their diabetes and educate them to adjust their own insulin injections and diet. PMID:1482868
UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA
Davis, S.C.
2000-11-16
The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.
Brown, Grant D; Oleson, Jacob J; Porter, Aaron T
2016-06-01
The various thresholding quantities grouped under the "Basic Reproductive Number" umbrella are often confused, but represent distinct approaches to estimating epidemic spread potential, and address different modeling needs. Here, we contrast several common reproduction measures applied to stochastic compartmental models, and introduce a new quantity dubbed the "empirically adjusted reproductive number" with several advantages. These include: more complete use of the underlying compartmental dynamics than common alternatives, use as a potential diagnostic tool to detect the presence and causes of intensity process underfitting, and the ability to provide timely feedback on disease spread. Conceptual connections between traditional reproduction measures and our approach are explored, and the behavior of our method is examined under simulation. Two illustrative examples are developed: First, the single location applications of our method are established using data from the 1995 Ebola outbreak in the Democratic Republic of the Congo and a traditional stochastic SEIR model. Second, a spatial formulation of this technique is explored in the context of the ongoing Ebola outbreak in West Africa with particular emphasis on potential use in model selection, diagnosis, and the resulting applications to estimation and prediction. Both analyses are placed in the context of a newly developed spatial analogue of the traditional SEIR modeling approach. PMID:26574727
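To make the distinction between reproduction measures concrete, the sketch below tracks a crude time-varying reproduction measure from deterministic SEIR dynamics. It is not the paper's empirically adjusted estimator (which is built on the fitted stochastic compartmental model); it only illustrates how a reproduction quantity can be read off the compartmental dynamics over time.

```python
def seir(beta, sigma, gamma, N=10000, i0=10, days=200):
    """Euler-stepped deterministic SEIR; returns daily new exposures and
    the infectious count at the start of each day."""
    S, E, I = N - i0, 0.0, float(i0)
    new_exp, infectious = [], []
    for _ in range(days):
        ne = beta * S * I / N          # new exposures this day
        new_exp.append(ne)
        infectious.append(I)
        S, E, I = S - ne, E + ne - sigma * E, I + sigma * E - gamma * I
    return new_exp, infectious

def reproduction_trace(new_exp, infectious, gamma):
    """Crude time-varying reproduction measure: new infections generated
    per unit of infectious turnover (equals beta*S/(N*gamma) here)."""
    return [ne / (gamma * i) for ne, i in zip(new_exp, infectious)]
```

In this toy, the trace starts near the basic reproductive number beta/gamma and declines as susceptibles are depleted, which is exactly the kind of timely feedback on disease spread the abstract emphasizes.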
The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment
ERIC Educational Resources Information Center
Borja, Susan E.; Callahan, Jennifer L.
2009-01-01
This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…
NASA Astrophysics Data System (ADS)
Yang, M. F.
In this research we present a stylized model to find the optimal strategy for an integrated vendor-buyer inventory model with fuzzy annual demand and a fuzzy adjustable production rate. The model is based on total-cost optimization under a common-stock strategy. However, the usual supposition of known annual demand and adjustable production rate in most related publications may not be realistic. This paper therefore represents annual demand and the adjustable production rate as triangular fuzzy numbers and employs the signed distance to estimate the common total cost in the fuzzy sense, deriving the corresponding optimal buyer's order quantity and the integer number of lots in which items are delivered from the vendor to the purchaser. A numerical example is provided, and the results of the fuzzy and crisp models are compared.
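The signed-distance defuzzification step can be sketched as follows. The (a + 2b + c)/4 form for a triangular fuzzy number follows the standard Yao-Wu signed distance; the EOQ-style lot size and cost below are a schematic stand-in, not the paper's joint vendor-buyer cost function.

```python
def signed_distance(a, b, c):
    """Signed distance of a triangular fuzzy number (a, b, c) from zero:
    (a + 2b + c) / 4, giving a crisp estimate of the fuzzy quantity."""
    return (a + 2.0 * b + c) / 4.0

def crisp_lot_size(demand_tri, order_cost, hold_cost):
    """Defuzzify a triangular annual demand, then apply an EOQ-style
    lot size and total cost (illustrative, not the joint model)."""
    D = signed_distance(*demand_tri)
    Q = (2.0 * order_cost * D / hold_cost) ** 0.5
    total = order_cost * D / Q + hold_cost * Q / 2.0
    return Q, total
```

Note that for a symmetric triangular number the signed distance reduces to the central value, so the fuzzy model collapses to the crisp one, which is the comparison the numerical example makes.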
ERIC Educational Resources Information Center
Fanti, Kostas A.; Henrich, Christopher C.; Brookmeyer, Kathryn A.; Kuperminc, Gabriel P.
2008-01-01
The present study includes externalizing problems, internalizing problems, mother-adolescent relationship quality, and father-adolescent relationship quality in the same structural equation model and tests the longitudinal reciprocal association among all four variables over a 1-year period. A transactional model in which adolescents'…
NASA Astrophysics Data System (ADS)
Root, Bart; Tarasov, Lev; van der Wal, Wouter
2014-05-01
The global ice budget is still under discussion because the observed 120-130 m of eustatic sea-level equivalent since the Last Glacial Maximum (LGM) cannot be explained by current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea region, which was completely covered with ice during the LGM, as deduced from relative sea-level observations on Svalbard, Novaya Zemlya, and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With the increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain glacial isostatic adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya, and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare them with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regional calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be concentrated more toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.
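The 250 km Gaussian smoothing applied to GRACE-type fields is conventionally implemented as degree-dependent weights on the spherical-harmonic coefficients. A sketch using Jekeli's recursion is given below; the normalisation (w_0 = 1) and recursion follow the standard formulation in the GRACE literature, but treat this as our assumption rather than the authors' exact processing code.

```python
import math

def gauss_weights(radius_km, nmax, R=6371.0):
    """Jekeli-style recursion for Gaussian smoothing weights w_n applied
    to spherical-harmonic coefficients of degree n (normalised w_0 = 1).
    radius_km is the filter half-width (e.g. 250 km in this study)."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / R))
    w = [1.0,
         (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for n in range(1, nmax):
        # three-term recursion: w_{n+1} = -((2n+1)/b) w_n + w_{n-1}
        w.append(-(2.0 * n + 1.0) / b * w[n] + w[n - 1])
    return w[: nmax + 1]
```

Each Stokes coefficient of degree n is multiplied by w_n before synthesising the smoothed gravity field; the weights decay smoothly with degree, damping the short-wavelength GRACE noise.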
The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)
2001-01-01
A new class of turbulence model is described for wall-bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of the Spalart-Allmaras or SST models, do not require specification of wall distance, and have similar or reduced computational effort compared with these models.
Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analys...
Testing the compatibility of constraints for parameters of a geodetic adjustment model
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Neitzel, Frank
2013-06-01
Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint in such a way that if one of them exceeds in magnitude a certain threshold then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526-547, 1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case, that each constraint is to be tested individually. Every test is worked out both for a known as well as for an unknown prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is illustrated by the example of a double levelled line.
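The quantity being tested, the normalized Lagrange multiplier of a constraint, can be computed numerically as sketched below for a generic Gauss-Markov model with a known variance factor. This is our own minimal setup, not the paper's full derivation, and it omits the unknown-variance-factor case.

```python
import numpy as np

def constrained_lsq(A, l, C, c):
    """Least squares min ||l - A x||^2 subject to C x = c, solved via the
    bordered (KKT) normal-equation system; returns x and multipliers k."""
    N = A.T @ A
    m, p = N.shape[0], C.shape[0]
    K = np.block([[N, C.T], [C, np.zeros((p, p))]])
    rhs = np.concatenate([A.T @ l, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:m], sol[m:]

def normalized_multipliers(A, l, C, c, sigma0=1.0):
    """k_i / (sigma0 * sqrt(Q_k[i,i])) with Q_k = (C N^-1 C^T)^-1: large
    magnitudes flag constraints likely incompatible with the data."""
    x, k = constrained_lsq(A, l, C, c)
    Qk = np.linalg.inv(C @ np.linalg.inv(A.T @ A) @ C.T)
    return k / (sigma0 * np.sqrt(np.diag(Qk)))
```

Under the null hypothesis (all constraints compatible) each normalized multiplier is approximately standard normal, which is what connects the heuristic threshold rule to the likelihood-ratio test discussed in the abstract.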
Kendall, W.L.; Hines, J.E.; Nichols, J.D.
2003-01-01
Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
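The direction of the bias can be seen with a deliberately simplified one-parameter correction, a toy rather than the paper's multistate likelihood: if breeders are recognised only when a first-year calf is actually observed, the apparent breeding probability understates the true one.

```python
def adjusted_transition(p_observed, p_detect):
    """Adjust an apparent transition probability for one-sided
    misclassification: true breeders are classified as breeders only
    with probability p_detect (calf seen), and never the reverse."""
    return p_observed / p_detect

# e.g. an apparent breeding probability near 0.3, with only about half
# of true breeders correctly classified, implies a true value near 0.6
```

This matches the qualitative pattern in the abstract, where accounting for misclassification roughly doubled the estimated breeding probability; the real method estimates the classification probability jointly from the repeated within-period sightings.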
Soares, Ana Paula; Guisande, M Adelina; Diniz, António M; Almeida, Leandro S
2006-05-01
This article presents a model of the interaction of personal and contextual variables in predicting the academic performance and psychosocial development of Portuguese college students. The sample consists of 560 first-year students at the University of Minho. The path analysis results suggest that students' initial expectations of involvement in academic life were an effective predictor of their involvement during the first year, and that the social climate of the classroom influenced their involvement, well-being, and satisfaction. However, these relationships were not strong enough to influence the criterion variables in the model (academic performance and psychosocial development). Academic performance was predicted by high school grades and college entrance examination scores, and the level of psychosocial development was determined by the level of development shown at college entry. Though more research is needed, these results point to the importance of students' pre-college characteristics when considering the quality of their college adjustment process. PMID:17296040
Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2016-03-01
In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as their boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.
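One concrete instance of the optimal cutting path idea is a minimum-error seam through the overlap of two patterns, here computed by dynamic programming over top-to-bottom 8-connected moves. This is the graph shortest-path formulation in a restricted form, written in our own notation as a sketch, not the authors' implementation.

```python
import numpy as np

def min_error_cut(overlap_a, overlap_b):
    """Optimal top-to-bottom cutting path through the overlap of two
    patterns: dynamic programming on the squared-difference surface.
    Returns one column index per row; pixels left of the seam come from
    pattern A, pixels right of it from pattern B."""
    e = (np.asarray(overlap_a, float) - np.asarray(overlap_b, float)) ** 2
    h, w = e.shape
    cost = e.copy()
    for i in range(1, h):                       # accumulate path costs
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    path = [int(np.argmin(cost[-1]))]           # backtrack cheapest seam
    for i in range(h - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path.append(lo + int(np.argmin(cost[i, lo:hi])))
    path.reverse()
    return path
```

The seam follows the locus where the two patterns already agree, which is what removes the patchiness and discontinuities at pattern boundaries.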
Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint
Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.
2015-04-06
Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.
Glacial isostatic adjustment in Fennoscandia from GRACE data and comparison with geodynamical models
NASA Astrophysics Data System (ADS)
Steffen, Holger; Denker, Heiner; Müller, Jürgen
2008-10-01
The Earth's gravity field observed by the Gravity Recovery and Climate Experiment (GRACE) satellite mission shows variations due to the integral effect of mass variations in the atmosphere, hydrosphere and geosphere. Several institutions, such as the GeoForschungsZentrum (GFZ) Potsdam, the University of Texas at Austin, Center for Space Research (CSR) and the Jet Propulsion Laboratory (JPL), Pasadena, provide GRACE monthly solutions, which differ slightly due to the application of different reduction models and centre-specific processing schemes. The GRACE data are used to investigate the mass variations in Fennoscandia, an area which is strongly influenced by glacial isostatic adjustment (GIA). Hence the focus is set on the computation of secular trends. Different filters (e.g. isotropic and non-isotropic filters) are discussed for the removal of high frequency noise to permit the extraction of the GIA signal. The resulting GRACE based mass variations are compared to global hydrology models (WGHM, LaDWorld) in order to (a) separate possible hydrological signals and (b) validate the hydrology models with regard to long period and secular components. In addition, a pattern matching algorithm is applied to localise the uplift centre, and finally the GRACE signal is compared with the results from a geodynamical modelling. The GRACE data clearly show temporal gravity variations in Fennoscandia. The secular variations are in good agreement with former studies and other independent data. The uplift centre is located over the Bothnian Bay, and the whole uplift area comprises the Scandinavian Peninsula and Finland. The secular variations derived from the GFZ, CSR and JPL monthly solutions differ up to 20%, which is not statistically significant, and the largest signal of about 1.2 μGal/year is obtained from the GFZ solution. Besides the GIA signal, two peaks with positive trend values of about 0.8 μGal/year exist in central eastern Europe, which are not GIA-induced, and
NASA Astrophysics Data System (ADS)
Menna, F.; Nocerino, E.; Troisi, S.; Remondino, F.
2015-04-01
The surveying and 3D modelling of objects that extend both below and above the water level, such as ships, harbour structures and offshore platforms, is still an open issue. Commonly, a combined and simultaneous survey is the adopted solution, with acoustic/optical sensors underwater and in air, respectively (the most common setup), or optical/optical sensors both below and above the water level. In both cases, the system must be calibrated, and a ship must be used and properly equipped, including a navigation system for the alignment of sequential 3D point clouds. Such a system is usually highly expensive and has been proved to work with still structures; for free-floating objects, however, it does not provide a very practical solution. In this contribution, a flexible, low-cost alternative for surveying floating objects is presented. The method is essentially based on photogrammetry, employed for surveying and modelling both the emerged and submerged parts of the object. Special targets, named Orientation Devices, are specifically designed and adopted for the successive alignment of the two photogrammetric models (underwater and in air). A typical scenario where the proposed procedure can be particularly suitable and effective is the case of a ship after an accident whose damaged part is underwater and needs to be measured (Figure 1). The details of the mathematical procedure are provided in the paper, together with a critical explanation of the results obtained from the adoption of the method for the survey of a small pleasure boat in floating condition.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low-level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
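The discrete-event style of simulation described above can be illustrated with a generic sketch (plain Python with a heap-based event queue; this is not CONFIG itself, and the component and event names are made up): a valve failure at one time schedules a cascaded low-flow event downstream, the kind of cascading failure effect the testbed is designed to exercise.

```python
# Toy discrete-event simulation: events are (time, description, action)
# tuples popped in time order; an action may schedule further events,
# producing cascading effects. All names here are hypothetical.
import heapq

events = []  # priority queue ordered by event time

def schedule(t, name, action=None):
    heapq.heappush(events, (t, name, action))

def valve_fails(t):
    # failure propagates: downstream flow drops shortly afterwards
    schedule(t + 2.0, "low flow at water processor", None)

schedule(0.0, "nominal operation", None)
schedule(5.0, "valve sticks closed", valve_fails)

log = []
while events:
    t, name, action = heapq.heappop(events)
    log.append((t, name))
    if action:
        action(t)

print(log)  # events in time order, including the cascaded failure
```

Popping from the heap always yields the earliest pending event, so the cascaded low-flow event at t = 7.0 is processed after the failure that caused it.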
Glacial isostatic adjustment on 3-D Earth models: a finite-volume formulation
NASA Astrophysics Data System (ADS)
Latychev, Konstantin; Mitrovica, Jerry X.; Tromp, Jeroen; Tamisiea, Mark E.; Komatitsch, Dimitri; Christara, Christina C.
2005-05-01
We describe and present results from a finite-volume (FV) parallel computer code for forward modelling the Maxwell viscoelastic response of a 3-D, self-gravitating, elastically compressible Earth to an arbitrary surface load. We implement a conservative, control volume discretization of the governing equations using a tetrahedral grid in Cartesian geometry and a low-order, linear interpolation. The basic starting grid honours all major radial discontinuities in the Preliminary Reference Earth Model (PREM), and the models are permitted arbitrary spatial variations in viscosity and elastic parameters. These variations may be either continuous or discontinuous at a set of grid nodes forming a 3-D surface within the (regional or global) modelling domain. In the second part of the paper, we adopt the FV methodology and a spherically symmetric Earth model to generate a suite of predictions sampling a broad class of glacial isostatic adjustment (GIA) data types (3-D crustal motions, long-wavelength gravity anomalies). These calculations, based on either a simple disc load history or a global Late Pleistocene ice load reconstruction (ICE-3G), are benchmarked against predictions generated using the traditional normal-mode approach to GIA. The detailed comparison provides a guide for future analyses (e.g. what grid resolution is required to obtain a specific accuracy?) and it indicates that discrepancies in predictions of 3-D crustal velocities less than 0.1 mm yr⁻¹ are generally obtainable for global grids with ~3 × 10⁶ nodes; however, grids of higher resolution are required to predict large-amplitude (>1 cm yr⁻¹) radial velocities in zones of peak post-glacial uplift (e.g. James Bay) to the same level of absolute accuracy. We conclude the paper with a first application of the new formulation to a 3-D problem. Specifically, we consider the impact of mantle viscosity heterogeneity on predictions of present-day 3-D crustal motions in North America. In these tests, the
Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model
Konya, Andras
2006-12-15
The purpose of the study was to compare two similar foreign body retrieval devices, the Texan™ (TX) and the Texan LONGhorn™ (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG) with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall. The better torque control and maneuverability of the TX-LG resulted in better performance in large anatomic spaces.
Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia
NASA Astrophysics Data System (ADS)
van der Wal, Wouter; Barnhoorn, Auke; Stocchi, Paolo; Gradmann, Sofie; Wu, Patrick; Drury, Martyn; Vermeersen, Bert
2013-07-01
Models for glacial isostatic adjustment (GIA) can provide constraints on rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes a single deformation mechanism for mantle rocks. Such a simplified viscosity profile makes it hard to compare the inferred mantle rheology to inferences from seismology and laboratory experiments. It is unknown what constraints GIA observations can provide on more realistic mantle rheology with an ice history that is not based on an a priori mantle viscosity profile. This paper investigates a model for GIA with a new ice history for Fennoscandia that is constrained by palaeoclimate proxies and glacial sediments. Diffusion and dislocation creep flow law data are taken from a compilation of laboratory measurements on olivine. Upper-mantle temperature data sets down to 400 km depth are derived from surface heatflow measurements, a petrochemical model for Fennoscandia and seismic velocity anomalies. Creep parameters below 400 km are taken from an earlier study and vary only with depth. The olivine grain size and water content (a wet state, or a dry state) are used as free parameters. The solid Earth response is computed with a global spherical 3-D finite-element model for an incompressible, self-gravitating Earth. We compare predictions to sea level data and GPS uplift rates in Fennoscandia. The objective is to see if the mantle rheology and the ice model are consistent with GIA observations. We also test if the inclusion of dislocation creep gives any improvements over predictions with diffusion creep only, and whether the laterally varying temperatures result in an improved fit compared to a widely used 1-D viscosity profile (VM2). We find that sea level data can be explained with our ice model and with information on mantle rheology from laboratory experiments
Jen, Min-Hua; Bottle, Alex; Kirkwood, Graham; Johnston, Ron; Aylin, Paul
2011-09-01
We have previously described a system for monitoring a number of healthcare outcomes using case-mix adjustment models. It is desirable to automate the model fitting process in such a system if monitoring covers a large number of outcome measures or subgroup analyses. Our aim was to compare the performance of three different variable selection strategies: "manual", "automated" backward elimination and re-categorisation, and including all variables at once, irrespective of their apparent importance, with automated re-categorisation. Logistic regression models for predicting in-hospital mortality and emergency readmission within 28 days were fitted to an administrative database for 78 diagnosis groups and 126 procedures from 1996 to 2006 for National Health Service hospital trusts in England. The performance of the models was assessed with Receiver Operating Characteristic (ROC) c statistics (measuring discrimination) and the Brier score (measuring average predictive accuracy). Overall, discrimination was similar for diagnoses and procedures and consistently better for mortality than for emergency readmission. Brier scores were generally low overall (indicating higher accuracy) and were lower for procedures than diagnoses, with a few exceptions for emergency readmission within 28 days. Among the three variable selection strategies, the automated procedure had performance similar to the manual method in almost all cases except low-risk groups with few outcome events. For the rapid generation of multiple case-mix models we suggest applying automated modelling to reduce the time required, in particular when examining different outcomes of large numbers of procedures and diseases in routinely collected administrative health data. PMID:21556848
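As a hedged illustration of the two performance measures named above, the following sketch computes a ROC c statistic and a Brier score for a handful of made-up mortality predictions (this is generic Python, not the study's data or code):

```python
# Illustrative only: the c statistic is the probability that a randomly
# chosen event gets a higher predicted risk than a randomly chosen
# non-event; the Brier score is the mean squared prediction error.

def c_statistic(y_true, y_prob):
    """ROC c statistic over all event/non-event pairs (ties count 0.5)."""
    pairs = concordant = ties = 0
    for yi, pi in zip(y_true, y_prob):
        for yj, pj in zip(y_true, y_prob):
            if yi == 1 and yj == 0:
                pairs += 1
                if pi > pj:
                    concordant += 1
                elif pi == pj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

def brier_score(y_true, y_prob):
    """Mean squared difference between outcome (0/1) and predicted risk."""
    return sum((y - p) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

outcomes = [0, 0, 1, 0, 1, 1, 0, 0]                   # hypothetical deaths
risks    = [0.1, 0.2, 0.8, 0.3, 0.6, 0.9, 0.2, 0.4]   # predicted risks

print(c_statistic(outcomes, risks))  # 1.0: every event outranks every non-event
print(brier_score(outcomes, risks))
```

Lower Brier scores indicate higher accuracy, matching the paper's reading of "generally low overall".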
PAL-DS MODEL: THE PAL MODEL INCLUDING DEPOSITION AND SEDIMENTATION. USER'S GUIDE
PAL is an acronym for an air quality model which applies a Gaussian plume diffusion algorithm to point, area, and line sources. The model is available from the U.S. Environmental Protection Agency and can be used for estimating hourly and short-term average concentrations of non-...
NASA Astrophysics Data System (ADS)
Addor, Nans; Rohrer, Marco; Furrer, Reinhard; Seibert, Jan
2016-03-01
Bias adjustment methods usually do not account for the origins of biases in climate models and instead perform empirical adjustments. Biases in the synoptic circulation are for instance often overlooked when postprocessing regional climate model (RCM) simulations driven by general circulation models (GCMs). Yet considering atmospheric circulation helps to establish links between the synoptic and the regional scale, and thereby provides insights into the physical processes leading to RCM biases. Here we investigate how synoptic circulation biases impact regional climate simulations and influence our ability to mitigate biases in precipitation and temperature using quantile mapping. We considered 20 GCM-RCM combinations from the ENSEMBLES project and characterized the dominant atmospheric flow over the Alpine domain using circulation types. We report in particular a systematic overestimation of the frequency of westerly flow in winter. We show that it contributes to the generalized overestimation of winter precipitation over Switzerland, and this wet regional bias can be reduced by improving the simulation of synoptic circulation. We also demonstrate that statistical bias adjustment relying on quantile mapping is sensitive to circulation biases, which leads to residual errors in the postprocessed time series. Overall, decomposing GCM-RCM time series using circulation types reveals connections missed by analyses relying on monthly or seasonal values. Our results underscore the necessity to better diagnose process misrepresentation in climate models to progress with bias adjustment and impact modeling.
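Quantile mapping, the bias-adjustment technique whose sensitivity to circulation biases is examined above, can be sketched in a few lines. This is a minimal empirical version on synthetic numbers, not the ENSEMBLES postprocessing chain:

```python
# Empirical quantile mapping sketch: a simulated value is replaced by the
# observed value at the same quantile of the calibration-period samples.
import bisect

def quantile_map(value, sim_sorted, obs_sorted):
    """Map a simulated value onto the observed distribution via its rank."""
    # empirical quantile of `value` within the simulated calibration sample
    rank = bisect.bisect_left(sim_sorted, value)
    q = rank / (len(sim_sorted) - 1)
    # read off the same quantile from the observed sample
    idx = round(q * (len(obs_sorted) - 1))
    return obs_sorted[idx]

# hypothetical winter precipitation (mm/day): the model is wet-biased
simulated = sorted([3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
observed  = sorted([1.0, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.0])

print(quantile_map(8.0, simulated, observed))  # wet simulated value mapped down
```

Because the mapping is fitted per quantile rather than per circulation type, a frequency bias in (say) westerly flow shifts which quantiles occur and leaves residual errors in the corrected series, which is the sensitivity the study demonstrates.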
Bazzazian, S; Besharat, M A
2012-01-01
The aim of this study was to develop and test a model of adjustment to type I diabetes. Three hundred young adults (172 females and 128 males) with type I diabetes were asked to complete the Adult Attachment Inventory (AAI), the Brief Illness Perception Questionnaire (Brief IPQ), the Task-oriented subscale of the Coping Inventory for Stressful Situations (CISS), the D-39, and the well-being subscale of the Mental Health Inventory (MHI). HbA1c was obtained from laboratory examination. Results from structural equation analysis partly supported the hypothesized model. Secure and avoidant attachment styles were found to have effects on illness perception; the ambivalent attachment style did not have a significant effect on illness perception. All three attachment styles had significant effects on the task-oriented coping strategy. Avoidant attachment also had a negative direct effect on adjustment. Regression effects of illness perception and task-oriented coping strategy on adjustment were positive; therefore, positive illness perception and greater use of the task-oriented coping strategy predict better adjustment to diabetes. The results thus confirmed the theoretical bases and empirical evidence for the role of attachment styles in adjustment to chronic disease and can be helpful in devising preventive policies, identifying high-risk maladjusted patients, and planning special psychological treatment. PMID:21678193
NASA Astrophysics Data System (ADS)
Vannière, Benoît; Guilyardi, Eric; Toniazzo, Thomas; Madec, Gurvan; Woolnough, Steve
2014-10-01
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal to decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to the source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of bias advent, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which
NASA Astrophysics Data System (ADS)
Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.
2015-12-01
The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.
Extension of the ADC Charge-Collection Model to Include Multiple Junctions
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
2011-01-01
The ADC model is a charge-collection model derived for simple p-n junction silicon diodes having a single reverse-biased p-n junction at one end and an ideal substrate contact at the other end. The present paper extends the model to include multiple junctions, and the goal is to estimate how collected charge is shared by the different junctions.
Kargo, William J.; Ramakrishnan, Arun; Hart, Corey B.; Rome, Lawrence C.
2010-01-01
Spinal circuits may organize trajectories using pattern generators and synergies. In frogs, prior work supports fixed-duration pulses of fixed-composition synergies, forming primitives. In wiping behaviors, spinal frogs adjust their motor activity according to the starting limb position and generate fairly straight and accurate isochronous trajectories across the workspace. To test whether a compact description using primitives modulated by proprioceptive feedback could reproduce such trajectory formation, we built a biomechanical model based on physiological data. We recorded from hindlimb muscle spindles to evaluate possible proprioceptive input. As movement was initiated, early skeletofusimotor activity enhanced the firing rates of many muscle spindles. Before movement began, a rapid estimate of the limb position from simple combinations of spindle rates was possible. Three primitives were used in the model, with muscle compositions based on those observed in frogs. Our simulations showed that simple gain and phase shifts of primitives based on published feedback mechanisms could generate accurate isochronous trajectories and motor patterns that matched those observed. Although on-line feedback effects were omitted from the model after movement onset, our primitive-based model reproduced the wiping behavior across a range of starting positions. Without modifications from proprioceptive feedback, the model behaviors missed the target in a manner similar to that in deafferented frogs. These data show how early proprioception might be used to make a simple estimate of the initial limb state and to implicitly plan a movement using observed spinal motor primitives. Simulations showed that the choice of synergy composition played a role in this simplicity. To generate froglike trajectories, a hip flexor synergy without sartorius required motor patterns with more proprioceptive knee flexor control than did patterns built with a more natural synergy including sartorius. Such synergy
ERIC Educational Resources Information Center
Rulison, Kelly L.; Gest, Scott D.; Loken, Eric; Welsh, Janet A.
2010-01-01
The association between affiliating with aggressive peers and behavioral, social and psychological adjustment was examined. Students initially in 3rd, 4th, and 5th grade (N = 427) were followed biannually through 7th grade. Students' peer-nominated groups were identified. Multilevel modeling was used to examine the independent contributions of…
The Effectiveness of the Strength-Centered Career Adjustment Model for Dual-Career Women in Taiwan
ERIC Educational Resources Information Center
Wang, Yu-Chen; Tien, Hsiu-Lan Shelley
2011-01-01
The authors investigated the effectiveness of a Strength-Centered Career Adjustment Model for dual-career women (N = 28). Fourteen women in the experimental group received strength-centered career counseling for 6 to 8 sessions; the 14 women in the control group received test services in 1 to 2 sessions. All participants completed the Personal…
ERIC Educational Resources Information Center
Hawkins, Amy L.; Haskett, Mary E.
2014-01-01
Background: Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and…
Modelling Mediterranean agro-ecosystems by including agricultural trees in the LPJmL model
NASA Astrophysics Data System (ADS)
Fader, M.; von Bloh, W.; Shi, S.; Bondeau, A.; Cramer, W.
2015-11-01
In the Mediterranean region, climate and land use change are expected to impact on natural and agricultural ecosystems by warming, reduced rainfall, direct degradation of ecosystems and biodiversity loss. Human population growth and socioeconomic changes, notably on the eastern and southern shores, will require increases in food production and put additional pressure on agro-ecosystems and water resources. Coping with these challenges requires informed decisions that, in turn, require assessments by means of a comprehensive agro-ecosystem and hydrological model. This study presents the inclusion of 10 Mediterranean agricultural plants, mainly perennial crops, in an agro-ecosystem model (Lund-Potsdam-Jena managed Land - LPJmL): nut trees, date palms, citrus trees, orchards, olive trees, grapes, cotton, potatoes, vegetables and fodder grasses. The model was successfully tested in three model outputs: agricultural yields, irrigation requirements and soil carbon density. With the development presented in this study, LPJmL is now able to simulate in good detail and mechanistically the functioning of Mediterranean agriculture with a comprehensive representation of ecophysiological processes for all vegetation types (natural and agricultural) and in a consistent framework that produces estimates of carbon, agricultural and hydrological variables for the entire Mediterranean basin. This development paves the way for further model extensions aiming at the representation of alternative agro-ecosystems (e.g. agroforestry), and opens the door for a large number of applications in the Mediterranean region, for example assessments of the consequences of land use transitions, the influence of management practices and climate change impacts.
Modelling Mediterranean agro-ecosystems by including agricultural trees in the LPJmL model
NASA Astrophysics Data System (ADS)
Fader, M.; von Bloh, W.; Shi, S.; Bondeau, A.; Cramer, W.
2015-06-01
Climate and land use change in the Mediterranean region are expected to affect natural and agricultural ecosystems through decreases in precipitation and increases in temperature, as well as biodiversity loss and anthropogenic degradation of natural resources. Demographic growth on the eastern and southern shores will require increases in food production and put additional pressure on agro-ecosystems and water resources. Coping with these challenges requires informed decisions that, in turn, require assessments by means of a comprehensive agro-ecosystem and hydrological model. This study presents the inclusion of 10 Mediterranean agricultural plants, mainly perennial crops, in an agro-ecosystem model (LPJmL): nut trees, date palms, citrus trees, orchards, olive trees, grapes, cotton, potatoes, vegetables and fodder grasses. The model was successfully tested in three model outputs: agricultural yields, irrigation requirements and soil carbon density. With the development presented in this study, LPJmL is now able to simulate in good detail and mechanistically the functioning of Mediterranean agriculture with a comprehensive representation of ecophysiological processes for all vegetation types (natural and agricultural) and in a consistent framework that produces estimates of carbon, agricultural and hydrological variables for the entire Mediterranean basin. This development paves the way for further model extensions aiming at the representation of alternative agro-ecosystems (e.g. agroforestry), and opens the door for a large number of applications in the Mediterranean region, for example assessments of the consequences of land use transitions, the influence of management practices and climate change impacts.
A constitutive model for the forces of a magnetic bearing including eddy currents
NASA Technical Reports Server (NTRS)
Taylor, D. L.; Hebbale, K. V.
1993-01-01
A multiple magnet bearing can be developed from N individual electromagnets. The constitutive relationships for a single magnet in such a bearing are presented. Analytical expressions are developed for a magnet with poles arranged circumferentially. Maxwell's field equations are used, so the model easily includes the effects of induced eddy currents due to the rotation of the journal. Eddy currents must be included in any dynamic model because they are the only speed-dependent parameter and may lead to a critical speed for the bearing. The model is applicable to bearings using attraction or repulsion.
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
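The general idea of a model-adjustment procedure, combining regional-model predictions with local observations, can be sketched with a single multiplicative bias factor fitted in log space. This is only an illustration of the concept; the actual procedures applied in the study are more elaborate, and all numbers below are hypothetical:

```python
# Hypothetical sketch: estimate a multiplicative bias-correction factor
# from local observed/predicted pairs, then apply it to new regional
# predictions. Log space keeps the correction symmetric for ratios.
import math

def fit_adjustment(observed, predicted):
    """Mean log-residual -> multiplicative bias-correction factor."""
    resid = [math.log(o / p) for o, p in zip(observed, predicted)]
    return math.exp(sum(resid) / len(resid))

def adjust(prediction, factor):
    return prediction * factor

# hypothetical storm-runoff loads (kg) at local monitoring sites
obs  = [12.0, 30.0, 8.0, 50.0]
pred = [4.0, 10.0, 2.0, 20.0]   # regional model underpredicts here

f = fit_adjustment(obs, pred)
print(f)                # ~3.1: local data pull the predictions upward
print(adjust(100.0, f))
```

With the local factor applied, a new regional prediction of 100 kg would be revised to roughly 300 kg for this (made-up) watershed.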
A self-adjusting flow dependent formulation for the classical Smagorinsky model coefficient
NASA Astrophysics Data System (ADS)
Ghorbaniasl, G.; Agnihotri, V.; Lacor, C.
2013-05-01
In this paper, we propose an efficient formula for estimating the model coefficient of a Smagorinsky-model-based subgrid scale eddy viscosity. The method allows vanishing eddy viscosity through a vanishing model coefficient in regions where the eddy viscosity should be zero. The advantage of this method is that the coefficient of the subgrid scale model is a function of the flow solution, including the translational and the rotational velocity field contributions. Furthermore, the value of the model coefficient is optimized without using the dynamic procedure, thereby saving significantly on computational cost. In addition, the method guarantees the model coefficient to be always positive with low fluctuation in space and time. For validation purposes, three test cases are chosen: (i) a fully developed channel flow at Re_τ = 180 and 395, (ii) a fully developed flow through a rectangular duct of square cross section at Re_τ = 300, and (iii) a smooth subcritical flow past a stationary circular cylinder at a Reynolds number of Re = 3900, where the wake is fully turbulent but the cylinder boundary layers remain laminar. A main outcome is the good behavior of the proposed model as compared to reference data. We have also applied the proposed method to a CT-based simplified human upper airway model, where the flow is transient.
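For reference, the classical Smagorinsky eddy viscosity that the proposed coefficient formula feeds into is ν_t = (C_s Δ)² |S|, with |S| the strain-rate magnitude. Below is a minimal sketch with a constant coefficient; the paper's contribution is precisely to replace this constant with a flow-dependent value:

```python
# Classical Smagorinsky subgrid-scale eddy viscosity (constant Cs).
# grad_u is the 3x3 velocity-gradient tensor at a grid point.
import math

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """nu_t = (cs * delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij)."""
    # symmetric strain-rate tensor S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
    s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
         for i in range(3)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                for i in range(3) for j in range(3)))
    return (cs * delta) ** 2 * s_mag

# pure shear du/dy = 10 s^-1 on a grid of filter width delta = 0.01 m
grad_u = [[0.0, 10.0, 0.0],
          [0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0]]
print(smagorinsky_nu_t(grad_u, delta=0.01))
```

In laminar or irrotational regions |S| alone does not vanish appropriately, which is why a constant C_s over-dissipates there; a coefficient that goes to zero with the resolved flow state, as proposed, removes that spurious eddy viscosity.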
Including latent and sensible heat fluxes from sea spray in global weather and climate models
NASA Astrophysics Data System (ADS)
Copsey, Dan
2016-04-01
Most standard weather and climate models calculate interfacial latent (evaporation) and sensible heat fluxes over the ocean based on parameterisations of atmospheric turbulence, using the wave state only in the calculation of surface roughness length. They ignore latent and sensible heat fluxes generated by sea spray, which is an acceptable assumption at low wind speeds. However at high wind speeds (> 15 m/s) a significant amount of sea spray is generated from the sea surface which, while airborne, cools to an equilibrium temperature, absorbs heat and releases moisture before re-impacting the sea surface. This could impact, for example, the total heat loss from the Southern Ocean (which is anomalously warm in Met Office coupled models) or the accuracy of tropical cyclone forecasts. A modified version of the Fairall sea spray parameterisation scheme has been tested in the Met Office Unified Model including the JULES surface exchange model in both climate and NWP mode. The fast part of the scheme models the temperature change of the droplets to an equilibrium temperature and the slow part of the scheme models the evaporation and heat absorption while the droplets remain airborne. Including this scheme in the model cools and moistens the near surface layers of the atmosphere during high wind events, including tropical cyclones. Sea spray goes on to increase the convection intensity and precipitation near the high wind events in the model.
Modification of TOUGH2 to Include the Dusty Gas Model for Gas Diffusion
WEBB, STEPHEN W.
2001-10-01
The GEO-SEQ Project is investigating methods for geological sequestration of CO₂. This project, which is directed by LBNL and includes a number of other industrial, university, and national laboratory partners, is evaluating computer simulation methods including TOUGH2 for this problem. The TOUGH2 code, which is a widely used code for flow and transport in porous and fractured media, includes simplified methods for gas diffusion based on a direct application of Fick's law. As shown by Webb (1998) and others, the Dusty Gas Model (DGM) is better than Fick's law for modeling gas-phase diffusion in porous media. In order to improve gas-phase diffusion modeling for the GEO-SEQ Project, the EOS7R module in the TOUGH2 code has been modified to include the Dusty Gas Model as documented in this report. In addition, the liquid diffusion model has been changed from a mass-based formulation to a mole-based model. Modifications for separate and coupled diffusion in the gas and liquid phases have also been completed. The results from the DGM are compared to the Fick's law behavior for TCE and PCE diffusion across a capillary fringe. The differences are small due to the relatively high permeability (k = 10⁻¹¹ m²) of the problem and the small mole fraction of the gases. Additional comparisons for lower permeabilities and higher mole fractions may be useful.
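For orientation, the simplified Fick's-law description that the report compares against the Dusty Gas Model computes a diffusive flux from a concentration gradient and an effective diffusivity. A hedged sketch with hypothetical numbers follows; the DGM additionally couples Knudsen diffusion, viscous flow and multicomponent interactions, all of which are omitted here:

```python
# Fick's-law diffusive flux sketch for a trace gas in a porous medium.
# All parameter values are hypothetical, not taken from the report.

def fick_flux(d_eff, c1, c2, dx):
    """Fick's-law molar flux (mol m^-2 s^-1) across a layer of thickness dx."""
    return -d_eff * (c2 - c1) / dx

# hypothetical TCE vapour diffusing across a 0.5 m capillary fringe
d_free = 8.0e-6                 # free-air diffusion coefficient, m^2/s (assumed)
porosity, tortuosity = 0.35, 0.7
d_eff = d_free * porosity * tortuosity   # simple effective diffusivity

flux = fick_flux(d_eff, c1=1.0, c2=0.0, dx=0.5)  # concentrations in mol/m^3
print(flux)   # positive: flux runs from high to low concentration
```

At the high permeability quoted in the report (k = 10⁻¹¹ m²) and small mole fractions, the DGM's extra terms contribute little, which is consistent with the small differences the report finds.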
Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand
NASA Astrophysics Data System (ADS)
Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat
2016-07-01
The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand (latitudes 0°N to 25°N, longitudes 95°E to 110°E). These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, and the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5° x 5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the VTEC estimated from the International Reference Ionosphere model (IRI-VTEC) as well as from the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilation with the OBS-VTEC. However, IRI-VTEC assimilation improves the ASHM estimation more than IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the
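The latitude-dependent weighting step can be sketched as a per-cell blend of the background model VTEC with the observations. The weight function and its parameters below are illustrative assumptions; the abstract does not give the actual factors used.

```python
import math

def model_weight(lat_deg, crest_lat=10.0, width=5.0):
    """Illustrative latitude-dependent weight for the background model
    (IRI- or IGS-VTEC): down-weight it near the assumed EIA crest,
    where observed gradients are sharpest."""
    return 0.5 * (1.0 - math.exp(-((lat_deg - crest_lat) / width) ** 2))

def assimilate_cell(obs_vtec, model_vtec, lat_deg):
    """Blend one grid cell; cells with no observation keep the model value."""
    if obs_vtec is None:
        return model_vtec
    w = model_weight(lat_deg)
    return w * model_vtec + (1.0 - w) * obs_vtec

# Near the crest the blend follows the observation; gaps fall back to the model.
print(assimilate_cell(25.0, 15.0, 10.0))   # 25.0: pure observation at the crest
print(assimilate_cell(None, 15.0, 2.5))    # 15.0: model fills an observation gap
```

The point of the sketch is only the structure: a background field (model) corrected toward observations with a weight that varies with latitude, which is the shape of the improvement the abstract reports.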
McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D
1999-10-20
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design. PMID:18324169
Gleeson, Michael R.; Sheridan, John T.
2009-09-15
The photochemical processes present during free-radical-based holographic grating formation are examined. A kinetic model is presented, which includes, in a more nearly complete and physically realistic way, most of the major photochemical and nonlocal photopolymerization-driven diffusion effects. These effects include: (i) non-steady-state kinetics; (ii) spatially and temporally nonlocal polymer chain growth; (iii) time-varying photon absorption; (iv) diffusion-controlled viscosity effects; (v) multiple termination mechanisms; and (vi) inhibition. The convergence of the predictions of the resulting model is then examined. Comparisons with experimental results are carried out in Part II of this series of papers [J. Opt. Soc. Am. B 26, 1746 (2009)].
SAMI2-PE: A model of the ionosphere including multistream interhemispheric photoelectron transport
NASA Astrophysics Data System (ADS)
Varney, R. H.; Swartz, W. E.; Hysell, D. L.; Huba, J. D.
2012-06-01
In order to improve model comparisons with recently improved incoherent scatter radar measurements at the Jicamarca Radio Observatory, we have added photoelectron transport and energy redistribution to the two-dimensional SAMI2 ionospheric model. The photoelectron model uses multiple pitch angle bins, includes effects associated with curved magnetic field lines, and uses an energy degradation procedure which conserves energy on coarse, non-uniformly spaced energy grids. The photoelectron model generates secondary electron production rates and thermal electron heating rates which are then passed to the fluid equations in SAMI2. We then compare electron and ion temperatures and electron densities of this modified SAMI2 model with measurements of these parameters over a range of altitudes from 90 km to 1650 km (L = 1.26) over a 24-hour period. The new electron heating model is a significant improvement over the semi-empirical model used in SAMI2. The electron temperatures above the F-peak from the modified model qualitatively reproduce the shape of the measurements as functions of time and altitude and quantitatively agree with the measurements to within ˜30% or better during the entire day, including during the rapid temperature increase at dawn.
Bordas, R.; Gillow, K.; Lou, Q.; Efimov, I. R.; Gavaghan, D.; Kohl, P.; Grau, V.; Rodriguez, B.
2011-01-01
The function of the ventricular specialized conduction system in the heart is to ensure the coordinated electrical activation of the ventricles. It is therefore critical to the overall function of the heart, and has also been implicated as an important player in various diseases, including lethal ventricular arrhythmias such as ventricular fibrillation and drug-induced torsades de pointes. However, current ventricular models of electrophysiology usually ignore the specialized conduction system or include only highly simplified representations of it. Here, we describe the development of an image-based, species-consistent, anatomically-detailed model of rabbit ventricular electrophysiology that incorporates a detailed description of the free-running part of the specialized conduction system. Techniques used for the construction of the geometrical model of the specialized conduction system from a magnetic resonance dataset and integration of the system model into a ventricular anatomical model, developed from the same dataset, are described. Computer simulations of rabbit ventricular electrophysiology are conducted using the novel anatomical model and rabbit-specific membrane kinetics to investigate the importance of the components and properties of the conduction system in determining ventricular function under physiological conditions. Simulation results are compared to panoramic optical mapping experiments for model validation and results interpretation. Full access is provided to the anatomical models developed in this study. PMID:21672547
Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors
NASA Astrophysics Data System (ADS)
Hasuike, Takashi; Ishii, Hiroaki
2009-01-01
This paper considers robust programming problems based on the mean-variance model including uncertainty sets and fuzzy factors. Since these problems are not well-defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean-absolute deviation and performing equivalent transformations.
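A minimal sketch of the mean-absolute deviation risk measure that replaces the variance in the transformed problem (the scenario returns and candidate weights below are made up; the actual model also carries chance constraints, fuzzy goals and possibility measures not shown here):

```python
def portfolio_mad(weights, scenarios):
    """Mean-absolute deviation of portfolio returns over equally likely
    scenarios: a risk proxy for the variance that is linearizable, which
    is what makes the transformed problem efficiently solvable."""
    rets = [sum(w * r for w, r in zip(weights, s)) for s in scenarios]
    mean = sum(rets) / len(rets)
    return sum(abs(r - mean) for r in rets) / len(rets)

# Illustrative scenario returns for two assets (assumed data):
scenarios = [(0.10, 0.02), (-0.05, 0.03), (0.07, 0.01), (0.00, 0.02)]

# Compare two candidate portfolios: concentrated vs diversified.
for w in ((1.0, 0.0), (0.3, 0.7)):
    print(w, round(portfolio_mad(w, scenarios), 4))
```

Because the absolute value can be rewritten with auxiliary variables and linear constraints, minimizing this objective is a linear program, unlike the quadratic mean-variance formulation.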
NASA Technical Reports Server (NTRS)
Entekhabi, D.; Eagleson, P. S.
1989-01-01
Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
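The gain from the subgrid pdf treatment can be seen in a toy version: evaluate a threshold runoff law both at the grid-mean soil moisture and in expectation over an assumed exponential pdf. Neither the pdf nor the runoff law is the paper's actual form; both are stand-ins chosen for illustration.

```python
import math

def runoff_point(s):
    """Deterministic point-scale runoff fraction: none below a saturation
    threshold, all excess above it (toy bucket model, assumed form)."""
    return max(0.0, s - 0.8)

def expected_runoff(mean_s, ds=1e-3, s_max=10.0):
    """Grid-cell runoff: integrate the point process over an exponential
    pdf of soil moisture with the given mean (simple rectangle rule)."""
    lam = 1.0 / mean_s
    total, s = 0.0, 0.0
    while s < s_max:
        total += runoff_point(s) * lam * math.exp(-lam * s) * ds
        s += ds
    return total

print(runoff_point(0.5))                # 0.0 at the grid-mean moisture
print(round(expected_runoff(0.5), 3))   # positive once subgrid variability is included
```

Evaluating the nonlinear process at the mean predicts zero runoff, while the pdf-weighted expectation is positive because part of the cell exceeds the threshold. This is exactly why the parameterizations above integrate over the subgrid distributions rather than using grid-mean values.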
NASA Astrophysics Data System (ADS)
Hincapié, Doracelly; Ospina, Juan
2014-06-01
Recently, a mathematical model of pandemic influenza was proposed including typical control strategies such as antivirals, vaccination and school closure, and considering explicitly the effects of immunity acquired from the early outbreaks on the later outbreaks of the disease. In that model, algebraic expressions for the basic reproduction number (without control strategies) and the effective reproduction number (with control strategies) were derived and numerically estimated. A drawback of this model of pandemic influenza is that it ignores the effects of differential susceptibility due to immunosuppression and the effects of the complexity of the actual contact networks between individuals. We have developed a generalized model which includes such effects of heterogeneity. Specifically, we consider the influence of air network connectivity on the spread of pandemic influenza and the influence of immunosuppression when the population is divided into two immune classes. We use an algebraic expression, namely the Tutte polynomial, to characterize the complexity of the contact network. Until now, the influence of air network connectivity on the spread of pandemic influenza has been studied numerically, but no algebraic expressions have been used to summarize the level of network complexity. The generalized model proposed here includes the typical control strategies previously mentioned (antivirals, vaccination and school closure) combined with restrictions on travel. For the generalized model, the corresponding reproduction numbers will be algebraically computed and the effect of the contact network will be established in terms of the Tutte polynomial of the network.
Conger, R D; Patterson, G R; Ge, X
1995-02-01
In this study of parental stress and adolescent adjustment, experiences of negative life events during the recent past were used to generate a measure of acute stress. In addition, multiple indicators based on reports from various informants were used to estimate latent constructs for parental depression, discipline practices, and adolescent adjustment. Employing 2 independent samples of families from 2 different regions of the country (rural Iowa and a medium-sized city in Oregon), structural equation models were used to test the hypothesis that in intact families acute stress experienced by parents is linked to boys' adjustment (average age equaled 11.8 years in the Oregon sample, 12.7 years in the Iowa sample) through 2 different causal mechanisms. The findings showed that parental stress was related to adjustment through stress-related parental depression that is, in turn, correlated with disrupted discipline practices. Poor discipline appears to provide the direct link with developmental outcomes. The structural equation model (SEM) used to test the proposed mediational process was consistent with the data for mothers and boys from both the Oregon and the Iowa samples. The similarity in results was less clear for fathers and boys. Implications of these results for future replication studies are discussed. PMID:7497831
Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J
2016-07-01
The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment. PMID:26759225
A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models
ERIC Educational Resources Information Center
Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu
2012-01-01
This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…
ERIC Educational Resources Information Center
Donaldson, Tarryn; Earl, Joanne K.; Muratore, Alexa M.
2010-01-01
Extending earlier research, this study explores individual (e.g. demographic and health characteristics), psychosocial (e.g. mastery and planning) and organizational factors (e.g. conditions of workforce exit) influencing retirement adjustment. Survey data were collected from 570 semi-retired and retired men and women aged 45 years and older.…
ERIC Educational Resources Information Center
Wong, Jessica Y.; Earl, Joanne K.
2009-01-01
This cross-sectional study examines three predictors of retirement adjustment: individual (demographic and health), psychosocial (work centrality), and organizational (conditions of workforce exit). It also examines the effect of work centrality on post-retirement activity levels. Survey data was collected from 394 retirees (aged 45-93 years).…
Divorce Stress and Adjustment Model: Locus of Control and Demographic Predictors.
ERIC Educational Resources Information Center
Barnet, Helen Smith
This study depicts the divorce process over three time periods: predivorce decision phase, divorce proper, and postdivorce. Research has suggested that persons with a more internal locus of control experience less intense and shorter intervals of stress during the divorce proper and better postdivorce adjustment than do persons with a more…
ERIC Educational Resources Information Center
Sonderegger, Robi; Barrett, Paula M.; Creed, Peter A.
2004-01-01
Building on previous cultural adjustment profile work by Sonderegger and Barrett (2004), the aim of this study was to propose an organised structure for a number of single risk factors that have been linked to acculturative-stress in young migrants. In recognising that divergent situational characteristics (e.g., school level, gender, residential…
ERIC Educational Resources Information Center
Asberg, Kia K.; Bowers, Clint; Renk, Kimberly; McKinney, Cliff
2008-01-01
Today's society puts constant demands on the time and resources of all individuals, with the resulting stress promoting a decline in psychological adjustment. Emerging adults are not exempt from this experience, with an alarming number reporting excessive levels of stress and stress-related problems. As a result, the present study addresses the…
Barks, C.S.
1995-01-01
Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent, and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor the MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of
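The MAP-R-P idea, regressing observed values O on regional predictions Pu and using the fitted line as the adjusted estimator, can be sketched as follows. The storm loads are invented, and the report's actual procedure may involve transforms and bias corrections not shown here.

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def map_r_p(observed, predicted):
    """Return an adjusted predictor: regional-model predictions are
    calibrated against the local observations via regression."""
    a, b = ols_fit(predicted, observed)
    return lambda p_u: a + b * p_u

# Made-up storm loads where the regional model consistently overestimates:
obs  = [2.0, 3.5, 5.0, 8.0, 11.0]
pred = [4.1, 7.2, 9.8, 16.0, 22.3]

adjust = map_r_p(obs, pred)
print(round(adjust(10.0), 2))   # ≈ 4.97: a new prediction, pulled down toward O
```

Because the fitted slope is about 0.5, the adjusted estimator roughly halves the regional model's systematic overestimate, which is the kind of correction the report applies to the five overestimating models.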
Modelling total duration of traffic incidents including incident detection and recovery time.
Tavassoli Hojati, Ahmad; Ferreira, Luis; Washington, Simon; Charles, Phil; Shobeirinejad, Ameneh
2014-10-01
Traffic incidents are key contributors to non-recurrent congestion, potentially generating significant delay. Factors that influence the duration of incidents are important to understand so that effective mitigation strategies can be implemented. To identify and quantify the effects of influential factors, a methodology for studying total incident duration based on historical data from an 'integrated database' is proposed. Incident duration models are developed using a selected freeway segment in the Southeast Queensland network in Australia. The models include incident detection and recovery time as components of incident duration. A hazard-based duration modelling approach is applied to model incident duration as a function of a variety of factors that influence traffic incident duration. Parametric accelerated failure time survival models are developed to capture heterogeneity as a function of explanatory variables, with both fixed and random parameter specifications. The analysis reveals that factors affecting incident duration include incident characteristics (severity, type, injury, medical requirements, etc.), infrastructure characteristics (roadway shoulder availability), time of day, and traffic characteristics. The results indicate that event type durations are uniquely different, thus requiring different responses to effectively clear them. Furthermore, the results highlight the presence of unobserved incident duration heterogeneity as captured by the random parameter models, suggesting that additional factors need to be considered in future modelling efforts. PMID:24974360
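A parametric accelerated failure time model of the kind described can be sketched with a Weibull survival function in which a covariate (incident severity here) rescales the duration distribution; all coefficients below are invented for illustration, not the paper's estimates.

```python
import math

def weibull_aft_survival(t, x_severity, beta0=3.0, beta1=0.5, shape=1.2):
    """P(duration > t) under a Weibull AFT model: covariates shift the log
    of the scale parameter, 'accelerating' or decelerating clearance time."""
    scale = math.exp(beta0 + beta1 * x_severity)   # minutes, illustrative
    return math.exp(-(t / scale) ** shape)

# A severe incident (x=1) should outlast a minor one (x=0) at any horizon.
for t in (10, 30, 60):
    s_minor = weibull_aft_survival(t, 0)
    s_severe = weibull_aft_survival(t, 1)
    print(t, round(s_minor, 3), round(s_severe, 3))
```

In the fitted models above, random-parameter specifications would further let beta1 (and similar coefficients) vary across incidents, which is how the unobserved heterogeneity the authors report is captured.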
Mechanical Modeling of Foods Including Fracture and Simulation of Food Compression
NASA Astrophysics Data System (ADS)
Morimoto, Masamichi; Mizunuma, Hiroshi; Sonomura, Mitsuhiro; Kohyama, Kaoru; Ogoshi, Hiro
2008-07-01
The purposes of this research are to simulate the swallowing of foods, and to investigate the relationship between the rheological properties of foods and the swallowing. Here we proposed the mechanical modeling of foods, and simulated the compression test using the finite element method. A linear plasticity model was applied as the rheological model of the foods, and two types of computational elements were used to simulate the fracture behavior. The compression tests with a wedged plunger were simulated for tofu, banana, and biscuit, and were compared with the experimental results. Other than the homogeneous food model, the simulations were conducted for the multi-layer models. Reasonable agreements on the behaviors of compression and fracture were obtained between the simulations and the experiments including the reaction forces on the plunger.
Magnetofluid Simulations of the Global Solar Wind Including Pickup Ions and Turbulence Modeling
NASA Technical Reports Server (NTRS)
Goldstein, Melvyn L.; Usmanov, Arcadi V.; Matthaeus, William H.
2011-01-01
I will describe a three-dimensional magnetohydrodynamic model of the solar wind that takes into account turbulent heating of the wind by velocity and magnetic fluctuations as well as a variety of effects produced by interstellar pickup protons. In this report, the interstellar pickup protons are treated as one fluid and the protons and electrons are treated together as a second fluid. The model equations include a Reynolds decomposition of the plasma velocity and magnetic field into mean and fluctuating quantities, as well as energy transfer from interstellar pickup protons to solar wind protons that results in the deceleration of the solar wind. The model is used to simulate the global steady-state structure of the solar wind in the region from 0.3 to 100 AU. Where possible, the model is compared with Voyager data. Initial results from generalization to a three-fluid model are described elsewhere in this session.
A statistical model including age to predict passenger postures in the rear seats of automobiles.
Park, Jangwoon; Ebert, Sheila M; Reed, Matthew P; Hallman, Jason J
2016-06-01
Few statistical models of rear seat passenger posture have been published, and none has taken into account the effects of occupant age. This study developed new statistical models for predicting passenger postures in the rear seats of automobiles. Postures of 89 adults with a wide range of age and body size were measured in a laboratory mock-up in seven seat configurations. Posture-prediction models for female and male passengers were separately developed by stepwise regression using age, body dimensions, seat configurations and two-way interactions as potential predictors. Passenger posture was significantly associated with age and the effects of other two-way interaction variables depended on age. A set of posture-prediction models are presented for women and men, and the prediction results are compared with previously published models. This study is the first study of passenger posture to include a large cohort of older passengers and the first to report a significant effect of age for adults. The presented models can be used to position computational and physical human models for vehicle design and assessment. Practitioner Summary: The significant effects of age, body dimensions and seat configuration on rear seat passenger posture were identified. The models can be used to accurately position computational human models or crash test dummies for older passengers in known rear seat configurations. PMID:26328769
Markovits, Henry; Benenson, Joyce F; Kramer, Donald L
2003-01-01
This study examined internal representations of food sharing in 589 children and adolescents (8-19 years of age). Questionnaires, depicting a variety of contexts in which one person was asked to share a resource with another, were used to examine participants' expectations of food-sharing behavior. Factors that were varied included the value of the resource, the relation between the two depicted actors, the quality of this relation, and gender. Results indicate that internal models of food-sharing behavior showed systematic patterns of variation, demonstrating that individuals have complex contextually based internal models at all ages, including the youngest. Examination of developmental changes in use of individual patterns is consistent with the idea that internal models reflect age-specific patterns of interactions while undergoing a process of progressive consolidation. PMID:14669890
NASA Technical Reports Server (NTRS)
Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris
2000-01-01
The normal impedance of perforated plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes, with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. A fit of the incompressible model to the experimental database was performed using an optimization routine that found the optimal set of multiplicative coefficients for the non-dimensional groups, minimizing the least squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with bias flow. The fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted model and the unfitted model performed equally well for the higher percent open area (10% and 15%) samples.
Evaluating Modeled Variables Included in the NOAA Water Vapor Flux Tool
NASA Astrophysics Data System (ADS)
Darby, L. S.; White, A. B.; Coleman, T.
2015-12-01
The NOAA/ESRL/Physical Sciences Division has a Water Vapor Flux Tool showing observed and forecast meteorological variables related to heavy precipitation. Details about this tool will be presented in a companion paper by White et al. (2015, this conference). We evaluate 3-hr precipitation forecasts from four models (the HRRR, HRRRexp, RAP, and RAPexp) that were added to the tool in Dec. 2014. The Rapid Refresh (RAP) and the High-Resolution Rapid Refresh (HRRR) models are run operationally by NOAA, are initialized hourly, and produce forecasts out to 15 hours. The RAP and HRRR have experimental versions (RAPexp and HRRRexp, respectively) that are run near-real time at the NOAA/ESRL/Global Systems Division. Our analysis of eight rain days includes atmospheric river events in Dec. 2014 and Feb. 2015. We evaluate the forecasts using observations at two sites near the California coast - Bodega Bay (BBY, 15 m ASL) and Cazadero (CZC, 478 m ASL), and an inland site near Colfax, CA (CFC, 643 m ASL). Various criteria were used to evaluate the forecasts. (1) The Pielke criteria: we compare the RMSE and unbiased RMSE of the model output to the standard deviation of the observations, and we compare the standard deviation of the model output to the standard deviation of the observations; (2) we compare the modeled 24-hr precipitation to the observed 24-hr precipitation; and (3) we assess the correlation coefficient between the modeled and observed precipitation. Based on these criteria, the RAP slightly outperformed the other models. Only the RAP and the HRRRexp had forecasts that met the Pielke criteria. All of the models were able to predict the observed 24-hour precipitation, within 10%, in only 8-16% of their forecasts. All models achieved a correlation coefficient value above the 90th percentile in 12.5% of their forecasts. The station most likely to have a forecast that met any of the criteria was the inland mountain station CFC; the least likely was the coastal mountain
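The Pielke criteria used above compare model error statistics to the variability of the observations. A compact version is sketched below; the closeness tolerance on the standard deviations is an assumption, since the criteria as stated require σ_model ≈ σ_obs without fixing a threshold.

```python
import math

def pielke_check(obs, mod, tol=0.5):
    """Pielke-style skill check: RMSE and unbiased (bias-removed) RMSE
    must fall below the std dev of the observations, and the model's
    std dev must be close to the observations' (tolerance assumed)."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(mod) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    sm = math.sqrt(sum((m - mm) ** 2 for m in mod) / n)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(mod, obs)) / n)
    urmse = math.sqrt(sum(((m - mm) - (o - mo)) ** 2
                          for m, o in zip(mod, obs)) / n)
    return rmse < so and urmse < so and abs(sm - so) < tol * so

obs = [0.0, 1.0, 2.0, 3.0, 4.0]
print(pielke_check(obs, [0.1, 1.1, 1.9, 3.0, 4.2]))   # True: tracks the obs
print(pielke_check(obs, [2.0] * 5))                   # False: no variability
```

The second case shows why the criteria are stricter than RMSE alone: a flat forecast at the observed mean has an RMSE equal to the observed standard deviation and no variability of its own, so it carries no skill.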
MEMLS3&a: Microwave Emission Model of Layered Snowpacks adapted to include backscattering
NASA Astrophysics Data System (ADS)
Proksch, M.; Mätzler, C.; Wiesmann, A.; Lemmetyinen, J.; Schwank, M.; Löwe, H.; Schneebeli, M.
2015-08-01
The Microwave Emission Model of Layered Snowpacks (MEMLS) was originally developed for microwave emissions of snowpacks in the frequency range 5-100 GHz. It is based on six-flux theory to describe radiative transfer in snow including absorption, multiple volume scattering, radiation trapping due to internal reflection and a combination of coherent and incoherent superposition of reflections between horizontal layer interfaces. Here we introduce MEMLS3&a, an extension of MEMLS, which includes a backscatter model for active microwave remote sensing of snow. The reflectivity is decomposed into diffuse and specular components. Slight undulations of the snow surface are taken into account. The treatment of like- and cross-polarization is accomplished by an empirical splitting parameter q. MEMLS3&a (as well as MEMLS) is set up in a way that snow input parameters can be derived by objective measurement methods which avoid fitting procedures of the scattering efficiency of snow, required by several other models. For the validation of the model we have used a combination of active and passive measurements from the NoSREx (Nordic Snow Radar Experiment) campaign in Sodankylä, Finland. We find a reasonable agreement between the measurements and simulations, subject to uncertainties in hitherto unmeasured input parameters of the backscatter model. The model is written in Matlab and the code is publicly available for download through the following website: http://www.iapmw.unibe.ch/research/projects/snowtools/memls.html.
Diehl, S; Zambrano, J; Carlsson, B
2016-01-01
A reduced model of a completely stirred-tank bioreactor coupled to a settling tank with recycle is analyzed in its steady states. In the reactor, the concentrations of one dominant particulate biomass and one soluble substrate component are modelled. While the biomass decay rate is assumed to be constant, growth kinetics can depend on both substrate and biomass concentrations, and optionally model substrate inhibition. Compressive and hindered settling phenomena are included using the Bürger-Diehl settler model, which consists of a partial differential equation. Steady-state solutions of this partial differential equation are obtained from an ordinary differential equation, making steady-state analysis of the entire plant difficult. A key result showing that the ordinary differential equation can be replaced with an approximate algebraic equation simplifies model analysis. This algebraic equation takes the location of the sludge blanket during normal operation into account, allowing the limiting flux capacity caused by compressive settling to be included easily in the steady-state mass balance equations for the entire plant system. This approach yields more realistic solutions than other previously published reduced models, which rest on simpler settler assumptions. The steady-state concentrations, solids residence time, and the wastage flow ratio are functions of the recycle ratio. Solutions are shown for various growth kinetics, with different values of biomass decay rate, influent volumetric flow, and substrate concentration. PMID:26476681
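The steady-state reduction the abstract describes can be illustrated in the classic chemostat limit (no settler, no recycle): at steady state the specific growth rate balances dilution plus decay, giving an algebraic equation for the substrate concentration. The Monod parameters below are assumptions for illustration; the paper's settler coupling via the Bürger-Diehl model adds an algebraic flux-limit equation on top of this kind of balance.

```python
def monod_mu(S, mu_max=4.0, K_s=10.0):
    """Monod specific growth rate (1/d) as a function of substrate S."""
    return mu_max * S / (K_s + S)

def steady_state_substrate(D, b, mu_max=4.0, K_s=10.0):
    """Solve mu(S) = D + b algebraically for the steady-state substrate:
    S* = K_s * (D + b) / (mu_max - D - b)."""
    return K_s * (D + b) / (mu_max - (D + b))

D, b = 1.0, 0.2           # dilution rate and biomass decay rate (assumed, 1/d)
S_star = steady_state_substrate(D, b)
print(round(S_star, 3))
# The algebraic solution satisfies the growth/loss balance exactly:
print(round(monod_mu(S_star), 3), round(D + b, 3))
```

Replacing an ODE solve with a closed-form balance like this is what makes a whole-plant steady-state analysis tractable, which is the role the approximate algebraic settler equation plays in the full model.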
MEMLS3&a: Microwave Emission Model of Layered Snowpacks adapted to include backscattering
NASA Astrophysics Data System (ADS)
Proksch, M.; Mätzler, C.; Wiesmann, A.; Lemmetyinen, J.; Schwank, M.; Löwe, H.; Schneebeli, M.
2015-03-01
The Microwave Emission Model of Layered Snowpacks (MEMLS) was originally developed for microwave emissions of snowpacks in the frequency range 5-100 GHz. It is based on six-flux theory to describe radiative transfer in snow including absorption, multiple volume scattering, radiation trapping due to internal reflection and a combination of coherent and incoherent superposition of reflections between horizontal layer interfaces. Here we introduce MEMLS3&a, an extension of MEMLS, which includes a backscatter model for active microwave remote sensing of snow. The reflectivity is decomposed into diffuse and specular components. Slight undulations of the snow surface are taken into account. The treatment of like- and cross-polarization is accomplished by an empirical splitting parameter q. MEMLS3&a (as well as MEMLS) is set up in a way that snow input parameters can be derived by objective measurement methods, which avoids the fitting procedures for the scattering efficiency of snow required by several other models. For the validation of the model we have used a combination of active and passive measurements from the NoSREx (Nordic Snow Radar Experiment) campaign in Sodankylä, Finland. We find reasonable agreement between the measurements and simulations, subject to uncertainties in hitherto unmeasured input parameters of the backscatter model. The model is written in MATLAB and the code is publicly available for download through the following website: http://www.iapmw.unibe.ch/research/projects/snowtools/memls.html.
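As a hedged illustration of how an empirical splitting parameter q might partition backscatter between polarizations (the split rule below is an assumption for illustration only; consult the MEMLS3&a code for the actual formulation):

```python
def split_polarization(sigma_total, q):
    """Split a total backscatter coefficient into like- and cross-polarized
    parts with an empirical parameter q (assumed here: 0 <= q <= 0.5)."""
    if not 0.0 <= q <= 0.5:
        raise ValueError("q outside the assumed range")
    return (1.0 - q) * sigma_total, q * sigma_total
```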
The Effects of Including Non-Thermal Particles in Flare Loop Models
NASA Astrophysics Data System (ADS)
Reeves, K. K.; Winter, H. D.; Larson, N. L.
2012-05-01
In this work, we use HyLoop (Winter et al. 2011), a loop model that can incorporate the effects of both MHD and non-thermal particle populations, to simulate soft X-ray emissions in various situations. First, we test the effect of acceleration location on the emission in several XRT filters by simulating a series of post-flare loops with different injection points for the non-thermal particle beams. We use an injection distribution peaked at the loop apex to represent a direct acceleration model, and an injection distribution peaked at the footpoints to represent the Alfvén wave interaction model. We find that footpoint injection leads to several early peaks in the apex-to-footpoint emission ratio. Second, we model a loop with cusp-shaped geometry based on the eruption model developed by Lin & Forbes (2000) and Reeves & Forbes (2005a), and find that early in the flare, emission in the loop footpoints is much brighter in the XRT filters if non-thermal particles are included in the calculation. Finally, we employ a multi-loop flare model to simulate thermal emission and compare with a previous model where a semi-circular geometry was used (Reeves et al. 2007). We compare the Geostationary Operational Environmental Satellite (GOES) emission from the two models and find that the cusp-shaped geometry leads to a smaller GOES class flare.
Safe distance car-following model including backward-looking and its stability analysis
NASA Astrophysics Data System (ADS)
Yang, Da; Jin, Peter Jing; Pu, Yun; Ran, Bin
2013-03-01
The focus of this paper is car-following behavior that includes backward-looking, called here bi-directional looking car-following behavior. This study is motivated by the potential changes in the physical properties of traffic flow caused by the fast-developing intelligent transportation system (ITS), especially new connected vehicle technology. Existing studies on this topic have focused on general motors (GM) models and optimal velocity (OV) models. The safe-distance car-following model, Gipps' model, which is more widely used in practice, has not drawn much attention in the bi-directional looking context. This paper explores the properties of the bi-directional looking extension of Gipps' safe-distance model. The stability condition of the proposed model is derived using linear stability theory and is verified using numerical simulations. The impacts on traffic flow stability of the driver and vehicle characteristics that appear in the proposed model are also investigated. It is found that taking the backward-looking effect into account in car-following has three types of effect on traffic flow: stabilizing, destabilizing, and producing non-physical phenomena. This conclusion is more sophisticated than the results of studies based on the OV bi-directional looking car-following models. Moreover, drivers who have a smaller reaction time or a larger additional delay, and who assume the other vehicles have larger maximum decelerations, can stabilize traffic flow.
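A sketch of the forward-looking Gipps braking constraint, plus a hypothetical blend for a backward-looking term; the blend rule and weight p are illustrative assumptions, not the paper's formulation:

```python
import math

def gipps_safe_speed(gap, v_self, v_leader, tau, b, b_hat):
    """Speed the follower may adopt and still stop behind the leader,
    given reaction time tau, own braking rate b and the leader's
    estimated braking rate b_hat (all magnitudes, > 0)."""
    disc = b * b * tau * tau + b * (2.0 * gap - v_self * tau + v_leader ** 2 / b_hat)
    return -b * tau + math.sqrt(max(0.0, disc))

def bidirectional_speed(v_forward, v_backward, p):
    """Hypothetical bi-directional blend: weight p on the backward-looking
    speed constraint (purely illustrative)."""
    return (1.0 - p) * v_forward + p * v_backward
```

As expected, the safe speed grows with the available gap, which is the mechanism a stability analysis perturbs.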
Taber, L A; Shi, Y; Yang, L; Bayly, P V
2011-01-01
Much is known about the biophysical mechanisms involved in cell crawling, but how these processes are coordinated to produce directed motion is not well understood. Here, we propose a new hypothesis whereby local cytoskeletal contraction generates fluid flow through the lamellipodium, with the pressure at the front of the cell facilitating actin polymerization which pushes the leading edge forward. The contraction, in turn, is regulated by stress in the cytoskeleton. To test this hypothesis, finite element models for a crawling cell are presented. These models are based on nonlinear poroelasticity theory, modified to include the effects of active contraction and growth, which are regulated by mechanical feedback laws. Results from the models agree reasonably well with published experimental data for cell speed, actin flow, and cytoskeletal deformation in migrating fish epidermal keratocytes. The models also suggest that oscillations can occur for certain ranges of parameter values. PMID:21765817
A consistent model of electroweak data including Z → bb̄ and Z → cc̄
NASA Astrophysics Data System (ADS)
Agashe, K.; Graesser, M.; Hinchliffe, I.; Suzuki, M.
1996-02-01
We have performed an overall fit to the electroweak data with the generation-blind U(1) extension of the Standard Model. As input data for the fit we have included the asymmetry parameters, the partial decay widths of Z, neutrino scattering, and atomic parity violation. The QCD coupling αs has been constrained to the world average obtained from all data except the Z width. On the basis of our fit we have constructed a viable gauge model that not only explains Rb and Rc but also provides a much better overall fit to the data than the Standard Model. Despite its phenomenological viability, our model is unfortunately not simple from the theoretical viewpoint. Atomic parity violation experiments strongly disfavor more aesthetically appealing alternatives that can be grand unified.
Multifluid Simulations of the Global Solar Wind Including Pickup Ions and Turbulence Modeling
NASA Technical Reports Server (NTRS)
Goldstein, Melvyn L.; Usmanov, A. V.
2011-01-01
I will describe a three-dimensional magnetohydrodynamic model of the solar wind that takes into account turbulent heating of the wind by velocity and magnetic fluctuations as well as a variety of effects produced by interstellar pickup protons. The interstellar pickup protons are treated in the model as one fluid and the protons and electrons are treated together as a second fluid. The model equations include a Reynolds decomposition of the plasma velocity and magnetic field into mean and fluctuating quantities, as well as energy transfer from interstellar pickup protons to solar wind protons that results in the deceleration of the solar wind. The model is used to simulate the global steady-state structure of the solar wind in the region from 0.3 to 100 AU. The simulation assumes that the background magnetic field on the Sun is either a dipole (aligned or tilted with respect to the solar rotation axis) or one that is deduced from solar magnetograms.
Global Reference Atmospheric Models, Including Thermospheres, for Mars, Venus and Earth
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Justus, C. G.; Keller, Vernon W.
2006-01-01
This document consists of the viewgraph slides of the presentation. Marshall Space Flight Center's Natural Environments Branch has developed Global Reference Atmospheric Models (GRAMs) for Mars, Venus, Earth, and other solar system destinations. Mars-GRAM has been widely used for engineering applications including systems design, performance analysis, and operations planning for aerobraking, entry descent and landing, and aerocapture. Preliminary results are presented comparing Mars-GRAM with measurements from Mars Reconnaissance Orbiter (MRO) during its aerobraking in the Mars thermosphere. Venus-GRAM is based on the Committee on Space Research (COSPAR) Venus International Reference Atmosphere (VIRA), and is suitable for similar engineering applications in the thermosphere or other altitude regions of the atmosphere of Venus. Until recently, the thermosphere in Earth-GRAM has been represented by the Marshall Engineering Thermosphere (MET) model. Earth-GRAM has recently been revised. In addition to including an updated version of MET, it now includes an option to use the Naval Research Laboratory Mass Spectrometer Incoherent Scatter Radar Extended Model (NRLMSISE-00) as an alternate thermospheric model. Some characteristics and results from the Venus-GRAM and Earth-GRAM thermospheres are also presented.
NASA Astrophysics Data System (ADS)
Cotroneo, Vincenzo; Davis, William N.; Reid, Paul B.; Schwartz, Daniel A.; Trolier-McKinstry, Susan; Wilke, Rudeger H. T.
2011-09-01
The present generation of X-ray telescopes emphasizes either high image quality (e.g. Chandra with sub-arc-second resolution) or large effective area (e.g. XMM-Newton), while future observatories under consideration (e.g. Athena, AXSIO) aim to greatly enhance the effective area while maintaining moderate (~10 arc-second) image quality. To go beyond the limits of present and planned missions, the use of thin adjustable optics for the control of low-order figure error is needed to obtain the high image quality of precisely figured mirrors along with the large effective area of thin mirrors. The adjustable mirror prototypes under study at the Smithsonian Astrophysical Observatory are based on two different principles and designs: 1) thin-film lead-zirconate-titanate (PZT) piezoelectric actuators directly deposited on the mirror back surface, with the strain direction parallel to the glass surface (for sub-arc-second angular resolution and large effective area), and 2) conventional lead-magnesium-niobate (PMN) electrostrictive actuators with their strain direction perpendicular to the mirror surface (for 3-5 arc-second resolution and moderate effective area). We have built and operated flat test mirrors of these adjustable optics. We present the comparison between theoretical influence functions as obtained by finite element analysis and the measured influence functions obtained from the two test configurations.
Henkel, Marius; Schmidberger, Anke; Vogelbacher, Markus; Kühnert, Christian; Beuker, Janina; Bernard, Thomas; Schwartz, Thomas; Syldatk, Christoph; Hausmann, Rudolf
2014-08-01
The production of rhamnolipid biosurfactants by Pseudomonas aeruginosa is under complex control of a quorum sensing-dependent regulatory network. Due to a lack of understanding of the kinetics applicable to the process and relevant interrelations of variables, current processes for rhamnolipid production are based on heuristic approaches. To systematically establish a knowledge-based process for rhamnolipid production, a deeper understanding of the time-course and coupling of process variables is required. By combining reaction kinetics, stoichiometry, and experimental data, a process model for rhamnolipid production with P. aeruginosa PAO1 on sunflower oil was developed as a system of coupled ordinary differential equations (ODEs). In addition, cell density-based quorum sensing dynamics were included in the model. The model comprises a total of 36 parameters, 14 of which are yield coefficients and 7 of which are substrate affinity and inhibition constants. Of all 36 parameters, 30 were derived from dedicated experimental results, literature, and databases and 6 of them were used as fitting parameters. The model is able to describe data on biomass growth, substrates, and products obtained from a reference batch process and other validation scenarios. The model presented describes the time-course and interrelation of biomass, relevant substrates, and products on a process level while including a kinetic representation of cell density-dependent regulatory mechanisms. PMID:24770383
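The coupled-ODE structure the abstract describes can be sketched in a minimal form, reduced to two states (biomass X and one substrate S) with Monod kinetics and forward-Euler integration; all names and parameter values are illustrative assumptions, not the paper's 36-parameter model:

```python
def simulate_batch(x0, s0, mu_max, k_s, y_xs, dt, steps):
    """Forward-Euler integration of a minimal Monod batch model:
    dX/dt = mu(S) * X,   dS/dt = -mu(S) * X / Y_XS."""
    x, s = x0, s0
    for _ in range(steps):
        mu = mu_max * s / (k_s + s)   # specific growth rate
        dx = mu * x * dt              # biomass produced this step
        x, s = x + dx, max(0.0, s - dx / y_xs)
    return x, s
```

A stiff or quorum-sensing-coupled system like the paper's would normally use an adaptive ODE solver rather than fixed-step Euler; the sketch only shows how growth and substrate consumption are coupled through a yield coefficient.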
Modeling tether-ballast asteroid diversion systems, including tether mass and elasticity
NASA Astrophysics Data System (ADS)
French, David B.; Mazzoleni, Andre P.
2014-10-01
The risk of an impact between a large asteroid and the Earth has been significant enough to attract the attention of many researchers. This paper focuses on a mitigation technique that involves the use of a long tether and ballast mass to divert an asteroid. When such a tether is modeled as massless and inelastic, results show that the method may be viable for diverting asteroids away from a collision with the Earth; the next step towards demonstrating the viability of the approach is to conduct a study which uses a more realistic tether model. This paper presents such a study, in which the tether models include tether mass and elasticity. These models verify that a tether-ballast system is capable of diverting Earth-threatening asteroids. Detailed parametric studies are presented which illustrate how system performance depends on tether mass and elasticity. Also, case studies are presented which show how more realistic models can aid in the design of tether-ballast asteroid mitigation systems. Key findings include the dangers imposed by periods during which the tether goes slack and ways to preclude this.
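The slack-tether behavior highlighted in the findings can be sketched with a one-line elastic-tether force law, assuming linear elasticity and zero compressive force (a hypothetical simplification of the paper's tether model):

```python
def tether_tension(length, natural_length, stiffness):
    """Tension in an elastic tether that cannot push: proportional to
    stretch when taut, exactly zero when slack."""
    stretch = length - natural_length
    return stiffness * stretch if stretch > 0.0 else 0.0
```

The discontinuity in slope at zero stretch is what makes slack periods dynamically troublesome: the restoring force vanishes entirely until the tether is taut again.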
A structural model for the in vivo human cornea including collagen-swelling interaction.
Cheng, Xi; Petsche, Steven J; Pinsky, Peter M
2015-08-01
A structural model of the in vivo cornea, which accounts for tissue swelling behaviour, for the three-dimensional organization of stromal fibres and for collagen-swelling interaction, is proposed. Modelled as a binary electrolyte gel in thermodynamic equilibrium, the stromal electrostatic free energy is based on the mean-field approximation. To account for active endothelial ionic transport in the in vivo cornea, which modulates osmotic pressure and hydration, stromal mobile ions are shown to satisfy a modified Boltzmann distribution. The elasticity of the stromal collagen network is modelled based on three-dimensional collagen orientation probability distributions for every point in the stroma obtained by synthesizing X-ray diffraction data for azimuthal angle distributions and second harmonic-generated image processing for inclination angle distributions. The model is implemented in a finite-element framework and employed to predict free and confined swelling of stroma in an ionic bath. For the in vivo cornea, the model is used to predict corneal swelling due to increasing intraocular pressure (IOP) and is adapted to model swelling in Fuchs' corneal dystrophy. The biomechanical response of the in vivo cornea to a typical LASIK surgery for myopia is analysed, including tissue fluid pressure and swelling responses. The model provides a new interpretation of the corneal active hydration control (pump-leak) mechanism based on osmotic pressure modulation. The results also illustrate the structural necessity of fibre inclination in stabilizing the corneal refractive surface with respect to changes in tissue hydration and IOP. PMID:26156299
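For orientation, the ordinary (unmodified) Boltzmann distribution of mobile ions reads as sketched below; the paper's modified distribution, which accounts for active endothelial transport, differs from this textbook form:

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def boltzmann_concentration(c_bath, valence, potential_v, temperature_k):
    """Equilibrium ion concentration under a local electric potential:
    c = c_bath * exp(-z * F * phi / (R * T))."""
    return c_bath * math.exp(-valence * F * potential_v / (R * temperature_k))
```

At zero potential the bath concentration is recovered; cations are depleted and anions enriched where the potential is positive, which is the qualitative behavior a fixed-charge stromal gel exploits.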
Buckley, Lauren B; Waaser, Stephanie A; MacLean, Heidi J; Fox, Richard
2011-12-01
Thermal constraints on development are often invoked to predict insect distributions. These constraints tend to be characterized in species distribution models (SDMs) by calculating development time based on a constant lower development temperature (LDT). Here, we assessed whether species-specific estimates of LDT based on laboratory experiments can improve the ability of SDMs to predict the distribution shifts of six U.K. butterflies in response to recent climate warming. We find that species-specific and constant (5 degrees C) LDT degree-day models perform similarly at predicting distributions during the period of 1970-1982. However, when the models for the 1970-1982 period are projected to predict distributions in 1995-1999 and 2000-2004, species-specific LDT degree-day models modestly outperform constant LDT degree-day models. Our results suggest that, while including species-specific physiology in correlative models may enhance predictions of species' distribution responses to climate change, more detailed models may be needed to adequately account for interspecific physiological differences. PMID:22352161
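The degree-day calculation underlying these species distribution models can be sketched as follows (function names and thresholds are illustrative):

```python
def degree_days(daily_mean_temps, ldt):
    """Accumulate thermal units above the lower development temperature:
    each day contributes max(0, T_mean - LDT)."""
    return sum(max(0.0, t - ldt) for t in daily_mean_temps)

def development_complete(daily_mean_temps, ldt, required_dd):
    """Development finishes once accumulated degree-days meet the requirement."""
    return degree_days(daily_mean_temps, ldt) >= required_dd
```

The choice between a constant LDT (e.g. 5 degrees C) and a species-specific LDT only changes the threshold passed in; the accumulation rule is the same.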
Modeling of single char combustion, including CO oxidation in its boundary layer
Lee, C.H.; Longwell, J.P.; Sarofim, A.F.
1994-10-25
The combustion of a char particle can be divided into three stages: a transient phase, in which its temperature increases as it is heated by oxidation and by heat transfer from the surrounding gas; an approximately constant-temperature stage, in which gas-phase reaction is important and most of the carbon is consumed; and an extinction stage caused by carbon burnout. In this work, separate models were developed for the transient heating, where gas-phase reactions were unimportant, and for the steady-temperature stage, where gas-phase reactions were treated in detail. The transient char combustion model incorporates intrinsic char surface production of CO and CO2, internal pore diffusion, and external mass and heat transfer. The model provides useful information on particle ignition, the burning temperature profile, combustion time, and carbon consumption rate. A gas-phase reaction model incorporating the full set of 28 elementary C/H/O reactions was developed. This model calculated the gas-phase CO oxidation reaction in the boundary layer at particle temperatures of 1250 K and 2500 K by using the carbon consumption rate and the burning temperature at the pseudo-steady state calculated from the temperature profile model, but the transient heating was not included. This gas-phase model can predict the gas species and temperature distributions in the boundary layer, the CO2/CO ratio, and the location of CO oxidation. A mechanistic heat and mass transfer model was added to the temperature profile model to predict combustion behavior in a fluidized bed. These models were applied to data from the fluidized combustion of Newlands coal char particles. 52 refs., 60 figs.
Modifying a telerobotic system to include robotic operation by means of dynamic modeling
Corbett, G.K.; Jansen, J.F.; Kress, R.L.; Noakes, M.W.
1989-01-01
The goal of this study was to implement a robotic mode for the Advanced Servomanipulator (ASM), a six-degree-of-freedom master/slave teleoperator. To implement a robotic mode on a system designed for teleoperation, the effects of any change in the control schemes must be completely understood. One way to study the impact of potential modifications is to develop a model of the system. This approach is the one taken in this study. A detailed full-arm model was developed by first creating a model for individual joints of the manipulator and then incorporating each of the joint models into a single full-arm model, including link inertias and kinematic cross-coupling. Parameters were identified for each joint model to provide a match between simulated and actual responses to a pulse input. The full-arm model was tested by comparing the simulated and actual response of the ASM to simultaneous sine-wave inputs to each joint, using the model parameters identified on a joint-by-joint basis. The full-arm model was able to characterize effectively the ASM system response for the inputs studied. Robotic-mode control algorithms were tested on both the individual-joint and full-arm models. The results of these simulations indicate that a simplified master/slave control structure is the best candidate for robotic operation. This control structure was added to the ASM. Experimental results demonstrate that the ASM system is capable of repeatable robotic operation. The robotic-mode man-machine interface and data handling system are described in this paper. 12 refs., 3 figs.
Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
The effect of finite span on the jet-surface interaction noise source and on the shielding and reflection of jet mixing noise is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented, with particular attention to the role of surface span on far-field noise. These effects are then included in existing empirical models that have previously assumed all surfaces to be semi-infinite. This extended abstract briefly describes the experimental setup and data, leaving the empirical modeling aspects for the final paper.
Laminated core modeling under rotational excitations including eddy currents and hysteresis
Bottauscio, Oriano; Chiampi, Mario
2001-06-01
This article presents a numerical model for the electromagnetic analysis of hysteretic laminated cores under rotational excitations. The computational approach is based on the finite element solution of a two-dimensional magnetic field problem in a homogeneous structure, where the skin effect due to macroscopic eddy currents in the lamination depth is included through a generalized dynamic constitutive relationship between B and H. The proposed model, after validation, is applied to the analysis of a laminated disk, evaluating the effects of the supply frequency and distortion on power losses and B–H loops. © 2001 American Institute of Physics.
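Related background: the classical skin depth sets the length scale over which macroscopic eddy currents screen the field inside a lamination. A standard textbook formula (not taken from this article) can be sketched as:

```python
import math

def skin_depth(resistivity, rel_permeability, frequency):
    """Classical skin depth: delta = sqrt(rho / (pi * f * mu0 * mu_r)),
    with resistivity rho in ohm*m and frequency f in Hz."""
    mu0 = 4e-7 * math.pi  # vacuum permeability, H/m
    return math.sqrt(resistivity / (math.pi * frequency * mu0 * rel_permeability))
```

For copper at 50 Hz this gives roughly 9 mm; for steel laminations the high relative permeability shrinks the depth dramatically, which is why lamination thickness matters in such models.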
Shapiro, Y; Moran, D; Epstein, Y; Stroschein, L; Pandolf, K B
1995-05-01
Under outdoor conditions this model overestimated the sweat loss response in shaded (low solar radiation) environments and underestimated the response when solar radiation was high (open field areas). The present study was conducted in order to adjust the model to be applicable under outdoor environmental conditions. Four groups of fit, acclimated subjects participated in the study. They were exposed to three climatic conditions (30 degrees C, 65% rh; 31 degrees C, 40% rh; and 40 degrees C, 20% rh) and three levels of metabolic rate (100, 300 and 450 W) in shaded and sunny areas while wearing shorts, cotton fatigues (BDUs) or protective garments. The original predictive equation for sweat loss was adjusted for the outdoor conditions by separately evaluating the radiative heat exchange, short-wave absorption in the body and long-wave emission from the body to the atmosphere, and integrating them in the required evaporation component (Ereq) of the model, as follows: Hr = 1.5·SL^0.6/I(T) (watt); H1 = 0.047·Me.th/I(T) (watt), where SL is solar radiation (W·m-2), Me.th is the Stefan-Boltzmann constant, and I(T) is the effective clothing insulation coefficient. This adjustment revealed a high correlation between the measured and expected values of sweat loss (r = 0.99, p < 0.0001). PMID:7737107
Bongers, Mathilda L; de Ruysscher, Dirk; Oberije, Cary; Lambin, Philippe; Uyl-de Groot, Carin A; Coupé, V M H
2016-01-01
With the shift toward individualized treatment, cost-effectiveness models need to incorporate patient and tumor characteristics that may be relevant to treatment planning. In this study, we used multistate statistical modeling to inform a microsimulation model for cost-effectiveness analysis of individualized radiotherapy in lung cancer. The model tracks clinical events over time and takes patient and tumor features into account. Four clinical states were included in the model: alive without progression, local recurrence, metastasis, and death. Individual patients were simulated by repeatedly sampling a patient profile, consisting of patient and tumor characteristics. The transitioning of patients between the health states is governed by personalized time-dependent hazard rates, which were obtained from multistate statistical modeling (MSSM). The model simulations for both the individualized and conventional radiotherapy strategies demonstrated internal and external validity. Therefore, MSSM is a useful technique for obtaining the correlated individualized transition rates that are required for the quantification of a microsimulation model. Moreover, we have used the hazard ratios, their 95% confidence intervals, and their covariance to quantify the parameter uncertainty of the model in a correlated way. The obtained model will be used to evaluate the cost-effectiveness of individualized radiotherapy treatment planning, including the uncertainty of input parameters. We discuss the model-building process and the strengths and weaknesses of using MSSM in a microsimulation model for individualized radiotherapy in lung cancer. PMID:25732723
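A single microsimulation step of the kind described, drawing the next transition among competing health states, can be sketched as follows, assuming constant hazards for simplicity (the paper's rates are time-dependent and personalized):

```python
import random

def sample_transition(hazards, rng):
    """Competing-risks draw with constant hazards (events per unit time):
    sample an exponential waiting time per destination state and take the
    earliest; returns (waiting_time, destination_state)."""
    draws = {state: rng.expovariate(h) for state, h in hazards.items() if h > 0.0}
    state = min(draws, key=draws.get)
    return draws[state], state
```

Repeating such draws for each simulated patient profile, with hazards looked up from the multistate model, produces the individual event histories the cost-effectiveness analysis aggregates.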
NASA Astrophysics Data System (ADS)
Gasheva, L. M.; Kalinkova, G.; Minkov, E.; Krestev, V.
1984-03-01
Employing IR spectroscopy, some technological models of amoxicillin trihydrate included in ethyl-, methyl-, carboxymethyl- and methylhydroxyethyl-cellulose have been studied. Interactions were established only between amoxicillin trihydrate and ethylcellulose. The IR absorption spectra suggest that the antibiotic is H-bonded to hydroxyl groups of the ethylcellulose molecule. The observed IR spectral differences are not due to polymorphic transformation, as proved by X-ray powder diffraction.
A full model for simulation of electrochemical cells including complex behavior
NASA Astrophysics Data System (ADS)
Esperilla, J. J.; Félez, J.; Romero, G.; Carretero, A.
This communication presents a model of electrochemical cells developed in order to simulate their electrical, chemical and thermal behavior, showing the differences when thermal effects are or are not considered in the charge-discharge process. The work presented here has been applied to the particular case of the Pb,PbSO4|H2SO4(aq)|PbO2,Pb cell, which forms the basis of the lead-acid batteries so widely used in the automotive industry and as traction batteries in electric or hybrid vehicles. Each half-cell is considered independently in the model. For each half-cell, in addition to the main electrode reaction, a secondary reaction is considered: the hydrogen evolution reaction in the negative electrode and the oxygen evolution reaction in the positive. The equilibrium potential is calculated with the Nernst equation, in which the activity coefficients are fitted to an exponential function using experimental data. On the other hand, the two main mechanisms that produce the overpotential are considered, that is, the activation (charge transfer) and diffusion mechanisms. First, an isothermal model was studied in order to show the behavior of the main phenomena. A more complex model including thermal behavior was also studied. This model is very useful in the case of traction batteries in electric and hybrid vehicles, where high current intensities appear. Some simulation results are also presented in order to show the accuracy of the proposed models.
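The Nernst-equation step can be sketched as follows, using activities directly (unit activity coefficients) for simplicity; the paper instead fits activity coefficients to an exponential function of experimental data:

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst_potential(e0, temperature_k, activities_ox, activities_red, n):
    """Equilibrium electrode potential:
    E = E0 - (R*T)/(n*F) * ln( prod(a_red) / prod(a_ox) )."""
    q = math.prod(activities_red) / math.prod(activities_ox)
    return e0 - (R * temperature_k) / (n * F) * math.log(q)
```

With all activities equal to one the standard potential E0 is recovered; enriching the reduced species lowers the potential, as the sign convention requires.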
Producing high-accuracy lattice models from protein atomic coordinates including side chains.
Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M
2012-01-01
Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models. PMID:22934109
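The coordinate-RMSD criterion mentioned above can be sketched as follows (this assumes the two structures are already superposed; no lattice fitting or alignment is performed):

```python
import math

def coord_rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of
    (x, y, z) coordinates."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

LatFit's distance-RMSD variant instead compares intramolecular distance matrices, which avoids the superposition step entirely.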
RELAP5-3D Code Includes ATHENA Features and Models
Richard A. Riemke; Cliff B. Davis; Richard R. Schultz
2006-07-01
Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.
Double-gate junctionless transistor model including short-channel effects
NASA Astrophysics Data System (ADS)
Paz, B. C.; Ávila-Herrera, F.; Cerdeira, A.; Pavanello, M. A.
2015-05-01
This work presents a physically based model for double-gate junctionless transistors (JLTs), continuous in all operation regimes. To describe short-channel transistors, short-channel effects (SCEs), such as the increase of the channel potential due to drain bias, carrier velocity saturation, and mobility degradation due to vertical and longitudinal electric fields, are included in a previous model developed for long-channel double-gate JLTs. To validate the model, an analysis is made using three-dimensional numerical simulations performed in the Sentaurus Device simulator from Synopsys. Different doping concentrations, channel widths, and channel lengths are considered in this work. In addition, the influence of series resistance is included numerically and validated for a wide range of source and drain extensions. To check that the SCEs are appropriately described, besides drain current, transconductance, and output conductance characteristics, the following parameters are analyzed to demonstrate the good agreement between model and simulation and the occurrence of SCEs in this technology: threshold voltage (VTH), subthreshold slope (S), and drain-induced barrier lowering (DIBL).
Model of accelerating voltage pulse in DARHT-2 including Metglas saturation
NASA Astrophysics Data System (ADS)
Genoni, Thomas; Hughes, Thomas; Thoma, Carsten
2003-10-01
The DARHT-2 facility (Los Alamos National Laboratory) accelerates a 2 microsecond electron beam using a series of inductive accelerating cells. The cell inductance is provided by large Metglas cores, which are driven by a pulse-forming network (PFN). We have developed a model for this circuit which includes the nonlinear and spatially varying behavior of the Metglas. Data from experiments in which a capacitor was discharged through a Metglas core are used to develop a hysteresis model, based on the Hodgdon [Ref.1] theory of ferromagnetic materials. The resulting model is used in calculations of the output of the DARHT PFN, and comparison is made to experiments in which the PFN was terminated in a dummy resistive load. 1. M. L. Hodgdon, "Mathematical Theory and Calculations of Magnetic Hysteresis Curves", IEEE Trans. Magn., v. MAG-24, n. 6, pp. 3120-2, Nov. 1988.
SPheno 3.1: extensions including flavour, CP-phases and models beyond the MSSM
NASA Astrophysics Data System (ADS)
Porod, W.; Staub, F.
2012-11-01
We describe recent extensions of the program SPheno including flavour aspects, CP-phases, R-parity violation and low energy observables. In the case of flavour mixing, all masses of supersymmetric particles are calculated including the complete flavour structure and all possible CP-phases at the 1-loop level. We give details on implemented seesaw models, low energy observables and the corresponding extension of the SUSY Les Houches Accord. Moreover, we comment on the possibilities to include MSSM extensions in SPheno. Catalogue identifier: ADRV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRV_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154062 No. of bytes in distributed program, including test data, etc.: 1336037 Distribution format: tar.gz Programming language: Fortran95. Computer: PC running under Linux, should run in every Unix environment. Operating system: Linux, Unix. Classification: 11.6. Catalogue identifier of previous version: ADRV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 153 (2003) 275 Does the new version supersede the previous version?: Yes Nature of problem: The first issue is the determination of the masses and couplings of supersymmetric particles in various supersymmetric models, the R-parity conserved MSSM with generation mixing and including CP-violating phases, various seesaw extensions of the MSSM and the MSSM with bilinear R-parity breaking. Low energy data on Standard Model fermion masses, gauge couplings and electroweak gauge boson masses serve as constraints. Radiative corrections from supersymmetric particles to these inputs must be calculated. Theoretical constraints on the soft SUSY breaking parameters from a high scale theory are imposed and the parameters at the electroweak scale are obtained from the
Dimer linkage structure in retroviruses: models that include both duplex and quadruplex domains.
Zarudnaya, M I; Kolomiets, I M; Potyahaylo, A L; Hovorun, D M
2005-01-01
The genome of all known retroviruses consists of two identical RNA molecules, which are non-covalently linked. The most stable contact site between the two RNA molecules is located near their 5' ends. The molecular interactions in the dimer linkage structure (DLS) in mature virions are currently unknown. Recently we suggested that the dimer linkage structure in human immunodeficiency virus 1 (HIV-1) contains both duplex and quadruplex domains and proposed a model of the DLS in HIV-1Mal (a Central African virus). In this paper we show that similar models can also be built for HIV-1Lai, a representative of the North American and European viruses. One of the double-stranded domains in the model structures represents either an extended duplex formed by different pathways (through base-pair melting and subsequent reannealing, or by a recombination mechanism) or a kissing-loop complex. The quadruplexes contain both G- and mixed tetrads, for example, G.C.G.C or A.U.A.U. Phylogenetic analysis of 350 isolates from the NCBI database showed that similar models of the DLS are predictable for practically all HIV-1 isolates surveyed. A model of the dimer linkage structure in Moloney murine sarcoma virus (MuSV) is also presented. The structure includes a duplex formed by the palindromic sequences and several quadruplexes. PMID:16335231
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Douglass, A. R.
1986-01-01
Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations ranging from the present influx of chlorine-containing compounds to several times that influx is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but allowing only the cases in which the calculated present-atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
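The uncertainty-propagation scheme described above, sampling uncertain inputs, running the model, and optionally retaining only runs consistent with observations, has a simple generic shape. The toy response function and lognormal spreads below are illustrative placeholders, not the study's photochemical model.

```python
import random
import statistics

def monte_carlo(model, samplers, n_runs, accept=lambda params, out: True):
    """Draw each input from its distribution, run the model, and collect
    outputs; `accept` mimics filtering runs by agreement with measurements."""
    outputs = []
    for _ in range(n_runs):
        params = {name: draw() for name, draw in samplers.items()}
        out = model(params)
        if accept(params, out):
            outputs.append(out)
    return outputs

# Toy example: an "ozone change" as a nonlinear function of two uncertain rates.
random.seed(1)
samplers = {"k1": lambda: random.lognormvariate(0.0, 0.3),
            "k2": lambda: random.lognormvariate(0.0, 0.3)}
model = lambda p: -5.0 * p["k1"] / (1.0 + p["k2"])
runs = monte_carlo(model, samplers, 300)
mean_change = statistics.mean(runs)
```

The resulting list of outputs is exactly the empirical probability distribution the abstract refers to; its mean can differ from the output of the mean-parameter run whenever the model is nonlinear.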
Multiple tail models including inverse measures for structural design under uncertainties
NASA Astrophysics Data System (ADS)
Ramu, Palaniappan
Sampling-based reliability estimation with expensive computer models may be computationally prohibitive due to the large number of required simulations. One way to alleviate the computational expense is to extrapolate reliability estimates from observed levels to unobserved levels. Classical tail modeling techniques provide a class of models to enable this extrapolation using asymptotic theory, by approximating the tail region of the cumulative distribution function (CDF). This work proposes three alternate tail extrapolation techniques including inverse measures that can complement classical tail modeling. The proposed approach, multiple tail models, applies the two classical and three alternate extrapolation techniques simultaneously to estimate inverse measures at the extrapolation regions and uses the median as the best estimate. It is observed that the range of the five estimates can be used as a good approximation of the error associated with the median estimate. Accuracy and computational efficiency are competing factors in selecting sample size; yet, as our numerical studies reveal, the accuracy sacrificed to reduce computational cost is very small in the proposed method. The method is demonstrated on standard statistical distributions and complex engineering examples.
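The combination rule described above (several extrapolations, median as best estimate, spread as an error proxy) can be sketched generically. The numbers below are hypothetical inverse-measure estimates, and the extrapolation techniques themselves are not implemented here.

```python
import statistics

def combine_tail_estimates(estimates):
    """Combine several tail-extrapolation estimates of an inverse measure:
    the median is taken as the best estimate, and the range of the
    individual estimates serves as a rough error indicator."""
    best = statistics.median(estimates)
    spread = max(estimates) - min(estimates)
    return best, spread

# Hypothetical estimates from five extrapolation techniques:
vals = [1.82, 1.95, 2.01, 2.10, 2.45]
best, spread = combine_tail_estimates(vals)
```

The median is robust to a single badly behaved extrapolation, which is the rationale for running the techniques simultaneously rather than picking one a priori.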
Venetsanos, A G; Bartzis, J G; Würtz, J; Papailiou, D D
2003-04-25
A two-dimensional shallow layer model has been developed to predict dense gas dispersion under realistic conditions, including complex features such as two-phase releases, obstacles and inclined ground. The model attempts to predict the time and space evolution of the cloud formed after a release of a two-phase pollutant into the atmosphere. The air-pollutant mixture is assumed ideal. The cloud evolution is described mathematically through the Cartesian, two-dimensional, shallow layer conservation equations for mixture mass, mixture momentum in two horizontal directions, total pollutant mass fraction (vapor and liquid) and mixture internal energy. Liquid mass fraction is obtained assuming phase equilibrium. The conservation equations account for liquid slip and eventual liquid rainout to the ground. Entrainment of ambient air is modeled via an entrainment velocity model, which takes into account the effects of ground friction, ground heat transfer and relative motion between cloud and surrounding atmosphere. The model additionally accounts for thin-obstacle effects in three ways. First, a stepwise description of the obstacle is generated, following the grid cell faces and taking into account the corresponding area blockage. Then, obstacle drag on the passing cloud is modeled by adding flow resistance terms in the momentum equations. Finally, the effect of extra vorticity generation and entrainment enhancement behind obstacles is modeled by adding locally into the obstacle-free entrainment formula a characteristic velocity scale defined from the obstacle pressure drop and the local cloud height. The present model predictions have been compared against theoretical results for constant-volume and constant-flux gravity currents. It was found that deviations of the predicted change of cloud footprint area with time from the theoretical results were acceptably small, if one models the frictional forces between cloud and ambient air, neglecting the Richardson
An extended gene protein/products boolean network model including post-transcriptional regulation
2014-01-01
Background Network biology allows the study of complex interactions between biological systems using formal, well-structured, and computationally friendly models. Several different network models can be created, depending on the type of interactions that need to be investigated. Gene Regulatory Networks (GRN) are an effective model commonly used to study the complex regulatory mechanisms of a cell. Unfortunately, given their intrinsic complexity and non-discrete nature, the computational study of realistic-sized complex GRNs requires some abstractions. Boolean Networks (BNs), for example, are a reliable model that can be used to represent networks where the possible state of a node is a boolean value (0 or 1). Despite this strong simplification, BNs have been used to study both structural and dynamic properties of real as well as randomly generated GRNs. Results In this paper we show how it is possible to include the post-transcriptional regulation mechanism (a key process mediated by small non-coding RNA molecules like the miRNAs) in the BN model of a GRN. The enhanced BN model is implemented in a software toolkit (EBNT) that allows boolean GRNs to be analyzed from both a structural and a dynamic point of view. The open-source toolkit is compatible with available visualization tools like Cytoscape and supports detailed analysis of the network topology as well as of its attractors, trajectories, and state space. In the paper, a small GRN built around the mTOR gene is used to demonstrate the main capabilities of the toolkit. Conclusions The extended model proposed in this paper opens new opportunities in the study of gene regulation. Several of the successful studies conducted with the support of BNs to understand high-level characteristics of regulatory networks can now be improved to better understand the role of post-transcriptional regulation, for example as a network-wide noise-reduction or stabilization mechanism. PMID:25080304
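A minimal illustration of the idea (a hypothetical three-node network, not the EBNT toolkit itself): a miRNA node can be modeled as a Boolean variable that, when active, overrides the transcriptional activation of its target, capturing post-transcriptional repression inside an otherwise ordinary synchronous Boolean network.

```python
def step(state):
    """Synchronous update of a toy Boolean GRN with one miRNA node.
    state = (tf, mirna, target); all values are 0/1."""
    tf, mirna, target = state
    new_tf = tf                       # treated as a constant external input
    new_mirna = tf                    # the TF also transcribes the miRNA
    new_target = tf and not mirna     # post-transcriptional repression wins
    return (new_tf, int(new_mirna), int(new_target))

def trajectory(state, steps):
    """Iterate the update rule and record the visited states."""
    out = [state]
    for _ in range(steps):
        state = step(state)
        out.append(state)
    return out
```

Starting from an active TF with the miRNA off, the target switches on for one step and is then silenced once the miRNA accumulates, a pulse-like behavior of incoherent feed-forward motifs that a purely transcriptional BN without the miRNA node cannot express.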
Boullata, Joseph I; Holcombe, Beverly; Sacks, Gordon; Gervasio, Jane; Adams, Stephen C; Christensen, Michael; Durfee, Sharon; Ayers, Phil; Marshall, Neil; Guenter, Peggi
2016-08-01
Parenteral nutrition (PN) is a high-alert medication with a complex drug use process. Key steps in the process include the review of each PN prescription followed by the preparation of the formulation. The preparation step includes compounding the PN or activating a standardized commercially available PN product. The verification and review, as well as preparation of this complex therapy, require competency that may be determined by using a standardized process for pharmacists and for pharmacy technicians involved with PN. An American Society for Parenteral and Enteral Nutrition (ASPEN) standardized model for PN order review and PN preparation competencies is proposed based on a competency framework, the ASPEN-published interdisciplinary core competencies, safe practice recommendations, and clinical guidelines, and is intended for institutions and agencies to use with their staff. PMID:27317615
NASA Technical Reports Server (NTRS)
Free, April M.; Flowers, George T.; Trent, Victor S.
1993-01-01
Auxiliary bearings are a critical feature of any magnetic bearing system. They protect the soft iron core of the magnetic bearing during an overload or failure. An auxiliary bearing typically consists of a rolling element bearing or bushing with a clearance gap between the rotor and the inner race of the support. The dynamics of such systems can be quite complex. It is desired to develop a rotordynamic model and assess the dynamic behavior of a magnetic bearing rotor system which includes the effects of auxiliary bearings. Of particular interest are the effects of introducing sideloading into such a system during failure of the magnetic bearing. A model is developed from an experimental test facility, and a number of simulation studies are performed. These results are presented and discussed.
A model for Huanglongbing spread between citrus plants including delay times and human intervention
NASA Astrophysics Data System (ADS)
Vilamiu, Raphael G. d'A.; Ternes, Sonia; Braga, Guilherme A.; Laranjeira, Francisco F.
2012-09-01
The objective of this work was to present a compartmental deterministic mathematical model representing the dynamics of HLB disease in a citrus orchard, including a delay for the disease's incubation phase in the plants and a delay for the nymphal stage of Diaphorina citri, the most important HLB insect vector in Brazil. Numerical simulations were performed to assess the possible impacts of human efficiency in detecting symptomatic plants, as well as the influence of a long incubation period of HLB in the plant.
NASA Astrophysics Data System (ADS)
Printsypar, G.; Iliev, O.; Rief, S.
2011-12-01
Paper production is a challenging problem which attracts the attention of many scientists. The process of interest here takes place in the pressing section of a paper machine. The paper layer is dried by pressing it against fabrics, i.e. press felts. The paper-felt sandwich is transported through the press nips at high speed (for more details see [3]). Since the natural drainage of water in the felts is much slower than the drying in the pressing section, we include the dynamic capillary effect in the model. The dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray (see [2]) is adopted for the pressing process. Another issue taken into account while modeling the pressing section is the appearance of fully saturated regions. We consider two flow regimes: one-phase water flow and two-phase air-water flow. This leads to a free boundary problem. We also account for the complexity of the paper-felt sandwich porous structure. Apart from the two flow regimes, the computational domain is divided by layers into nonoverlapping subdomains. Then, the system of equations describing transport processes in the pressing section is stated taking into account all these features. The presented model is discretized by the finite volume method. We carry out numerical experiments for different configurations of the pressing section (roll press, shoe press) and for parameters which are typical for the paper-felt sandwich during the paper production process. The experiments show that the dynamic capillary effect has a significant influence on the distribution of pressure even for small values of the material coefficient (see Fig. 1). The obtained results are in agreement with the laboratory experiment performed in [1], which states that the distribution of the pressure is not symmetric, with the maximum value occurring in front of the center of the pressing nip and the minimum value less than entry
DEVELOPMENT OF A PRODUCT MODEL FOR CUT-AND-COVER TUNNELS INCLUDING DEGRADATIONS
NASA Astrophysics Data System (ADS)
Aruga, Takashi; Yabuki, Nobuyoshi; Arai, Yasushi
Cut-and-cover tunnels are constructed on site. The various environmental conditions and construction techniques have a significant influence on the quality of the tunnel. It is extremely difficult to rebuild the tunnel if a structural defect is found once the construction is completed. Thus, suitable maintenance is needed to ensure the tunnel remains in a healthy condition. To execute better maintenance, the information on the design and construction of the tunnel is vital for inspection of degradation, estimation of its causes, and planning of repair or refurbishing works. In this paper, we develop a product model for representing cut-and-cover tunnels, including degradations, for effective information use in maintenance work. As a first step, we investigated the characteristics of cut-and-cover tunnels and the degradation of reinforced concrete members and developed a conceptual model. Then, we implemented the conceptual product model by extending the Industry Foundation Classes (IFC). Finally, we verified the product model by applying it to a simple tunnel.
A multiscale model for glioma spread including cell-tissue interactions and proliferation.
Engwer, Christian; Knappitsch, Markus; Surulescu, Christina
2016-04-01
Glioma is a broad class of brain and spinal cord tumors arising from glia cells, which are the main brain cells that can develop into neoplasms. They are highly invasive and lead to irregular tumor margins which are not precisely identifiable by medical imaging, thus rendering a sufficiently precise resection very difficult. The understanding of glioma spread patterns is hence essential for both radiological therapy and surgical treatment. In this paper we propose a multiscale model for glioma growth including interactions of the cells with the underlying tissue network, along with proliferative effects. Our current accounting for two subpopulations of cells, to accommodate proliferation according to the go-or-grow dichotomy, is an extension of the setting in [16]. As in that paper, we assume that cancer cells use neuronal fiber tracts as invasive pathways. Hence, the individual structure of brain tissue seems to be decisive for the tumor spread. Diffusion tensor imaging (DTI) is able to provide such information, thus opening the way for patient-specific modeling of glioma invasion. Starting from a multiscale model involving subcellular (microscopic) and individual (mesoscale) cell dynamics, we perform a parabolic scaling to obtain an approximating reaction-diffusion-transport equation on the macroscale of the tumor cell population. Numerical simulations based on DTI data are carried out in order to assess the performance of our modeling approach. PMID:27105989
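The macroscopic limit mentioned in the abstract is a reaction-diffusion(-transport) equation. A minimal 1D explicit finite-difference sketch (isotropic diffusion and logistic proliferation only; no DTI anisotropy, no transport term, all parameters illustrative) looks like:

```python
import numpy as np

def rd_step(u, dt, dx, d_coef, rho):
    """One explicit Euler step of u_t = D u_xx + rho * u * (1 - u),
    with zero-flux (reflecting) boundaries.
    Stability requires dt <= dx**2 / (2 * d_coef)."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2        # reflecting boundary, left
    lap[-1] = (u[-2] - u[-1]) / dx**2     # reflecting boundary, right
    return u + dt * (d_coef * lap + rho * u * (1.0 - u))
```

A localized initial cell density then spreads as a traveling invasion front while saturating toward the carrying capacity, the qualitative behavior the macroscale equation is meant to capture; patient-specific models replace the scalar diffusion coefficient with a DTI-derived tensor field.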
Studies of Arctic stratospheric ozone in a 2-D model including some effects of zonal asymmetries
Isaksen, I.S.A.; Rognerud, B.; Stordal, F.; Coffey, M.T.; Mankin, W.G.
1990-03-01
A two-dimensional (2-D) zonally averaged chemistry-transport model of the stratosphere has been extended to include some zonally asymmetric effects to study the chemically disturbed conditions in the Arctic winter during the occurrence of polar stratospheric clouds (PSCs). The model allows air parcels that have been in PSCs in the polar night to be exposed to sunlight during the passage south through a wave trough. Large enhancements of ClO are estimated, as well as significant ozone reductions, most pronounced around the 20 km height level. The ozone depletions maximize in late March, about one month after the cessation of PSC activity in the model, and amount to 5-8% in column ozone at 70°N. In agreement with column measurements made from the DC-8, the model estimates an increase in the columns of HNO3 and ClONO2, and a decrease in the HCl column within the polar vortex.
NASA Astrophysics Data System (ADS)
Van Geel, P. J.; Roy, S. D.
2002-09-01
A residual non-aqueous phase liquid (NAPL) present in the vadose zone can act as a contaminant source for many years as the compounds of concern partition to infiltrating groundwater and air contained in the soil voids. Current pressure-saturation-relative permeability relationships do not include a residual NAPL saturation term in their formulation. This paper presents the results of a series of two- and three-phase pressure cell experiments conducted to evaluate the residual NAPL saturation and its impact on the pressure-saturation relationship. A model was proposed to incorporate a residual NAPL saturation term into an existing hysteretic three-phase parametric model developed by Parker and Lenhard [Water Resour. Res. 23(12) (1987) 2187], Lenhard and Parker [Water Resour. Res. 23(12) (1987) 2197] and Lenhard [J. Contam. Hydrol. 9 (1992) 243]. The experimental results indicated that the magnitude of the residual NAPL saturation was a function of the maximum total liquid saturation reached and the water saturation. The proposed model to incorporate a residual NAPL saturation term is similar in form to the entrapment model proposed by Parker and Lenhard, which was based on an expression presented by Land [Soc. Pet. Eng. J. (June 1968) 149].
A Mercury orientation model including non-zero obliquity and librations
NASA Astrophysics Data System (ADS)
Margot, Jean-Luc
2009-12-01
Planetary orientation models describe the orientation of the spin axis and prime meridian of planets in inertial space as a function of time. The models are required for the planning and execution of Earth-based or space-based observational work, e.g. to compute viewing geometries and to tie observations to planetary coordinate systems. The current orientation model for Mercury is inadequate because it uses an obsolete spin orientation, neglects oscillations in the spin rate called longitude librations, and relies on a prime meridian that no longer reflects its intended dynamical significance. These effects result in positional errors on the surface of ~1.5 km in latitude and up to several km in longitude, about two orders of magnitude larger than the finest image resolution currently attainable. Here we present an updated orientation model which incorporates modern values of the spin orientation, includes a formulation for longitude librations, and restores the dynamical significance to the prime meridian. We also use modern values of the orbit normal, spin axis orientation, and precession rates to quantify an important relationship between the obliquity and moment of inertia differences.
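Orientation models of this kind are conventionally written as a prime-meridian angle that grows linearly with time plus periodic libration terms. The sketch below shows only that functional form; every numeric constant is an illustrative placeholder, not a fitted value from this paper.

```python
import math

def prime_meridian_deg(days, w0=329.55, rate=6.1385,
                       lib_amp_deg=0.010,
                       mean_anomaly0_deg=174.79, anomaly_rate=4.0923):
    """Prime-meridian angle W(t) = W0 + rate*d + A*sin(M(d)), where d is
    time in days past epoch and M is Mercury's mean anomaly.
    All default constants are illustrative placeholders only."""
    m = math.radians(mean_anomaly0_deg + anomaly_rate * days)
    return (w0 + rate * days + lib_amp_deg * math.sin(m)) % 360.0
```

The libration term is what the obsolete model neglected: it modulates the uniformly rotating meridian by a small periodic angle, which at Mercury's radius corresponds to surface position shifts of order kilometers.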
Habitability of super-Earth planets around other suns: models including Red Giant Branch evolution.
von Bloh, W; Cuntz, M; Schröder, K-P; Bounama, C; Franck, S
2009-01-01
The unexpected diversity of exoplanets includes a growing number of super-Earth planets, i.e., exoplanets with masses of up to several Earth masses and a chemical and mineralogical composition similar to that of Earth. We present a thermal evolution model for a 10 Earth-mass planet orbiting a star like the Sun. Our model is based on the integrated system approach, which describes the photosynthetic biomass production and takes into account a variety of climatological, biogeochemical, and geodynamical processes. This allows us to identify a so-called photosynthesis-sustaining habitable zone (pHZ), as determined by the limits of biological productivity on the planetary surface. Our model considers solar evolution during the main-sequence stage and along the Red Giant Branch as described by the most recent solar model. We obtain a large set of solutions consistent with the principal possibility of life. The highest likelihood of habitability is found for "water worlds." Only mass-rich water worlds are able to realize pHZ-type habitability beyond the stellar main sequence on the Red Giant Branch. PMID:19630504
A laboratory model of the aortic root flow including the coronary arteries
NASA Astrophysics Data System (ADS)
Querzoli, Giorgio; Fortini, Stefania; Espa, Stefania; Melchionna, Simone
2016-08-01
Cardiovascular flows have been extensively investigated by means of in vitro models to assess prosthetic valve performance and to provide insight into the fluid dynamics of the heart and proximal aorta. In particular, models for the study of the flow past the aortic valve have been continuously improved by including, among other things, the compliance of the vessel and more realistic geometries. The flow within the sinuses of Valsalva is known to play a fundamental role in the dynamics of the aortic valve, since they host a recirculation region that interacts with the leaflets. The coronary arteries originate from the ostia located within two of the three sinuses, and their presence may significantly affect the fluid dynamics of the aortic root. In spite of their importance, to the extent of the authors' knowledge, coronary arteries have not so far been included in in vitro models of the transvalvular aortic flow. We present a pulse duplicator consisting of a passively pulsing ventricle, a compliant proximal aorta, and coronary arteries connected to the sinuses of Valsalva. The coronary flow is modulated by a self-regulating device mimicking the physiological mechanism, which is based on the contraction and relaxation of the heart muscle during the cardiac cycle. Results show that the model reproduces the coronary flow satisfactorily. The analysis of the time evolution of the velocity and vorticity fields within the aortic root reveals the main characteristics of the backflow generated through the aorta in order to feed the coronaries during diastole. Experiments without coronary flow have been run for comparison. Interestingly, the lifetime of the vortex forming in the sinus of Valsalva during systole is reduced by the presence of the coronaries. As a matter of fact, at the end of systole, that vortex is washed out because of the suction generated by the coronary flow. Correspondingly, the valve closure is delayed and faster compared to the case with
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308
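As a stand-in for the grammar-based folding that TORNADO generalizes, the classic Nussinov recursion (maximize base-pair count) shows the shared dynamic-programming skeleton. This is a deliberately simplistic toy, far simpler than any nearest-neighbor SCFG the paper evaluates, and is not TORNADO's algorithm.

```python
def nussinov(seq, min_loop=3):
    """Maximum base-pair count via the Nussinov dynamic program.
    dp[i][j] = max pairs in seq[i..j]; hairpin loops must enclose at
    least `min_loop` unpaired bases."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):           # widen subsequences
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # case: j unpaired
            for k in range(i, j - min_loop):      # case: k pairs with j
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

SCFG parsers (CYK/inside) fill the same triangular table but accumulate log-probabilities or energies over richer production rules; the paper's point is that the scoring scheme, probabilistic, thermodynamic, or discriminative, can be swapped while the parsing machinery stays fixed.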
Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS
NASA Astrophysics Data System (ADS)
Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.
2010-12-01
With 4 million ha currently grown for ethanol in Brazil alone, approximately half the global bioethanol production in 2005 (Smeets 2008), and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Indeed, ethanol made from biomass is currently the most widespread option for alternative transportation fuels. It was originally promoted as a carbon-neutral energy resource that could bring energy independence to countries and local opportunities to farmers, until attention was drawn to its environmental and socio-economic drawbacks. It is still not clear to what extent it is a solution to climate change or a contributor to it. Dynamic Global Vegetation Models can help address these issues and quantify the potential impacts of biofuels on ecosystems at scales ranging from on-site to global. The global agro-ecosystem model ORCHIDEE describes water, carbon and energy exchanges at the soil-atmosphere interface for a limited number of natural and agricultural vegetation types. In order to integrate agricultural management into the simulations and to capture more accurately the specificity of crop phenology, ORCHIDEE has been coupled with the agronomical model STICS. The resulting crop-oriented vegetation model ORCHIDEE-STICS has been used so far to simulate temperate crops such as wheat, corn and soybean. As a generic ecosystem model, each grid cell can include several vegetation types with their own phenology and management practices, making the model suitable for spatial simulations. Here, ORCHIDEE-STICS is extended to include sugar cane as a new agricultural Plant Functional Type, implemented and parametrized using the STICS approach. An on-site calibration and validation is then performed based on biomass and flux chamber measurements at several sites in Australia, and variables such as LAI, dry weight, heat fluxes and respiration are used to evaluate the ability of the model to simulate the specific
McKinney, Cliff; Renk, Kimberly
2008-06-01
Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample for this study consisted of 151 male and 324 female late adolescents, who reported on their mothers' and fathers' parenting style, their family environment, their mothers' and fathers' expectations for them, the conflict that they experience with their mothers and fathers, and their own adjustment. Overall, the variables had significant relationships with one another. Further, the male-father, male-mother, and female-father structural equation models that were examined suggested that parenting style has an indirect relationship with late adolescents' adjustment through characteristics of the family environment and the conflict that is experienced in families; such findings were not evident for the female-mother model. Thus, the examination of parent-late adolescent interactions should occur in the context of the gender of parents and their late adolescents. PMID:17710537
Ratios as a size adjustment in morphometrics.
Albrecht, G H; Gelvin, B R; Hartman, S E
1993-08-01
Simple ratios in which a measurement variable is divided by a size variable are commonly used but known to be inadequate for eliminating size correlations from morphometric data. Deficiencies in the simple ratio can be alleviated by incorporating regression coefficients describing the bivariate relationship between the measurement and size variables. Recommendations have included: 1) subtracting the regression intercept to force the bivariate relationship through the origin (intercept-adjusted ratios); 2) exponentiating either the measurement or the size variable using an allometry coefficient to achieve linearity (allometrically adjusted ratios); or 3) both subtracting the intercept and exponentiating (fully adjusted ratios). These three strategies for deriving size-adjusted ratios imply different data models for describing the bivariate relationship between the measurement and size variables (i.e., the linear, simple allometric, and full allometric models, respectively). Algebraic rearrangement of the equation associated with each data model leads to a correctly formulated adjusted ratio whose expected value is constant (i.e., size correlation is eliminated). Alternatively, simple algebra can be used to derive an expected value function for assessing whether any proposed ratio formula is effective in eliminating size correlations. Some published ratio adjustments were incorrectly formulated as indicated by expected values that remain a function of size after ratio transformation. Regression coefficients incorporated into adjusted ratios must be estimated using least-squares regression of the measurement variable on the size variable. Use of parameters estimated by any other regression technique (e.g., major axis or reduced major axis) results in residual correlations between size and the adjusted measurement variable. Correctly formulated adjusted ratios, whose parameters are estimated by least-squares methods, do control for size correlations. The size-adjusted
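The intercept-adjusted ratio described above can be sketched on synthetic data (the numbers below are invented for illustration, not the paper's): when the bivariate relation has a nonzero intercept, the simple ratio Y/X remains a function of size, while (Y − a)/X, with a estimated by least-squares regression of Y on X, has a constant expected value.

```python
import numpy as np

# Synthetic example of an intercept-adjusted ratio (illustrative data).
X = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # "size" variable
Y = 2.0 + 3.0 * X                           # linear model: intercept a=2, slope b=3

simple = Y / X                   # simple ratio: still decreases with size X
b, a = np.polyfit(X, Y, 1)       # least-squares slope and intercept (degree 1)
adjusted = (Y - a) / X           # intercept-adjusted ratio: expected value is b

print(simple)     # varies with X
print(adjusted)   # constant, equal to the slope
```

With noisy data the adjusted ratio would scatter around b but be uncorrelated with X, which is the criterion the abstract's expected-value function checks.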
Lee, H.J.; Syvitski, J.P.M.; Parker, G.; Orange, Daniel L.; Locat, J.; Hutton, E.W.H.; Imran, J.
2002-01-01
Migrating sediment waves have been reported in a variety of marine settings, including submarine levee-fan systems, floors of fjords, and other basin or continental slope environments. Examination of such wave fields reveals nine diagnostic characteristics. When these characteristics are applied to several features previously attributed to submarine landslide deformation, they suggest that the features should most likely be reinterpreted as migrating sediment-wave fields. Sites that have been reinterpreted include the 'Humboldt slide' on the Eel River margin in northern California, the continental slope in the Gulf of Cadiz, the continental shelf off the Malaspina Glacier in the Gulf of Alaska, and the Adriatic shelf. A reassessment of all four features strongly suggests that numerous turbidity currents, separated by intervals of ambient hemipelagic sedimentation, deposited the wave fields over thousands of years. A numerical model of hyperpycnal discharge from the Eel River, for example, shows that under certain alongshore-current conditions, such events can produce turbidity currents that flow across the 'Humboldt slide', serving as the mechanism for the development of migrating sediment waves. Numerical experiments also demonstrate that where a series of turbidity currents flows across a rough seafloor (i.e. numerical steps), sediment waves can form and migrate upslope. Hemipelagic sedimentation between turbidity current events further facilitates the upslope migration of the sediment waves. Physical modelling of turbidity currents also confirms the formation and migration of seafloor bedforms. The morphologies of sediment waves generated both numerically and physically in the laboratory bear a strong resemblance to those observed in the field, including those that were previously described as submarine landslides.
Jet Noise Modeling for Coannular Nozzles Including the Effects of Chevrons
NASA Technical Reports Server (NTRS)
Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.
2003-01-01
Development of good predictive models for jet noise has always been plagued by the difficulty of obtaining good quality data over a wide range of conditions in different facilities. We consider such issues very carefully in selecting data to be used in developing our model. Flight effects are of critical importance, and none of the means of determining them are without significant problems. Free-jet flight simulation facilities are very useful, and can provide meaningful data so long as they can be analytically transformed to the flight frame of reference. In this report we show that different methodologies used by NASA and industry to perform this transformation produce very different results, especially in the rear quadrant; this compels us to rely largely on static data to develop our model, but we show reasonable agreement with simulated flight data when these transformation issues are considered. A persistent problem in obtaining good quality data is noise generated in the experimental facility upstream of the test nozzle: valves, elbows, obstructions, and especially the combustor can contribute significant noise, and much of this noise is of a broadband nature, easily confused with jet noise. Muffling of these sources is costly in terms of size as well as expense, and it is particularly difficult in flight simulation facilities, where compactness of hardware is very important, as discussed by Viswanathan (Ref. 13). We feel that the effects of jet density on jet mixing noise may have been somewhat obscured by these problems, leading to the variable density exponent used in most jet noise prediction procedures, including our own. We investigate this issue, applying Occam's razor (e.g., Ref. 14), in a search for the simplest physically meaningful model that adequately describes the observed phenomena. In a similar vein, we see no reason to reject the Lighthill approach; it provides a very solid basis upon which to build a predictive procedure, as we believe we
Boundary element modeling of earthquake site effects including the complete incident wavefield
NASA Astrophysics Data System (ADS)
Kim, Kyoung-Tae
Numerical modeling of earthquake site effects in realistic, three-dimensional structures, including high frequencies, low surface velocities and surface topography, has not been possible simply because the amount of computer memory constrains the number of grid points available. In principle, this problem is reduced in the Boundary Element Method (BEM) since only the surface of the velocity discontinuity is discretized; wave propagation both inside and outside this boundary is computed analytically. Equivalent body forces are determined on the boundary by solving a matrix equation containing frequency-domain displacement and stress Green's functions from every point on the boundary to every other point. This matrix problem has imposed a practical limit on the size or maximum frequency of previous BEM models. Although the matrix can be quite large, it also seems to be fairly sparse. We have used iterative matrix algorithms of the PETSc package and direct solution algorithms of the ScaLAPACK on the massively parallel supercomputers at Cornell, San Diego and Michigan. Preconditioning has been applied using blockwise ILU decomposition for the iterative approach or LU decomposition for the direct approach. The matrix equation is solved using the GMRES method for the iterative approach and a tri-diagonal solver for the direct approach. Previous BEM applications typically have assumed a single, incident plane wave. However, it is clear that for more realistic ground motion simulations, we need to consider the complete incident wavefield. If we assume that the basin or three-dimensional structure of interest is embedded in a surrounding plane-layered medium, we may use the propagator matrix method to solve for the displacements and stresses at depth on the boundary. This is done in the frequency domain with integration over wavenumber so that all P, S, mode conversions, reverberations and surface waves are included. The Boundary Element Method succeeds in modeling
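The preconditioned iterative solve described above can be sketched schematically. This is not the author's BEM code: the matrix below is a made-up, diagonally dominant sparse complex system standing in for the frequency-domain boundary-element matrix, solved with GMRES and an incomplete-LU preconditioner in the spirit of the PETSc workflow described.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative stand-in for the frequency-domain BEM matrix equation Ax = b:
# a sparse complex tridiagonal system, not an actual BEM discretization.
n = 200
A = sp.diags(
    [np.full(n - 1, -1.0 + 0j), np.full(n, 4.0 + 1j), np.full(n - 1, -1.0 + 0j)],
    offsets=[-1, 0, 1], format="csc")
b = np.ones(n, dtype=complex)

ilu = spla.spilu(A)                                   # ILU factorization (preconditioner)
M = spla.LinearOperator((n, n), ilu.solve, dtype=complex)
x, info = spla.gmres(A, b, M=M, atol=1e-8)            # preconditioned GMRES

assert info == 0                                      # 0 means converged
print(np.linalg.norm(A @ x - b))                      # residual norm, near zero
```

For a well-conditioned system like this the ILU preconditioner makes GMRES converge in a handful of iterations; the abstract's point is that the dense-looking BEM matrix is sparse enough for the same machinery to pay off at scale.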
Expanded rock blast modeling capabilities of DMC_BLAST, including buffer blasting
Preece, D.S.; Tidman, J.P.; Chung, S.H.
1996-12-31
A discrete element computer program named DMC_BLAST (Distinct Motion Code) has been under development since 1987 for modeling rock blasting. This program employs explicit time integration and uses spherical or cylindrical elements that are represented as circles in 2-D. DMC_BLAST calculations compare favorably with data from actual bench blasts. The blast modeling capabilities of DMC_BLAST have been expanded to include independently dipping geologic layers, top surface, bottom surface and pit floor. The pit can also now be defined using coordinates based on the toe of the bench. A method for modeling decked explosives has been developed which allows accurate treatment of the inert materials (stemming) in the explosive column and approximate treatment of different explosives in the same blasthole. A DMC_BLAST user can specify decking through a specific geologic layer with either inert material or a different explosive. Another new feature of DMC_BLAST is specification of an uplift angle, which is the angle between the normal to the blasthole and a vector defining the direction of explosive loading on particles adjacent to the blasthole. A buffer (choke) blast capability has been added for situations where previously blasted material is adjacent to the free face of the bench, preventing any significant lateral motion during the blast.
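A minimal sketch of the explicit time integration over circular elements described above: two particles with a linear repulsive contact spring when they overlap, stepped forward explicitly. All parameter values are invented for illustration and are not DMC_BLAST's.

```python
import numpy as np

# Two circular elements on a line with an initial overlap; a linear contact
# spring acts while they overlap, integrated with an explicit scheme.
# Illustrative parameters, not from DMC_BLAST.
radius, k, mass, dt = 0.5, 1.0e4, 1.0, 1.0e-4
x = np.array([0.0, 0.9])        # centers: overlap of 0.1 (< 2*radius apart)
v = np.array([0.0, 0.0])

for _ in range(2000):
    gap = (x[1] - x[0]) - 2 * radius
    f = -k * gap if gap < 0 else 0.0    # repulsive force only while in contact
    a = np.array([-f, f]) / mass        # equal and opposite accelerations
    v += a * dt                          # explicit velocity update
    x += v * dt                          # explicit position update

print(x[1] - x[0])   # particles have been pushed apart past 2*radius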
Dynamic modelling and analysis of multi-machine power systems including wind farms
NASA Astrophysics Data System (ADS)
Tabesh, Ahmadreza
2005-11-01
This thesis introduces a small-signal dynamic model, based on a frequency-response approach, for the analysis of a multi-machine power system with special focus on an induction-machine-based wind farm. The proposed approach is an alternative to the conventional eigenvalue analysis method, which is widely employed for small-signal dynamic analyses of power systems. The proposed modelling approach is successfully applied and evaluated for a power system that includes (i) multiple synchronous generators, and (ii) a wind farm based on either fixed-speed, variable-speed, or doubly-fed induction-machine-based wind energy conversion units. The salient features of the proposed method, as compared with the conventional eigenvalue analysis method, are: (i) computational efficiency, since the proposed method utilizes the open-loop transfer-function matrix of the system; (ii) performance indices that are obtainable from frequency-response data and quantitatively describe the dynamic behavior of the system; and (iii) the capability to formulate various wind energy conversion units within a wind farm in modular form. The developed small-signal dynamic model is applied to a set of multi-machine study systems and the results are validated by comparison (i) with digital time-domain simulation results obtained from the PSCAD/EMTDC software tool, and (ii) where applicable, with eigenvalue analysis results.
Wang, Y. T.; Xu, L. X.; Gui, Y. X.
2010-10-15
In this paper, we investigate the integrated Sachs-Wolfe (ISW) effect in the quintessence cold dark matter model with constant equation of state and constant speed of sound in the dark energy rest frame, including dark energy perturbation and its anisotropic stress. Comparing with the ΛCDM model, we find that the ISW power spectra are affected by the different background evolutions and by the dark energy perturbation. As we change the speed of sound from 1 to 0 in the quintessence cold dark matter model with given state parameters, we find that the inclusion of dark energy anisotropic stress makes the variation of the magnitude of the ISW source uncertain, due to the anticorrelation between the speed of sound and the ratio of the dark energy density perturbation contrast to the dark matter density perturbation contrast in the ISW source term. Thus, the magnitude of the ISW source term is governed by the competition between the variation of (1 + (3/2)ĉ_s²) and that of δ_de/δ_m as ĉ_s² changes.
Analytical model for radiative transfer including the effects of a rough material interface.
Giddings, Thomas E; Kellems, Anthony R
2016-08-20
The reflected and transmitted radiance due to a source located above a water surface is computed based on models for radiative transfer in continuous optical media separated by a discontinuous air-water interface with random surface roughness. The air-water interface is described as the superposition of random, unresolved roughness on a deterministic realization of a stochastic wave surface at resolved scales. Under the geometric optics assumption, the bidirectional reflection and transmission functions for the air-water interface are approximated by applying regular perturbation methods to Snell's law and including the effects of a random surface roughness component. Formal analytical solutions to the radiative transfer problem under the small-angle scattering approximation account for the effects of scattering and absorption as light propagates through the atmosphere and water and also capture the diffusive effects due to the interaction of light with the rough material interface that separates the two optical media. Results of the analytical models are validated against Monte Carlo simulations, and the approximation to the bidirectional reflection function is also compared to another well-known analytical model. PMID:27556978
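The geometric-optics ingredient of the interface model above can be sketched with Snell's law at a flat air-water interface, assuming refractive indices n1 = 1.0 and n2 = 1.33. In the paper, roughness enters through perturbations of the local surface normal; here that is represented only as a small tilt added to the incidence angle, purely for illustration.

```python
import math

# Snell's law at an air-water interface (n1=1.0, n2=1.33 assumed).
# A surface-roughness perturbation of the local normal is mimicked here by
# a small tilt of the incidence angle; this is an illustration, not the
# paper's perturbation expansion of the bidirectional transmission function.
def transmitted_angle(theta_i, tilt=0.0, n1=1.0, n2=1.33):
    s = n1 * math.sin(theta_i + tilt) / n2
    return math.asin(s)   # air-to-water: no total internal reflection possible

base = transmitted_angle(math.radians(30.0))
perturbed = transmitted_angle(math.radians(30.0), tilt=math.radians(1.0))
print(math.degrees(base), math.degrees(perturbed))  # refracted toward the normal
```

Averaging such perturbed rays over a random tilt distribution is, in spirit, how a rough interface diffuses the transmitted radiance.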
A model of force balance in Jupiter's magnetodisc including hot plasma pressure anisotropy
NASA Astrophysics Data System (ADS)
Nichols, J. D.; Achilleos, N.; Cowley, S. W. H.
2015-12-01
We present an iterative vector potential model of force balance in Jupiter's magnetodisc that includes the effects of hot plasma pressure anisotropy. The fiducial model produces results that are consistent with Galileo magnetic field and plasma data over the whole radial range of the model. The hot plasma pressure gradient and centrifugal forces dominate in the regions inward of ~20 R_J and outward of ~50 R_J, respectively, while for realistic values of the pressure anisotropy, the anisotropy current is either the dominant component or at least comparable with the hot plasma pressure gradient current in the region in between. With the inclusion of hot plasma pressure anisotropy, the ~1.2° and ~2.7° shifts in the latitudes of the main oval and Ganymede footprint, respectively, associated with variations over the observed range of the hot plasma parameter K_h, which is the product of hot pressure and unit flux tube volume, are comparable to the shifts observed in auroral images. However, the middle magnetosphere is susceptible to the firehose instability, with peak equatorial values of β_h∥e − β_h⊥e ≃ 1–2 for K_h = 2.0–2.5 × 10⁷ Pa m T⁻¹. For larger values of K_h, β_h∥e − β_h⊥e exceeds 2 near ~25 R_J and the model does not converge. This suggests that small-scale plasmoid release or "drizzle" of iogenic plasma may often occur in the middle magnetosphere, thus forming a significant mode of plasma mass loss, alongside plasmoids, at Jupiter.
Groundwater recharge in a hard rock aquifer: A conceptual model including surface-loading effects
NASA Astrophysics Data System (ADS)
Rodhe, Allan; Bockgård, Niclas
2006-11-01
The groundwater level in a fractured rock aquifer in Sweden was found to respond quickly to rainfall, although the bedrock was covered by 10-m-thick till soil. A considerable portion of the response was caused by surface loading, i.e., by the weight increase of the soil due to the addition of water from precipitation, whereas the rest reflected recharge. The hypothesis that the bedrock aquifer was recharged by vertical flow from groundwater in the overlying soil was tested with a simple recharge model, in which the bedrock-groundwater levels were simulated from the soil-groundwater and estimated surface-loading variation. The model had three parameters: the ratio between the equivalent vertical hydraulic conductivity governing the recharge and the storage coefficient of the bedrock reservoir, the recession coefficient for the bedrock-groundwater level, and the bedrock-groundwater level at which the outflow ceases. The model could be reasonably well calibrated and validated to head observations in one of two boreholes. The fit to the seasonal variation was similar when calibrating the model with or without surface loading, but surface loading had to be included to properly simulate individual recharge events. The relative temporal variation in the fluxes could be determined by the calibration. The variation in the recharge was from -10% to +25% in relation to the mean flux. The variation in the discharge was only ±1%. By applying a storage coefficient of the reservoir of 5 × 10⁻⁴, the simulated mean recharge was about 20 mm yr⁻¹. The results support the hypothesis that the bedrock-groundwater at the site is fed by local recharge from the overlying soil aquifer.
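One plausible minimal reading of the three-parameter reservoir model described above, with the bedrock head driven by the soil-groundwater head through a recharge coefficient and drained above a cessation level. The parameter values, forcing, and time-stepping below are invented for illustration and are not the paper's calibrated values.

```python
# Linear-reservoir sketch of the recharge model (illustrative parameters):
#   C     - vertical conductivity over storage coefficient (recharge rate)
#   alpha - recession coefficient of the bedrock reservoir
#   h0    - bedrock head at which outflow ceases
def simulate_bedrock_head(h_soil_series, h0=10.0, C=0.05, alpha=0.02, dt=1.0):
    h = h0
    out = []
    for h_soil in h_soil_series:
        recharge = C * max(h_soil - h, 0.0)    # vertical inflow from soil aquifer
        discharge = alpha * max(h - h0, 0.0)   # ceases when h drops to h0
        h += (recharge - discharge) * dt
        out.append(h)
    return out

heads = simulate_bedrock_head([12.0] * 200)    # constant soil head of 12 m
print(heads[-1])   # settles between h0 and the soil head (C*12 + alpha*10)/(C + alpha)
```

With a steady soil head the bedrock head relaxes to the weighted balance between recharge and recession, which is the seasonal behavior the calibration in the abstract exploits; the surface-loading term would be added as a separate, instantaneous head response.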
Modelling and control of a microgrid including photovoltaic and wind generation
NASA Astrophysics Data System (ADS)
Hussain, Mohammed Touseef
The extensive increase of distributed generation (DG) penetration and the existence of multiple DG units at the distribution level have introduced the notion of the microgrid. This thesis develops a detailed non-linear and small-signal dynamic model of a microgrid that includes PV, wind and conventional small-scale generation along with their power-electronic interfaces and filters. The models developed evaluate the generation mix from the various DGs required for satisfactory steady-state operation of the microgrid. In order to understand the interaction of the DGs in the microgrid system, two simpler configurations were considered initially. The first consists of a microalternator, PV and their electronics, and the second consists of a microalternator and a wind system, each connected to the power system grid. Nonlinear and linear state-space models of each microgrid are developed. Small-signal analysis showed that large participation of PV/wind can drive the microgrid to the brink of the unstable region without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of generation mix of a composite microgrid consisting of wind, PV and conventional generation was investigated next. The findings for the smaller systems were verified through nonlinear and small-signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance the microgrid operation. The potential of various control inputs to provide additional damping to the system has been evaluated through decomposition techniques. The signals identified to have damping content were employed to design the supervisory control system. The controller gains were tuned through an optimal pole-placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and PV inverter phase angle were the best inputs for enhanced stability boundaries.
Conversations with God: Prayer and Bargaining in Adjustment to Disability
ERIC Educational Resources Information Center
Rodriguez, Valerie J.; Glover-Graf, Noreen M.; Blanco, E. Lisette
2013-01-01
The role of religiosity and spirituality in the process of adjustment to disability is of increasing interest to rehabilitation professionals. Beginning with the Kubler-Ross models of grief and adjustment to disability and terminal illness, a number of stage models have included spiritual and religious interactions as a part of the adjustment…
NASA Astrophysics Data System (ADS)
Gabrielle, B.; Gagnaire, N.; Massad, R.; Prieur, V.; Python, Y.
2012-04-01
The potential greenhouse gas (GHG) savings resulting from the displacement of fossil energy sources by bioenergy mostly hinge on the uncertainty in the magnitude of nitrous oxide (N2O) emissions from arable soils occurring during feedstock production. These emissions are broadly related to fertilizer nitrogen input rates, but largely controlled by soil and climate factors, which makes their estimation highly uncertain. Here, we set out to improve estimates of N2O emissions from bioenergy feedstocks by using ecosystem models and measurements and modeling of atmospheric N2O in the greater Paris (France) area. Ground fluxes were measured in two locations to assess the effect of soil type and management, crop type (including lignocellulosics such as triticale, switchgrass and miscanthus), and climate on N2O emission rates and dynamics. High-resolution maps of N2O emissions were generated over the Ile-de-France region (around Paris) with two ecosystem models using geographical databases on soils, weather data, land-use and crop management. The models were tested against ground flux measurements and the emission maps were fed into the atmospheric chemistry-transport model CHIMERE. The maps were tested by comparing the CHIMERE simulations with time series of N2O concentrations measured at various heights above the ground in two locations in 2007. The emissions of N2O, as integrated over the region, were used in a life-cycle assessment of representative biofuel pathways: bioethanol from wheat and sugar-beet (1st generation), and miscanthus (2nd generation chain); bio-diesel from oilseed rape. Effects related to direct and indirect land-use changes (in particular on soil carbon stocks) were also included in the assessment based on various land-use scenarios and literature references. The potential deployment of miscanthus was simulated by assuming it would be grown on the current sugar-beet growing area in Ile-de-France, or by converting land currently under permanent fallow
Ziadinov, I.; Mathis, A.; Trachsel, D.; Rysmukhambetova, A.; Abdyjaparov, T. A.; Kuttubaev, O. T.; Deplazes, P.; Torgerson, P. R.
2008-01-01
Echinococcosis is a major emerging zoonosis in central Asia. A cross-sectional study of dogs in four villages in rural Kyrgyzstan was undertaken to investigate the epidemiology and transmission of Echinococcus spp. A total of 466 dogs were examined by arecoline purgation for the presence of Echinococcus granulosus and Echinococcus multilocularis. In addition, a faecal sample from each dog was examined for taeniid eggs. Any taeniid eggs found were investigated using PCR techniques (multiplex and single target PCR) to improve the diagnostic sensitivity by confirming the presence of Echinococcus spp. and to identify E. granulosus strains. A total of 83 (18%) dogs had either E. granulosus adults in purge material and/or E. granulosus eggs in their faeces as confirmed by PCR. Three genotypes of E. granulosus: G1, G4 and the G6/7 complex were shown to be present in these dogs through subsequent sequence analysis. Purge analysis combined with PCR identified 50 dogs that were infected with adult E. multilocularis and/or had E. multilocularis eggs in their faeces (11%). Bayesian techniques were employed to estimate the true prevalence, the diagnostic sensitivity and specificity of the procedures used and the transmission parameters. The sensitivity of arecoline purgation for the detection of echinococcosis in dogs was rather low, with a value of 38% (credible intervals (CIs) 27–50%) for E. granulosus and 21% (CIs 11–34%) for E. multilocularis. The specificity of arecoline purgation was assumed to be 100%. The sensitivity of coproscopy followed by PCR of the isolated eggs was calculated as 78% (CIs 57–87%) for E. granulosus and 50% (CIs 29–72%) for E. multilocularis, with specificity of 93% (CIs 88–96%) and 100% (CIs 97–100%), respectively. The 93% specificity of the coprological-PCR for E. granulosus could suggest coprophagia rather than true infections. After adjusting for the sensitivity of the diagnostic procedures, the estimated true prevalence of infection of
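The abstract's Bayesian adjustment for imperfect test sensitivity has a simple closed-form cousin, the Rogan-Gladen estimator, sketched below using the abstract's apparent prevalence (18%) and purgation sensitivity (38%) for E. granulosus. This is only an illustration of the principle; the study's actual Bayesian estimates with credible intervals differ.

```python
# Rogan-Gladen correction of apparent prevalence for test sensitivity and
# specificity. Inputs below echo the abstract's figures (18% apparent
# prevalence, 38% sensitivity, 100% assumed specificity for purgation);
# this is a frequentist illustration, not the paper's Bayesian estimate.
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """True prevalence implied by an imperfect diagnostic test."""
    return (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)

true_prev = rogan_gladen(0.18, 0.38, 1.00)
print(true_prev)   # apparent 18% inflates to roughly 47% once the ~38% sensitivity is accounted for
```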
The importance of including imperfect detection models in eDNA experimental design.
Willoughby, Janna R; Wijayawardena, Bhagya K; Sundaram, Mekala; Swihart, Robert K; DeWoody, J Andrew
2016-07-01
Environmental DNA (eDNA) is DNA that has been isolated from field samples, and it is increasingly used to infer the presence or absence of particular species in an ecosystem. However, the combination of sampling procedures and subsequent molecular amplification of eDNA can lead to spurious results. As such, it is imperative that eDNA studies include a statistical framework for interpreting eDNA presence/absence data. We reviewed published literature for studies that utilized eDNA where the species density was known and compared the probability of detecting the focal species to the sampling and analysis protocols. Although biomass of the target species and the volume per sample did not impact detectability, the number of field replicates and number of samples from each replicate were positively related to detection. Additionally, increased number of PCR replicates and increased primer specificity significantly increased detectability. Accordingly, we advocate for increased use of occupancy modelling as a method to incorporate effects of sampling effort and PCR sensitivity in eDNA study design. Based on simulation results and the hierarchical nature of occupancy models, we suggest that field replicates, as opposed to molecular replicates, result in better detection probabilities of target species. PMID:27037675
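The effect of replication on detectability can be sketched with a back-of-envelope probability model. This is not the authors' occupancy model: `theta` and `p` below are invented per-replicate capture and amplification probabilities, and the hierarchy (field replicates containing PCR replicates) is the simplest one consistent with the abstract.

```python
# Probability of at least one detection at an occupied site, assuming
# independent field replicates (capture probability theta) each screened
# with several PCR replicates (per-PCR detection probability p).
# Illustrative numbers, not the paper's estimates.
def detection_prob(theta, p, field_reps, pcr_reps):
    per_sample = theta * (1.0 - (1.0 - p) ** pcr_reps)   # captured AND amplified
    return 1.0 - (1.0 - per_sample) ** field_reps        # at least one sample hits

more_field = detection_prob(0.5, 0.8, field_reps=4, pcr_reps=2)
more_pcr = detection_prob(0.5, 0.8, field_reps=2, pcr_reps=4)
print(more_field, more_pcr)   # field replicates win when capture is the bottleneck
```

Under these assumptions, extra field replicates outperform extra PCR replicates whenever capture, not amplification, limits detection, which matches the abstract's recommendation.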
Bao, J Y
1991-04-01
The commonly used microforceps have a much greater opening distance and spring resistance than needed. A piece of plastic ring or rubber band can be used to adjust the opening distance and reduce most of the spring resistance, making the user feel more comfortable and less fatigued. PMID:2051437
INTERIOR MODELS OF SATURN: INCLUDING THE UNCERTAINTIES IN SHAPE AND ROTATION
Helled, Ravit; Guillot, Tristan
2013-04-20
The accurate determination of Saturn's gravitational coefficients by Cassini could provide tighter constraints on Saturn's internal structure. Also, occultation measurements provide important information on the planetary shape which is often not considered in structure models. In this paper we explore how wind velocities and internal rotation affect the planetary shape and the constraints on Saturn's interior. We show that within the geodetic approach the derived physical shape is insensitive to the assumed deep rotation. Saturn's re-derived equatorial and polar radii at 100 mbar are found to be 60,365 ± 10 km and 54,445 ± 10 km, respectively. To determine Saturn's interior, we use one-dimensional three-layer hydrostatic structure models and present two approaches to include the constraints on the shape. These approaches, however, result in only small differences in Saturn's derived composition. The uncertainty in Saturn's rotation period is more significant: with Voyager's 10h39m period, the derived mass of heavy elements in the envelope is 0-7 M⊕. With a rotation period of 10h32m, this value becomes <4 M⊕, below the minimum mass inferred from spectroscopic measurements. Saturn's core mass is found to depend strongly on the pressure at which helium phase separation occurs, and is estimated to be 5-20 M⊕. Lower core masses are possible if the separation occurs deeper than 4 Mbar. We suggest that the analysis of Cassini's radio occultation measurements is crucial to test shape models and could lead to constraints on Saturn's rotation profile and departures from hydrostatic equilibrium.
A two-dimensional energy balance climate model including radiation and ice caps-albedo feedback
NASA Astrophysics Data System (ADS)
Yingyi, Chen; Jiping, Chao
1984-11-01
A simplified two-dimensional energy balance climate model including the solar and infrared radiation transports, the turbulent exchanges of heat in the vertical and horizontal directions, and the ice cap-albedo feedback is developed. The solutions show that if the atmosphere is treated as a grey body, with the grey coefficient depending on the distributions of absorbing medium and cloudiness, both the horizontal and vertical distributions of temperature agree with observation. On the other hand, comparing models in which the atmosphere is treated as a grey body with ones in which the infrared radiation is parameterized as a linear function of temperature, as in Budyko (1969) and Sellers (1969), the results show that even though both can reproduce the observed surface temperature, their sensitivity to changes of the solar constant is very different. In the former case, for the ice edge to move southward from the normal 72°N to 50°N (i.e., for a glacial climate to take place), the solar constant must decrease by 13% to 16%. In the latter case, the climate is highly sensitive to changes in solar radiation: a decrease of only 2% to 6% produces a glacial climate. Accordingly, care must be taken when parameterizing radiation and other processes in a climate model; otherwise the reliability of the results is questionable.
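A zero-dimensional analogue of the Budyko/Sellers-type linear infrared parameterization contrasted above can be sketched as follows. The constants are textbook-style illustrative values, not the paper's, and the smooth albedo ramp is a stand-in for the ice cap-albedo feedback.

```python
# Zero-dimensional energy balance sketch (illustrative constants):
# outgoing longwave radiation A + B*T, absorbed solar Q*(1 - albedo(T)),
# with a smooth ice-albedo ramp between -10 C and 0 C.
A, B = 202.0, 1.9        # W m^-2 and W m^-2 K^-1, Budyko-type values
Q0 = 342.0               # global-mean insolation, W m^-2

def albedo(T):
    if T <= -10.0:
        return 0.62      # ice-covered planet
    if T >= 0.0:
        return 0.30      # ice-free planet
    return 0.30 + (0.62 - 0.30) * (-T / 10.0)   # partial ice cover

def equilibrium_T(Q, T=15.0):
    for _ in range(2000):                        # damped fixed-point iteration
        T += 0.01 * (Q * (1 - albedo(T)) - (A + B * T))
    return T

t_present = equilibrium_T(Q0)        # present-day-like warm state
t_dimmed = equilibrium_T(0.90 * Q0)  # reduced solar constant: colder equilibrium
print(t_present, t_dimmed)
```

In this linear-OLR setting even a 10% solar reduction only cools the warm branch by a dozen degrees here; the strong sensitivity reported in the abstract for Budyko-Sellers-type models emerges once the latitudinal ice line and its albedo discontinuity are resolved, which this global-mean sketch deliberately omits.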
Effects of neurosteroids on a model membrane including cholesterol: A micropipette aspiration study.
Balleza, Daniel; Sacchi, Mattia; Vena, Giulia; Galloni, Debora; Puia, Giulia; Facci, Paolo; Alessandrini, Andrea
2015-05-01
Amphiphilic molecules supposed to affect membrane protein activity could also interact strongly with the lipid component of the membrane itself. Neurosteroids are amphiphilic molecules that bind to plasma membrane receptors of cells in the central nervous system, but their effect on the membrane is still under debate. For this reason it is interesting to investigate their effects on pure lipid bilayers as model systems. Using the micropipette aspiration technique (MAT), we studied the effects of a neurosteroid, allopregnanolone (3α,5α-tetrahydroprogesterone or Allo), and of one of its isoforms, isoallopregnanolone (3β,5α-tetrahydroprogesterone or isoAllo), on the physical properties of pure lipid bilayers composed of DOPC/bSM/chol. Allo is a well-known positive allosteric modulator of GABAA receptor activity, while isoAllo acts as a non-competitive functional antagonist of Allo modulation. We found that Allo, when applied at nanomolar concentrations (50-200 nM) to a lipid bilayer model system including cholesterol, induces an increase of the lipid bilayer area and a decrease of the mechanical parameters. Conversely, isoAllo decreases the lipid bilayer area and, when applied at the same nanomolar concentrations, does not significantly affect its mechanical parameters. We characterized the kinetics of Allo uptake by the lipid bilayer and discussed it in relation to the slow kinetics of Allo gating effects on GABAA receptors. The overall results presented here show that a correlation exists between the modulation of GABAA receptor activity by Allo and isoAllo and their effects on a lipid bilayer model system containing cholesterol. PMID:25660752
Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2011-01-01
This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…
NASA Astrophysics Data System (ADS)
Segall, P.; Bradley, A. M.
2009-12-01
Seismic and geodetic observations indicate that slow-slip events (SSE) occur down-dip of locked megathrusts, in areas of high pore pressure, p. We suggest that at low effective stress (σ − p) dilatancy stabilizes rate-weakening faults, whereas at higher (σ − p) thermal pressurization overwhelms dilatancy, leading to dynamic slip. 2D simulations include rate-state (slip-law) friction with the Linker-Dieterich normal stress effect, Segall-Rice dilatancy linked to state evolution, and heat and pore-fluid flow normal to the fault. The fault is loaded by down-dip slip at v_∞. We discretize the fault-normal direction with log spacing, and employ explicit-implicit time integration to improve speed and accuracy. The governing equations involve numerous physical parameters, but relatively few non-dimensional groups. E_p = [εh/(2β(σ − p_∞))]·√(v_∞/(c_hyd d_c)) and E_T = [f_0Λ/(2ρc_p)]·√(c_hyd d_c v_∞)/c_th represent dilatancy and shear heating efficiency, respectively. For a nominal set of parameters (given below), spatially uniform properties, and σ − p = 1 MPa (E_p = 1.5 × 10^-3, E_T = 3 × 10^-5), we find a series of propagating SSE that are stabilized by dilatancy-induced drops in p at the rupture tips. For a broad range of parameters we observe slow-slip events driven by down-dip slip, with negative stress drop, as well as faster (but quasi-static) events that relax the accumulated stress. At 10 MPa effective stress, the models exhibit both SSE and dynamic ruptures. Following dynamic stress drops, a sequence of slow slip events is driven from the down-dip end of the fault, with generally increasing maximum slip-speeds. We also consider spatially variable (σ − p), ranging from 2 MPa down-dip to 10 MPa up-dip (with an arctangent distribution such that 80% of the variation occurs across 20% of the fault), and uniform material properties. The models exhibit both SSE and dynamic events. Following a dynamic rupture there are initially no slow events, and the
NASA Astrophysics Data System (ADS)
Willis, M. J.; Wilson, T. J.; James, T. S.; Mazzotti, S.; Bevis, M. G.; Kendrick, E. C.; Brown, A. K.
2010-12-01
The IJ05 Antarctic ice sheet history is employed to drive a suite of approximately one thousand two-layered, laterally-homogeneous spherical Earth models and generate predictions of Antarctic crustal uplift due to glacial isostatic adjustment (GIA). GPS data collected between 1996 and 2010 on the flanks of the West Antarctic Rift System are used to produce bedrock uplift rates that are compared with the model predictions. The models that display the best fit to the data have softer, weaker upper-mantle viscosities than those published in many previous studies. A low viscosity upper mantle is in agreement with seismic tomography that indicates that the upper mantle beneath much of West Antarctica has slower than average seismic shear-wave velocities. Best fit models further feature thin elastic lithospheres, a situation that is also corroborated by recent airborne gravity and seismic investigations. The best fit GIA models are used to generate crustal uplift rates and gravity changes that are larger than previously published models used to correct GRACE observations and infer Antarctic ice mass balance. The new models, which are the first GPS-constrained GIA corrections for GRACE in Antarctica, increase the ice-mass loss estimate for West Antarctica.
Fast hybrid SPECT simulation including efficient septal penetration modelling (SP-PSF).
Staelens, Steven; de Wit, Tim; Beekman, Freek
2007-06-01
Single photon emission computed tomography (SPECT) images are degraded by the detection of scattered photons and photons that penetrate the collimator septa. In this paper, a previously proposed Monte Carlo software that employs fast object scatter simulation using convolution-based forced detection (CFD) is extended towards a wide range of medium and high energy isotopes measured using various collimators. To this end, a fast method was developed for incorporating effects of septal penetrating (SP) photons. The SP contributions are obtained by calculating the object attenuation along the path from primary emission to detection followed by sampling a pre-simulated and scalable septal penetration point spread function (SP-PSF). We found that with only a very slight reduction in accuracy, we could accelerate the SP simulation by four orders of magnitude. To achieve this, we combined: (i) coarse sampling of the activity and attenuation distribution; (ii) simulation of the penetration only for a coarse grid of detector pixels followed by interpolation and (iii) neglect of SP-PSF elements below a certain threshold. By inclusion of this SP-PSF-based simulation it became possible to model both primary and septal penetrated photons while only 10% extra computation time was added to the CFD-based Monte Carlo simulator. As a result, a SPECT simulation of a patient-like distribution including SP now takes less than 5 s per projection angle on a dual processor PC. Therefore, the simulator is well-suited as an efficient projector for fully 3D model-based reconstruction or as a fast data-set generator for applications such as image processing optimization or observer studies. PMID:17505087
ERIC Educational Resources Information Center
Department of Defense, Washington, DC.
Updated Defense Economic Impact Modeling System (DEIMS) manpower data are provided. Skilled-labor demand by job categories and industrial sectors are estimated for 163 skill categories. Both defense and non-defense demands are presented for the years 1982 to 1987. The average annual percentage growth for the time period is also estimated. Data are…
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545
Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters
NASA Astrophysics Data System (ADS)
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
2016-02-01
This paper presents a discrete event simulation (DES) as computer-based modelling that imitates a real pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The model input is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit exceeds 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. Waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters: although M/M/4 and M/M/5 reduce patient waiting time further, they are ineffective since Counter 5 is rarely used.
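For M/M/c systems like the pharmacy counters above, the mean queueing delay can be computed in closed form from the Erlang C formula, which makes a useful cross-check on simulation output. The sketch below uses illustrative arrival and service rates, not the hospital's measured data:

```python
from math import factorial

def erlang_c(c: int, lam: float, mu: float) -> float:
    """Probability that an arriving patient must wait (Erlang C) in an
    M/M/c queue with arrival rate lam and per-server service rate mu."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c                        # server utilisation
    assert rho < 1, "queue is unstable (rho >= 1)"
    top = a ** c / (factorial(c) * (1 - rho))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c: int, lam: float, mu: float) -> float:
    """Mean time in queue: Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Illustrative rates: 1.5 arrivals/min, 1 prescription/min per pharmacist.
# Adding a third server cuts the mean wait sharply, mirroring the paper's
# ~50% reduction when one pharmacist is added.
print(mean_wait(2, 1.5, 1.0), mean_wait(3, 1.5, 1.0))
```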
Rossetti, Fernanda F; Schneck, Emanuel; Fragneto, Giovanna; Konovalov, Oleg V; Tanaka, Motomu
2015-04-21
To understand the generic role of soft, hydrated biopolymers in adjusting interfacial interactions at biological interfaces, we designed a defined model of the cell-extracellular matrix contacts based on planar lipid membranes deposited on polymer supports (polymer-supported membranes). Highly uniform polymer supports made out of regenerated cellulose allow for the control of film thickness without changing the surface roughness and without osmotic dehydration. The complementary combination of specular neutron reflectivity and high-energy specular X-ray reflectivity yields the equilibrium membrane-substrate distances, which can be modeled quantitatively by computing the interplay of van der Waals interaction, hydration repulsion, and repulsion caused by the thermal undulation of membranes. The obtained results help to understand, from a physical point of view, the role of a biopolymer in the interfacial interactions of cell membranes, and also open broad possibilities for bridging soft biological matter and hard inorganic materials. PMID:25794040
NASA Astrophysics Data System (ADS)
Höning, D.; Spohn, T.
2014-12-01
By harvesting solar energy and converting it to chemical energy, photosynthetic life plays an important role in the energy budget of Earth [2]. This leads to alterations of chemical reservoirs eventually affecting the Earth's interior [4]. It has further been speculated [3] that the formation of continents may be a consequence of the evolution of life. A steady state model [1] suggests that the Earth without its biosphere would evolve to a steady state with a smaller continent coverage and a dryer mantle than is observed today. We present a model including (i) parameterized thermal evolution, (ii) continental growth and destruction, and (iii) mantle water regassing and outgassing. The biosphere enhances the production rate of sediments, which eventually are subducted. These sediments are assumed to (i) carry water to depth, bound in stable mineral phases, and (ii) have the potential to suppress shallow dewatering of the underlying sediments and crust due to their low permeability. We run a Monte Carlo simulation for various initial conditions and treat as successful all parameter combinations that result in the fraction of continental crust coverage observed for present-day Earth. Finally, we simulate the evolution of an abiotic Earth using the same set of parameters but a reduced rate of continental weathering and erosion. Our results suggest that the origin and evolution of life could have stabilized the large continental surface area of the Earth and its wet mantle, leading to the relatively low mantle viscosity we observe at present. Without photosynthetic life on our planet, the Earth would be geodynamically less active due to a dryer mantle, and would have a smaller fraction of continental coverage than observed today. References: [1] Höning, D., Hansen-Goos, H., Airo, A., Spohn, T., 2014. Biotic vs. abiotic Earth: A model for mantle hydration and continental coverage. Planetary and Space Science 98, 5-13. [2] Kleidon, A., 2010. Life, hierarchy, and the
Validation of gyrokinetic modelling of light impurity transport including rotation in ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Casson, F. J.; McDermott, R. M.; Angioni, C.; Camenen, Y.; Dux, R.; Fable, E.; Fischer, R.; Geiger, B.; Manas, P.; Menchero, L.; Tardini, G.; the ASDEX Upgrade Team
2013-06-01
Upgraded spectroscopic hardware and an improved impurity concentration calculation allow accurate determination of boron density in the ASDEX Upgrade tokamak. A database of boron measurements is compared to quasilinear and nonlinear gyrokinetic simulations including Coriolis and centrifugal rotational effects over a range of H-mode plasma regimes. The peaking of the measured boron profiles shows a strong anti-correlation with the plasma rotation gradient, via a relationship explained and reproduced by the theory. It is demonstrated that the rotodiffusive impurity flux driven by the rotation gradient is required for the modelling to reproduce the hollow boron profiles at higher rotation gradients. The nonlinear simulations validate the quasilinear approach, and, with the addition of perpendicular flow shear, demonstrate that each symmetry breaking mechanism that causes momentum transport also couples to rotodiffusion. At lower rotation gradients, the parallel compressive convection is required to match the most peaked boron profiles. The sensitivities of both datasets to possible errors are investigated, and quantitative agreement is found within the estimated uncertainties. The approach used can be considered a template for mitigating uncertainty in quantitative comparisons between simulation and experiment.
Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts
NASA Astrophysics Data System (ADS)
Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.
2016-07-01
Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters) during transport, thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m^-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ~2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine (<0.063 mm) ash (3-59%), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
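Because a log-normal distribution in diameter is Gaussian in φ units (φ being a negative base-2 logarithm of diameter in mm), the aggregate size input described above can be discretized into per-bin mass fractions with the normal CDF. A minimal sketch; the μagg and σagg values are illustrative, chosen near the paper's best-fit median:

```python
from math import erf, sqrt

def phi_bin_fractions(mu_phi, sigma_phi, edges):
    """Mass fraction of aggregates falling in each phi bin, assuming a
    log-normal diameter distribution (i.e. Gaussian in phi units)."""
    def cdf(x):
        return 0.5 * (1.0 + erf((x - mu_phi) / (sigma_phi * sqrt(2.0))))
    return [cdf(hi) - cdf(lo) for lo, hi in zip(edges[:-1], edges[1:])]

# Illustrative input: median 2.5 phi (~0.18 mm), narrow spread, half-phi bins.
fractions = phi_bin_fractions(2.5, 0.2, [0.5 * i for i in range(11)])
```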
Harry, Herbert H.
1989-01-01
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.
A Gender-Moderated Model of Family Relationships and Adolescent Adjustment
ERIC Educational Resources Information Center
Elizur, Yoel; Spivak, Amos; Ofran, Shlomit; Jacobs, Shira
2007-01-01
The objective of this study was to explain why adolescent girls with conduct problems (CP) are more at risk than boys to develop emotional distress (ED) in a sample composed of Israeli-born and immigrant youth from Ethiopia and the former Soviet Union (n = 305, ages 14-18). We tested a structural equation model and found a very good fit to the…
ERIC Educational Resources Information Center
Coln, Kristen L.; Jordan, Sara S.; Mercer, Sterett H.
2013-01-01
We examined positive and negative parenting practices and psychological control as mediators of the relations between constructive and destructive marital conflict and children's internalizing and externalizing problems in a unified model. Married mothers of 121 children between the ages of 6 and 12 completed questionnaires measuring marital…
Technology Transfer Automated Retrieval System (TEKTRAN)
Numerical modeling is the dominant method for quantifying water flow and the transport of dissolved constituents in surface soils as well as the deeper vadose zone. While the fundamental laws that govern the mechanics of the flow processes in terms of Richards' and convection-dispersion equations a...
ERIC Educational Resources Information Center
Terpstra, Teun; Lindell, Michael K.
2013-01-01
Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data ("N" = 1,115) showed that…
NASA Astrophysics Data System (ADS)
Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen
2016-08-01
Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case it performs much better than static adjusted radar data and data from rain gauges situated 2-3 km away.
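A common way to implement such gauge-based correction is a mean-field bias adjustment over the preceding aggregation window; the sketch below is a generic illustration of that idea, not the adjustment scheme actually used in the study:

```python
import numpy as np

def adjust_radar(radar_at_gauges, gauge_depths, radar_field):
    """Scale the current radar field by the gauge/radar depth ratio
    accumulated over the preceding aggregation window (e.g. 10-20 min).

    radar_at_gauges: radar rain depths (mm) at the gauge pixels over the window
    gauge_depths:    gauge rain depths (mm) over the same window
    radar_field:     unadjusted radar rainfall field for the current time step
    """
    radar_sum = float(np.sum(radar_at_gauges))
    gauge_sum = float(np.sum(gauge_depths))
    if radar_sum <= 0.0 or gauge_sum <= 0.0:
        return radar_field                 # no rain to adjust against
    return radar_field * (gauge_sum / radar_sum)

# If the radar systematically reads half the gauge depths, the field is doubled.
adjusted = adjust_radar([1.0, 2.0, 1.0], [2.0, 4.0, 2.0], np.array([1.0, 2.5]))
```

Shortening the window makes the bias factor track the current event more closely, at the cost of a noisier estimate, which is the trade-off the study examines.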
Conchúir, Shane Ó.; Der, Bryan S.; Drew, Kevin; Kuroda, Daisuke; Xu, Jianqing; Weitzner, Brian D.; Renfrew, P. Douglas; Sripakdeevong, Parin; Borgo, Benjamin; Havranek, James J.; Kuhlman, Brian; Kortemme, Tanja; Bonneau, Richard; Gray, Jeffrey J.; Das, Rhiju
2013-01-01
The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org. PMID:23717507
PORTNEUF VALLEY, IDAHO PM-10 DISPERSION MODEL INCLUDING SECONDARY CHEMICAL FORMATION
A dispersion modeling effort for the Portneuf Valley, Pocatello, Idaho PM-10 attainment demonstration is underway. The model will treat the secondary chemical formation process, primarily sulfate and nitrate formation under both the aqueous and gas phases. The model will simul...
Including Overweight or Obese Students in Physical Education: A Social Ecological Constraint Model
ERIC Educational Resources Information Center
Li, Weidong; Rukavina, Paul
2012-01-01
In this review, we propose a social ecological constraint model to study inclusion of overweight or obese students in physical education by integrating key concepts and assumptions from ecological constraint theory in motor development and social ecological models in health promotion and behavior. The social ecological constraint model proposes…
Extending Galactic Habitable Zone Modelling to Include the Emergence of Intelligent Life
NASA Astrophysics Data System (ADS)
Morrison, I. S.; Gowanlock, M. G.
2014-03-01
Previous studies of the Galactic Habitable Zone (GHZ) have been concerned with identifying those regions of the Galaxy that may favour the emergence of "complex life" - typically defined to be land-based life. A planet is deemed "habitable" if it meets a set of assumed criteria for supporting the emergence of such complex life. The notion of the GHZ, and the premise that sufficient chemical evolution is required for planet formation, was quantified by Gonzalez et al. (2001). This work was later broadened to include dangers to the formation and habitability of terrestrial planets by Lineweaver et al. (2004) and then studied using a Monte Carlo simulation on the resolution of individual stars in the previous work of Gowanlock et al. (2011). The model developed in the latter work considers the stellar number density distribution and formation history of the Galaxy, planet formation mechanisms and the hazards to planetary biospheres as a result of supernova sterilization events that take place in the vicinity of the planets. Based on timescales taken from the origin and evolution of complex life on Earth, the model suggests large numbers of potentially habitable planets exist in our Galaxy, with the greatest concentration likely being towards the inner Galaxy. In this work we extend the assessment of habitability to consider the potential for life to further evolve on habitable planets to the point of intelligence - which we term the propensity for the emergence of intelligent life. We assume the propensity is strongly influenced by the time durations available for evolutionary processes to proceed undisturbed by the "resetting" effect of nearby supernovae. The model of Gowanlock et al. (2011) is used to produce a representative population of habitable planets by matching major observable properties of the Milky Way. Account is taken of the birth and death dates of each habitable planet and the timing of supernova events in each planet's vicinity. The times between
Evaluation of an Impedance Model for Perforates Including the Effect of Bias Flow
NASA Technical Reports Server (NTRS)
Betts, J. F.; Follet, J. I.; Kelly, J. J.; Thomas, R. H.
2000-01-01
A new bias flow impedance model is developed for perforated plates from basic principles, using as little empiricism as possible. A quality experimental database was used to determine the predictive validity of the model. Results show that the model performs better for higher (15%) rather than lower (5%) percent open area (POA) samples. Based on the least squares ratio of numerical vs. experimental results, model predictions were on average within 20% and 30% for the higher and lower POA, respectively. Based on the work of other investigators, it is hypothesized that at lower POAs the higher fluid velocities in the perforate's orifices start forming unsteady vortices, an effect not accounted for in our model. The numerical model in general also underpredicts the experiments. It is theorized that the actual acoustic C_D is lower than the measured raylometer C_D used in the model; using a larger C_D makes the numerical model predict lower impedances. The frequency domain model derived in this paper shows very good agreement with another model derived using a time domain approach.
NASA Astrophysics Data System (ADS)
Öktem, Hakan; Pearson, Ronald; Egiazarian, Karen
2003-12-01
Following the complete sequencing of several genomes, interest has grown in the construction of genetic regulatory networks, which attempt to describe how different genes work together in both normal and abnormal cells. This interest has led to significant research in the behavior of abstract network models, with Boolean networks emerging as one particularly popular type. An important limitation of these networks is that their time evolution is necessarily periodic, motivating our interest in alternatives that are capable of a wider range of dynamic behavior. In this paper we examine one such class, that of continuous-time Boolean networks, a special case of the class of Boolean delay equations (BDEs) proposed for climatic and seismological modeling. In particular, we incorporate a biologically motivated refractory period into the dynamic behavior of these networks, which exhibit binary values like traditional Boolean networks, but which, unlike Boolean networks, evolve in continuous time. In this way, we are able to overcome both computational and theoretical limitations of the general class of BDEs while still achieving dynamics that are either aperiodic or effectively so, with periods many orders of magnitude longer than those of even large discrete time Boolean networks.
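A minimal grid-based sketch of a two-variable Boolean delay equation with a refractory period follows; the connectives, delays, and refractory value are arbitrary illustrations, not the network class studied in the paper:

```python
import numpy as np

def simulate_bde(T=50.0, dt=0.01, tau1=1.0, tau2=1.7, refractory=0.3):
    """Simulate x1(t) = x2(t - tau1) and x2(t) = x1(t - tau2) XOR x2(t - tau1)
    on a fine time grid; a variable may not switch twice within `refractory`."""
    n = int(round(T / dt))
    d1, d2 = int(round(tau1 / dt)), int(round(tau2 / dt))
    x = np.ones((2, n), dtype=int)          # history before t=0 is all ones
    last_switch = [-np.inf, -np.inf]
    for i in range(1, n):
        t = i * dt
        past1 = x[1, max(i - d1, 0)]        # x2(t - tau1)
        past2 = x[0, max(i - d2, 0)]        # x1(t - tau2)
        target = (past1, past2 ^ past1)
        for v in (0, 1):
            if target[v] != x[v, i - 1] and t - last_switch[v] >= refractory:
                x[v, i] = target[v]         # switch is allowed
                last_switch[v] = t
            else:
                x[v, i] = x[v, i - 1]       # hold the previous value
    return x
```

With these settings both variables toggle irregularly, and the refractory period enforces a minimum spacing between successive switches of the same variable, the feature that keeps the continuous-time dynamics computationally tractable.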
Including slot harmonics to mechanical model of two-pole induction machine with a force actuator
NASA Astrophysics Data System (ADS)
Sinervo, Anssi; Arkkio, Antero
2012-10-01
A simple mechanical model is identified for a two-pole induction machine that has a four-pole extra winding as a force actuator. The actuator can be used to suppress rotor vibrations. Forces affecting the rotor of the induction machine are separated into the actuator force, the purely mechanical force due to mass unbalance, and the force caused by unbalanced magnetic pull from higher harmonics and unipolar flux. The force due to higher harmonics is embedded in the mechanical model. Parameters of the modified mechanical model are identified from measurements, and the modifications are shown to be necessary. The force produced by the actuator is calculated using the mechanical model, direct flux measurements, and voltage and current of the force actuator. All three methods are shown to give matching results, proving that the mechanical model can be used in vibration control. The test machine is shown to have time-periodic behavior, and discrete Fourier analysis is used to obtain time-invariant model parameters.
A model for multi-finger HBTs including current gain collapse effects
NASA Astrophysics Data System (ADS)
Garlapati, Akhil; Prasad, Sheila; Vempada, Pradeep; Munshi, Kambiz
2003-11-01
A common-emitter equivalent circuit model which represents both the self-heating and the current collapse as feedback from the collector current to the base-emitter voltage is developed for multi-finger InGaAs/GaAs HBTs. The modified Ebers-Moll model is verified by comparing the simulated and measured results. Good agreement is also achieved for the scattering parameters and I-V characteristics, confirming the validity of the model for high-frequency applications.
NASA Astrophysics Data System (ADS)
Hauser, H.; Melikhov, Y.; Jiles, D. C.
2007-10-01
Two recent theoretical hysteresis models (Jiles-Atherton model and energetic model) are examined with respect to their capability to describe the dependence of the magnetization on magnetic field, microstructure, and anisotropy. It is shown that the classical Rayleigh law for the behavior of magnetization at low fields and the Stoner-Wohlfarth theory of domain magnetization rotation in noninteracting magnetic single domain particles can be considered as limiting cases of a more general theoretical treatment of hysteresis in ferromagnetism.
SU-E-T-247: Multi-Leaf Collimator Model Adjustments Improve Small Field Dosimetry in VMAT Plans
Young, L; Yang, F
2014-06-01
Purpose: The Elekta beam modulator linac employs a 4-mm micro multileaf collimator (MLC) backed by a fixed jaw. Out-of-field dose discrepancies between treatment planning system (TPS) calculations and output water phantom measurements are caused by the 1-mm leaf gap required for all moving MLCs in a VMAT arc. In this study, MLC parameters are optimized to improve TPS out-of-field dose approximations. Methods: Static 2.4 cm square fields were created with a 1-mm leaf gap for MLCs that would normally park behind the jaw. Doses in the open field and leaf gap were measured with an A16 micro ion chamber and EDR2 film for comparison with corresponding point doses in the Pinnacle TPS. The MLC offset table and tip radius were adjusted until TPS point doses agreed with photon measurements. Improvements to the beam models were tested using static arcs consisting of square fields ranging from 1.6 to 14.0 cm, with 45° collimator rotation and a 1-mm leaf gap to replicate VMAT conditions. Gamma values for the 3-mm distance, 3% dose difference criteria were evaluated using standard QA procedures with a cylindrical detector array. Results: The best agreement in point doses within the leaf gap and open field was achieved by offsetting the default rounded leaf end table by 0.1 cm and adjusting the leaf tip radius to 13 cm. Improvements in TPS models for 6 and 10 MV photon beams were more significant for field sizes of 3.6 cm or less, where the initial gamma pass rates progressively worsened as field size decreased; e.g., for a 1.6 cm field size, the gamma pass rate increased from 56.1% to 98.8%. Conclusion: The MLC optimization techniques developed will achieve greater dosimetric accuracy in small field VMAT treatment plans for fixed-jaw linear accelerators. Accurate predictions of dose to organs at risk may reduce adverse effects of radiotherapy.
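The gamma evaluation mentioned (3 mm distance-to-agreement, 3% dose difference) can be sketched in one dimension as follows; this is a generic global-gamma illustration, not the implementation of any particular QA system:

```python
import numpy as np

def gamma_index(ref_dose, meas_dose, coords_mm, dta_mm=3.0, dd_frac=0.03):
    """1-D global gamma: for each reference point, the minimum over measured
    points of sqrt((dr/dta)^2 + (ddose/(dd*Dmax))^2); gamma <= 1 passes."""
    d_max = ref_dose.max()                     # global dose normalisation
    gammas = []
    for r_dose, r_pos in zip(ref_dose, coords_mm):
        dist_term = (coords_mm - r_pos) / dta_mm
        dose_term = (meas_dose - r_dose) / (dd_frac * d_max)
        gammas.append(np.sqrt(dist_term ** 2 + dose_term ** 2).min())
    return np.array(gammas)

# A 2 mm spatial shift stays within the 3 mm / 3% criteria, so the pass
# rate (fraction of points with gamma <= 1) stays high.
x = np.arange(0.0, 50.0, 1.0)                  # positions in mm
ref = np.exp(-((x - 25.0) / 10.0) ** 2)        # synthetic reference profile
shifted = np.exp(-((x - 27.0) / 10.0) ** 2)    # same profile shifted 2 mm
pass_rate = float((gamma_index(ref, shifted, x) <= 1.0).mean())
```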
A New Finite-Conductivity Droplet Evaporation Model Including Liquid Turbulence Effect
NASA Technical Reports Server (NTRS)
Balasubramanyam, M. S.; Chen, C. P.; Trinh, H. P.
2006-01-01
A new approach to account for finite thermal conductivity and turbulence effects within atomizing droplets of an evaporating spray is presented in this paper. The model is an extension of the T-blob and T-TAB atomization/spray model of Trinh and Chen [9]. This finite conductivity model is based on the two-temperature film theory, in which the turbulence characteristics of the droplet are used to estimate the effective thermal diffusivity for the liquid-side film thickness. Both one-way and two-way coupled calculations were performed to investigate the performance of this model against the published experimental data.
NASA Astrophysics Data System (ADS)
Gupta, Santosh Kumar
2015-12-01
2D Analytical model of the body center potential (BCP) in short channel junctionless Cylindrical Surrounding Gate (JLCSG) MOSFETs is developed using evanescent mode analysis (EMA). This model also incorporates the gate bias dependent inner and outer fringing capacitances due to the gate-source/drain fringing fields. The developed model provides results in good agreement with simulated results for variations of different physical parameters of JLCSG MOSFET viz. gate length, channel radius, doping concentration, and oxide thickness. Using the BCP, an analytical model for the threshold voltage has been derived and validated against results obtained from 3D device simulator.
NASA Astrophysics Data System (ADS)
Canuto, V. M.
1994-06-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon
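The grid-point scaling N ~ Re^(9/4) quoted above makes the DNS barrier concrete; a minimal sketch using the Reynolds numbers given in the abstract:

```python
# DNS grid-point estimate N ~ Re^(9/4), as quoted in the abstract.
def dns_grid_points(re):
    """Number of spatial grid points needed for a DNS at Reynolds number re."""
    return re ** 2.25

# Reynolds numbers from the text: planetary boundary layer and solar interior.
n_pbl = dns_grid_points(1e8)    # ~1e18 grid points
n_sun = dns_grid_points(1e14)   # ~3e31 grid points
```

Both counts are far beyond any computer, which is why the abstract turns to ensemble averaging and LES+SGS instead.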
An Effective Model to Increase Student Attitude and Achievement: Narrative Including Analogies
ERIC Educational Resources Information Center
Akkuzu, Nalan; Akcay, Husamettin
2011-01-01
This study describes the analogical models and narratives used to introduce and teach Grade 9 chemical covalent compounds which are relatively abstract and difficult for students. We explained each model's development during the lessons and analyzed understanding students derived from these learning materials. In this context, achievement,…
NASA Astrophysics Data System (ADS)
Neumann, R. B.; Cardon, Z. G.; Rockwell, F. E.; Teshera-Levye, J.; Zwieniecki, M.; Holbrook, N. M.
2013-12-01
The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical, and ecological consequences of HR depend on the amount of redistributed water, while the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two eco-types of Helianthus annuus L. in split-pot experiments, we examined how well the widely used HR modeling formulation developed by Ryel et al. (2002) could match experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive nighttime transpiration, and though over the last decade it has become more widely recognized that nighttime transpiration occurs in multiple species and many ecosystems, the original Ryel et al. (2002) formulation does not include the effect of nighttime transpiration on HR. We developed and added a representation of nighttime transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and nighttime stomatal behavior changed, both influencing HR.
NASA Astrophysics Data System (ADS)
Gowan, E. J.; Tregoning, P.; Purcell, A.
2013-12-01
Uncertainties in ice sheet extent and thickness during the retreat of the western Laurentide Ice Sheet from the last glacial maximum affect estimates of its contribution to global climate and sea level change during the late Pleistocene and early Holocene. These difficulties arise from a lack of chronological constraints on the timing of margin retreat in many areas and a lack of observations of the glacio-isostatic deformation due to the ice sheet. We present a model of the western Laurentide Ice Sheet in North America based on new ice margin reconstructions and well-dated glacial lake strandlines. The model of the Laurentide Ice Sheet is constructed based on the assumption of perfectly plastic, steady state conditions with temporally variable basal shear stress and margin location. Initial models of basal shear stress were based on modern surficial geology and geography, and adjusted in an iterative process to reflect the volume of ice needed to fit observations of earth deformation caused by the ice sheet. The ice margins were developed by determining the minimum timing of retreat and using that as a constraint on the absolute maximum possible ice margin location. By using the ice margin as the starting point of modelling, assumptions about the location of ice domes and saddles were avoided. Initial results of the modelling indicate that ice thickness remained below 1500 m throughout the Western Canadian Sedimentary Basin region at the last glacial maximum as a result of low basal shear stress. Modelled flow directions match geomorphic ice flow indicators, lending confidence to the glaciological model. Ice sheet margin retreat was limited until after 15,000 cal yr BP. The most significant ice volume losses happened after retreat from southern Alberta and after retreat began on the Canadian Shield.
NASA Technical Reports Server (NTRS)
Ukanwa, A. O.; Stermole, F. J.; Golden, J. O.
1972-01-01
Natural convection effects in phase change thermal control devices were studied. A mathematical model was developed to evaluate natural convection effects in a phase change test cell undergoing solidification. Although natural convection effects are minimized in flight spacecraft, all phase change devices are ground tested. The mathematical approach to the problem was to first develop a transient two-dimensional conduction heat transfer model for the solidification of a normal paraffin of finite geometry. Next, a transient two-dimensional model was developed for the solidification of the same paraffin by a combined conduction-natural-convection heat transfer model. Throughout the study, n-hexadecane (n-C16H34) was used as the phase-change material in both the theoretical and the experimental work. The models were based on the transient two-dimensional finite difference solutions of the energy, continuity, and momentum equations.
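An explicit finite-difference step for transient conduction, in the spirit of the conduction model described above, can be sketched in one dimension (illustrative only; the study solved 2-D conduction and combined conduction-natural-convection models for n-hexadecane):

```python
import numpy as np

def step_heat_1d(T, alpha, dx, dt):
    """One explicit finite-difference step of 1-D transient conduction,
    dT/dt = alpha * d2T/dx2, with fixed-temperature boundaries.
    Stability of the explicit scheme requires r = alpha*dt/dx**2 <= 1/2."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for r > 1/2"
    Tn = T.copy()
    # central second difference on interior nodes; boundaries held fixed
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn
```

A steady linear profile is left unchanged by the step, while a hot spot diffuses outward, which is a quick sanity check on the discretization.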
A Sheath Model for Negative Ion Sources Including the Formation of a Virtual Cathode
McAdams, R.; King, D. B.; Surrey, E.
2011-09-26
A one-dimensional model of the sheath between the plasma and the wall in a negative ion source has been developed. The plasma consists of positive ions, electrons, and negative ions. The model takes into account the emission of negative ions from the wall into the sheath and thus represents the conditions in a caesiated ion source with surface production of negative ions. At high current densities of the emitted negative ions, the sheath is unable to support the transport of all the negative ions to the plasma and a virtual cathode is formed. The model accounts for this and allows calculation of the negative ions transported across the sheath in the presence of the virtual cathode. The model has been extended to link the plasma conditions at the sheath edge to those in the bulk plasma. Comparisons are made between the results of the model and experimental measurements.
Simulated village locations in Thailand: A multi-scale model including a neural network approach
Tang, Wenwu; Malanson, George P.; Entwisle, Barbara
2010-01-01
The simulation of rural land use systems in general, and rural settlement dynamics in particular, has developed with synergies of theory and methods for decades. Three current issues are: linking spatial patterns and processes, representing hierarchical relations across scales, and considering nonlinearity to address complex non-stationary settlement dynamics. We present a hierarchical simulation model to investigate complex rural settlement dynamics in Nang Rong, Thailand. This simulation uses sub-models to allocate new villages at three spatial scales. Regional and sub-regional models, which involve a nonlinear space-time autoregressive model implemented in a neural network approach, determine the number of new villages to be established. A dynamic village niche model, establishing a pattern-process link, was designed to enable the allocation of villages into specific locations. Spatiotemporal variability in model performance indicates that the pattern of village location changes as a settlement frontier advances from rice-growing lowlands to higher elevations. Experimental results demonstrate that this simulation model can enhance our understanding of settlement development in Nang Rong and thus provide insight into complex land use systems in this area. PMID:21399748
A feedback model for leukemia including cell competition and the action of the immune system
NASA Astrophysics Data System (ADS)
Balea, S.; Halanay, A.; Neamtu, M.
2014-12-01
A mathematical model, coupling the dynamics of short-term stem-like cells and mature leukocytes in leukemia with that of the immune system, is investigated. The model is described by a system of nine delay differential equations with nine delays. Three equilibrium points E0, E1, E2 are highlighted. The stability and the existence of the Hopf bifurcation for the equilibrium points are investigated. In the analysis of the model, the rate of asymmetric division and the rate of symmetric division are very important.
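Delay differential equations like those in this model are commonly integrated with a history buffer; a toy forward-Euler sketch for a single scalar delay (the paper's nine-equation, nine-delay system is analysed, not reproduced, here):

```python
import numpy as np

def euler_dde(f, history, tau, dt, n_steps):
    """Forward-Euler integration of x'(t) = f(x(t), x(t - tau)).
    `history` is a callable supplying x on [-tau, 0]; the delayed value is
    read from a buffer `lag` steps back.  A toy scheme for illustration."""
    lag = int(round(tau / dt))
    x = [history(-tau + i * dt) for i in range(lag + 1)]  # fill [-tau, 0]
    for _ in range(n_steps):
        x.append(x[-1] + dt * f(x[-1], x[-1 - lag]))      # delayed term
    return np.array(x[lag:])                               # values for t >= 0

# classic test case: x'(t) = -x(t - 1) with x(t) = 1 for t <= 0,
# whose solution decays in an oscillatory way (delay below pi/2)
sol = euler_dde(lambda x, xd: -xd, lambda t: 1.0, tau=1.0, dt=0.01, n_steps=500)
```

The buffer index `x[-1 - lag]` is what distinguishes a DDE step from an ODE step; everything else is plain Euler.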
A physical-based pMOSFETs threshold voltage model including the STI stress effect
NASA Astrophysics Data System (ADS)
Wei, Wu; Gang, Du; Xiaoyan, Liu; Lei, Sun; Jinfeng, Kang; Ruqi, Han
2011-05-01
A physics-based threshold voltage model of pMOSFETs under shallow trench isolation (STI) stress has been developed. The model is verified against layout-dependent measurement data from a 130 nm technology. A comparison of pMOSFET and nMOSFET model simulations under STI stress shows that STI stress induces a smaller threshold voltage shift and a larger mobility shift in the pMOSFET. Circuit simulations of a nine-stage ring oscillator with and without STI stress showed an improvement of about 11% in average delay time. This indicates the importance of considering STI stress in circuit design.
Modeling grain size adjustments in the downstream reach following run-of-river development
NASA Astrophysics Data System (ADS)
Fuller, Theodore K.; Venditti, Jeremy G.; Nelson, Peter A.; Palen, Wendy J.
2016-04-01
Disruptions to sediment supply continuity caused by run-of-river (RoR) hydropower development have the potential to cause downstream changes in surface sediment grain size which can influence the productivity of salmon habitat. The most common approach to understanding the impacts of RoR hydropower is to study channel changes in the years following project development, but by then, any impacts are manifest and difficult to reverse. Here we use a more proactive approach, focused on predicting impacts in the project planning stage. We use a one-dimensional morphodynamic model to test the hypothesis that the greatest risk of geomorphic change and impact to salmon habitat from a temporary sediment supply disruption exists where predevelopment sediment supply is high and project design creates substantial sediment storage volume. We focus on the potential impacts in the reach downstream of a powerhouse for a range of development scenarios that are typical of projects developed in the Pacific Northwest and British Columbia. Results indicate that increases in the median bed surface size (D50) are minor if development occurs on low sediment supply streams (<1 mm for supply rates 1 × 10-5 m2 s-1 or lower), and substantial for development on high sediment supply streams (8-30 mm for supply rates between 5.5 × 10-4 and 1 × 10-3 m2 s-1). However, high sediment supply streams recover rapidly to the predevelopment surface D50 (˜1 year) if sediment supply can be reestablished.
Progress in turbulence modeling for complex flow fields including effects of compressibility
NASA Technical Reports Server (NTRS)
Wilcox, D. C.; Rubesin, M. W.
1980-01-01
Two second-order-closure turbulence models were devised that are suitable for predicting properties of complex turbulent flow fields in both incompressible and compressible fluids. One model is of the "two-equation" variety in which closure is accomplished by introducing an eddy viscosity which depends on both a turbulent mixing energy and a dissipation rate per unit energy, that is, a specific dissipation rate. The other model is a "Reynolds stress equation" (RSE) formulation in which all components of the Reynolds stress tensor and turbulent heat-flux vector are computed directly and are scaled by the specific dissipation rate. Computations based on these models are compared with measurements for the following flow fields: (a) low speed, high Reynolds number channel flows with plane strain or uniform shear; (b) equilibrium turbulent boundary layers with and without pressure gradients or effects of compressibility; and (c) flow over a convex surface with and without a pressure gradient.
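The two-equation closure described above builds an eddy viscosity from the turbulent mixing energy and the specific dissipation rate; in k-omega form this is nu_t = k/omega. The helper below is a sketch of that closure idea only, not the full model of the paper:

```python
def eddy_viscosity(k, omega):
    """Kinematic eddy viscosity of a two-equation (k-omega style) closure:
    nu_t = k / omega, where k is the turbulent mixing energy [m^2/s^2]
    and omega the specific dissipation rate [1/s]."""
    if omega <= 0:
        raise ValueError("specific dissipation rate must be positive")
    return k / omega
```

For example, k = 0.5 m^2/s^2 with omega = 100 1/s gives nu_t = 0.005 m^2/s, typically orders of magnitude above the molecular viscosity of air.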
Allen, D.H.; Helms, K.L.E.; Hurtado, L.D.
1999-04-06
A model is developed herein for predicting the mechanical response of inelastic crystalline solids. Particular emphasis is given to the development of microstructural damage along grain boundaries, and the interaction of this damage with intragranular inelasticity caused by dislocation dissipation mechanisms. The model is developed within the concepts of continuum mechanics, with special emphasis on the development of internal boundaries in the continuum by utilizing a cohesive zone model based on fracture mechanics. In addition, the crystalline grains are assumed to be characterized by nonlinear viscoplastic mechanical material behavior in order to account for dislocation generation and migration. Due to the nonlinearities introduced by the crack growth and viscoplastic constitution, a numerical algorithm is utilized to solve representative problems. Implementation of the model to a finite element computational algorithm is therefore briefly described. Finally, sample calculations are presented for a polycrystalline titanium alloy with particular focus on effects of scale on the predicted response.
NASA Technical Reports Server (NTRS)
Holland, D. B.; Virgin, L. N.; Belvin, W. K.
2003-01-01
This paper presents a parameter study of the effect of boom axial loading on the global dynamics of a 2-meter solar sail scale model. The experimental model used is meant for building expertise in finite element analysis and experimental execution, not as a predecessor to any planned flight mission or particular design concept. The results here are to demonstrate the ability to predict and measure structural dynamics and mode shapes in the presence of axial loading.
NASA Astrophysics Data System (ADS)
Toyokuni, Genti; Takenaka, Hiroshi
2012-06-01
We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for the elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since the nature of Earth material is both elastic solid and viscous fluid, we should solve stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which makes viscoelasticity difficult to treat in time-domain computations such as the FDM. However, we now have a method using so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce the multi-domain, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion around the Earth center. In addition, we propose a technique to avoid the singularity problem of the wave equation in spherical coordinates at the Earth center. We develop a scheme to calculate wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
Deuterium and oxygen 18 in precipitation: Isotopic model, including mixed cloud processes
Ciais, P.; Jouzel, J.
1994-08-01
Modeling the isotopic ratios of precipitation in cold regions meets the problem of 'switching' from the vapor-liquid transition to the vapor-ice transition at the onset of snow formation. The one-dimensional model (mixed cloud isotopic model (MCIM)) described in this paper focuses on the fractionation of water isotopes in mixed clouds, where both liquid droplets and ice crystals can coexist over a given range of temperatures. This feature is linked to the existence of specific saturation conditions within the cloud, allowing droplets to evaporate while water vapor condenses onto ice crystals. The isotopic composition of the different airborne phases and the precipitation is calculated throughout the condensation history of an isolated air mass moving over the Antarctic ice sheet. The results of the MCIM are compared with surface snow data for both the isotopic ratios and the deuterium excesses. The sensitivity of the model is compared to that of previous one-dimensional models. Our main result is that accounting specifically for the microphysics of mixed stratiform clouds (Bergeron-Findeisen process) does not invalidate the results of earlier modeling studies.
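The MCIM builds on classical open-system (Rayleigh) fractionation of an isolated air mass; a minimal sketch of that textbook baseline in per-mil delta notation (the fractionation factor below is illustrative, not a value from the paper):

```python
def rayleigh_delta(delta0, f, alpha):
    """Isotopic composition of remaining vapour under Rayleigh distillation:
    R = R0 * f**(alpha - 1), where f is the fraction of vapour remaining
    and alpha > 1 the liquid/vapour (or ice/vapour) fractionation factor.
    Input and output are in per-mil delta notation."""
    R0 = 1.0 + delta0 / 1000.0       # ratio relative to the standard
    R = R0 * f ** (alpha - 1.0)
    return (R - 1.0) * 1000.0

# vapour becomes isotopically lighter as the air mass rains/snows out
deltas = [rayleigh_delta(-100.0, f, 1.01) for f in (1.0, 0.5, 0.1)]
```

The monotonic depletion with decreasing f is the backbone on which the MCIM's mixed-cloud saturation effects are superimposed.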
Draft: Modeling Two-Phase Flow in Porous Media Including Fluid-Fluid Interfacial Area
Crandall, Dustin; Niessner, Jennifer; Hassanizadeh, S Majid
2008-01-01
We present a new numerical model for macro-scale two-phase flow in porous media which is based on a physically consistent theory of multi-phase flow. The standard approach for modeling the flow of two fluid phases in a porous medium consists of a continuity equation for each phase, an extended form of Darcy's law, as well as constitutive relationships for relative permeability and capillary pressure. This approach is known to have a number of important shortcomings and, in particular, it does not account for the presence and role of fluid-fluid interfaces. An alternative is to use an extended model which is founded on thermodynamic principles and is physically consistent. In addition to the standard equations, the model uses a balance equation for specific interfacial area. The constitutive relationship for capillary pressure involves not only saturation, but also specific interfacial area. We show how parameters can be obtained for the alternative model using experimental data from a new kind of flow cell and present results of a numerical modeling study.
Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation
NASA Astrophysics Data System (ADS)
Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.
2011-12-01
Sugarcane is currently the most efficient bioenergy crop with regards to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels if they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugar cane simulations. Observation data of LAI are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can reproduce satisfactorily the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugar cane biomass production for different locations and technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.
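The systematic calibration described above amounts to minimizing model-observation misfit over candidate parameter values; a toy sketch with a hypothetical logistic LAI curve (all names and numbers are illustrative, not ORCHIDEE-STICS internals):

```python
import numpy as np

def lai_curve(t, rate, lai_max=6.0):
    """Toy logistic LAI growth curve: a hypothetical stand-in for the
    model's phenology, with `rate` the parameter to calibrate."""
    return lai_max / (1.0 + np.exp(-rate * (t - 50.0)))

def calibrate_rate(t_obs, lai_obs, candidates):
    """Systematic calibration: pick the candidate value minimising the
    sum of squared errors against observed LAI."""
    errors = [np.sum((lai_curve(t_obs, r) - lai_obs) ** 2) for r in candidates]
    return candidates[int(np.argmin(errors))]

t = np.linspace(0.0, 100.0, 21)
obs = lai_curve(t, 0.1)                       # synthetic "observations"
best = calibrate_rate(t, obs, [0.05, 0.1, 0.2])
```

Real calibrations replace the grid search with an optimizer and weight sites separately, which is why the abstract finds site-dependent optima.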
Breather solutions of a nonlinear DNA model including a longitudinal degree of freedom
NASA Astrophysics Data System (ADS)
Agarwal, J.; Hennig, D.
2003-05-01
We present a model of the DNA double helix assigning three degrees of freedom to each pair of nucleotides. The model is an extension of the Barbi-Cocco-Peyrard (BCP) model in the sense that the current model allows for longitudinal motions of the nucleotides parallel to the helix axis. The molecular structure of the double helix is modelled by a system of coupled oscillators. The nucleotides are represented by point masses and coupled via point-point interaction potentials. The latter describe the covalent and hydrogen bonds responsible for the secondary structure of DNA. We obtain breather solutions using an established method for the construction of breathers on nonlinear lattices starting from the anti-coupling limit. In order to apply this method we analyse the phonon spectrum of the linearised system corresponding to our model. The obtained breathing motion consists of a local opening and re-closing of base pairs combined with a local untwist of the helix. The motions in longitudinal direction are of much lower amplitudes than the radial and angular elongations.
A two-phase solid/fluid model for dense granular flows including dilatancy effects
NASA Astrophysics Data System (ADS)
Mangeney, Anne; Bouchut, Francois; Fernandez-Nieto, Enrique; Koné, El-Hadj; Narbona-Reina, Gladys
2016-04-01
Describing grain/fluid interaction in debris flow models is still an open and challenging issue with key impact on hazard assessment [Iverson et al., 2010]. We present here a two-phase two-thin-layer model for fluidized debris flows that takes into account dilatancy effects. It describes the velocity of both the solid and the fluid phases, the compression/dilatation of the granular media and its interaction with the pore fluid pressure [Bouchut et al., 2016]. The model is derived from a 3D two-phase model proposed by Jackson [2000] based on the 4 equations of mass and momentum conservation within the two phases. This system has 5 unknowns: the solid and fluid velocities, the solid and fluid pressures, and the solid volume fraction. As a result, an additional equation inside the mixture is necessary to close the system. Surprisingly, this issue is inadequately accounted for in the models that have been developed on the basis of Jackson's work [Bouchut et al., 2015]. In particular, Pitman and Le [2005] replaced this closure simply by imposing an extra boundary condition at the surface of the flow. When making a shallow expansion, this condition can be considered as a closure condition. However, the corresponding model cannot account for a dissipative energy balance. We propose here an approach to correctly deal with the thermodynamics of Jackson's model by closing the mixture equations by a weak compressibility relation following Roux and Radjai [1998]. This relation implies that the occurrence of dilation or contraction of the granular material in the model depends on whether the solid volume fraction is respectively higher or lower than a critical value. When dilation occurs, the fluid is sucked into the granular material, the pore pressure decreases and the friction force on the granular phase increases. On the contrary, in the case of contraction, the fluid is expelled from the mixture, the pore pressure increases and the friction force diminishes. To
Genomic prediction of growth in pigs based on a model including additive and dominance effects.
Lopes, M S; Bastiaansen, J W M; Janss, L; Knol, E F; Bovenhuis, H
2016-06-01
Independent of whether prediction is based on pedigree or genomic information, the focus of animal breeders has been on additive genetic effects or 'breeding values'. However, when predicting phenotypes rather than breeding values of an animal, models that account for both additive and dominance effects might be more accurate. Our aim with this study was to compare the accuracy of predicting phenotypes using a model that accounts for only additive effects (MA) and a model that accounts for both additive and dominance effects simultaneously (MAD). Lifetime daily gain (DG) was evaluated in three pig populations (1424 Pietrain, 2023 Landrace, and 2157 Large White). Animals were genotyped using the Illumina SNP60K Beadchip and assigned to either a training data set to estimate the genetic parameters and SNP effects, or to a validation data set to assess the prediction accuracy. Models MA and MAD applied random regression on SNP genotypes and were implemented in the program Bayz. The additive heritability of DG across the three populations and the two models was very similar at approximately 0.26. The proportion of phenotypic variance explained by dominance effects ranged from 0.04 (Large White) to 0.11 (Pietrain), indicating that importance of dominance might be breed-specific. Prediction accuracies were higher when predicting phenotypes using total genetic values (sum of breeding values and dominance deviations) from the MAD model compared to using breeding values from both MA and MAD models. The highest increase in accuracy (from 0.195 to 0.222) was observed in the Pietrain, and the lowest in Large White (from 0.354 to 0.359). Predicting phenotypes using total genetic values instead of breeding values in purebred data improved prediction accuracy and reduced the bias of genomic predictions. Additional benefit of the method is expected when applied to predict crossbred phenotypes, where dominance levels are expected to be higher. PMID:26676611
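The gain from modelling dominance can be illustrated with simulated true effects: phenotypes correlate more strongly with total genetic values (additive plus dominance deviations) than with breeding values alone. The sketch below uses known, not estimated, SNP effects and arbitrary variance choices; the paper estimates effects with Bayesian random regression in Bayz.

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 500, 200                          # animals, SNPs (illustrative sizes)
X = rng.integers(0, 3, size=(n, m))      # genotypes coded 0/1/2
a = rng.normal(0.0, 0.1, m)              # additive SNP effects
d = rng.normal(0.0, 0.1, m)              # dominance deviations (heterozygotes)

bv = X @ a                               # breeding values (MA-style)
dom = (X == 1) @ d                       # dominance contribution
tgv = bv + dom                           # total genetic value (MAD-style)
phen = tgv + rng.normal(0.0, 1.0, n)     # phenotype = genetics + residual

acc_add = np.corrcoef(phen, bv)[0, 1]    # accuracy using breeding values
acc_tot = np.corrcoef(phen, tgv)[0, 1]   # accuracy using total genetic values
```

Because the dominance term is uncorrelated with the breeding values here, the total genetic value captures strictly more of the phenotypic variance, mirroring the accuracy gain reported in the abstract.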
Results of including geometric nonlinearities in an aeroelastic model of an F/A-18
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.
1989-01-01
An integrated, nonlinear simulation model suitable for aeroelastic modeling of fixed-wing aircraft has been developed. While the author realizes that the subject of modeling rotating, elastic structures is not closed, it is believed that the equations of motion developed and applied herein are correct to second order and are suitable for use with typical aircraft structures. The equations are not suitable for large elastic deformation. In addition, the modeling framework generalizes both the methods and terminology of non-linear rigid-body airplane simulation and traditional linear aeroelastic modeling. Concerning the importance of angular/elastic inertial coupling in the dynamic analysis of fixed-wing aircraft, the following may be said. The rigorous inclusion of said coupling is not without peril and must be approached with care. In keeping with the same engineering judgment that guided the development of the traditional aeroelastic equations, the effect of non-linear inertial effects for most airplane applications is expected to be small. A parameter does not tell the whole story, however, and modes flagged by the parameter as significant also need to be checked to see if the coupling is not a one-way path, i.e., the inertially affected modes can influence other modes.
A model of protein translation including codon bias, nonsense errors, and ribosome recycling.
Gilchrist, Michael A; Wagner, Andreas
2006-04-21
We present and analyse a model of protein translation at the scale of an individual messenger RNA (mRNA) transcript. The model we develop is unique in that it incorporates the phenomena of ribosome recycling and nonsense errors. The model conceptualizes translation as a probabilistic wave of ribosome occupancy traveling down a heterogeneous medium, the mRNA transcript. Our results show that the heterogeneity of the codon translation rates along the mRNA results in short-scale spikes and dips in the wave. Nonsense errors attenuate this wave on a longer scale, while ribosome recycling reinforces it. We find that the combination of nonsense errors and codon usage bias can have a large effect on the probability that a ribosome will completely translate a transcript. We also elucidate how these forces interact with ribosome recycling to determine the overall translation rate of an mRNA transcript. We derive a simple cost function for nonsense errors using our model and apply this function to the yeast (Saccharomyces cerevisiae) genome. Using this function we are able to detect position-dependent selection on codon bias which correlates with gene expression levels, as predicted a priori. These results indirectly validate our underlying model assumptions and confirm that nonsense errors can play an important role in shaping codon usage bias. PMID:16171830
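If each codon carries an independent chance of a nonsense error, the probability that a ribosome completes a transcript is a simple product over codons; a minimal sketch of that baseline (the transcript length and uniform error rate below are illustrative, and the full model layers codon-specific rates and ribosome recycling on top):

```python
import numpy as np

def completion_probability(nonsense_error_probs):
    """Probability a ribosome completes the transcript, assuming an
    independent nonsense-error (premature drop-off) chance per codon."""
    p = np.asarray(nonsense_error_probs, float)
    return float(np.prod(1.0 - p))

# 300-codon transcript with a uniform 4e-4 error chance per codon
p_done = completion_probability([4e-4] * 300)   # ~0.887
```

Even small per-codon error rates compound over long transcripts, which is why the abstract's cost function penalizes nonsense errors more heavily toward the 3' end.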
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M.; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel
2015-01-15
Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficients estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a co-digestion continuous experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes (among other residues) individually, following a new protocol for batch tests. In addition, decoupled disintegration kinetics for the readily and slowly biodegradable fractions of solid wastes were considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating 5 fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rates ranging between 2.0 and 4.7 g VS/L d. The model (built in Matlab/Simulink) fitted the experimental results to a large extent in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
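The decoupled two-fraction disintegration idea can be sketched with first-order kinetics as below. The rate constants and fractionation are illustrative only; the actual ADM1-based model tracks many more state variables (hydrolysis products, biomass, gas phase).

```python
import math

def remaining_substrate(t, x_r0, x_s0, k_r, k_s):
    """Decoupled first-order disintegration of the readily (r) and
    slowly (s) biodegradable fractions of a solid waste."""
    return x_r0 * math.exp(-k_r * t) + x_s0 * math.exp(-k_s * t)

def degraded_fraction(t, x_r0, x_s0, k_r, k_s):
    """Fraction of the initial substrate disintegrated after t days."""
    return 1.0 - remaining_substrate(t, x_r0, x_s0, k_r, k_s) / (x_r0 + x_s0)

# Hypothetical waste: 60% readily biodegradable (k = 0.5 1/d),
# 40% slowly biodegradable (k = 0.05 1/d)
for day in (1, 7, 30):
    print(day, degraded_fraction(day, 0.6, 0.4, 0.5, 0.05))
```

Fitting the two rate constants to mono-digestion batch assays, then reusing them for a co-digestion mixture, is the calibration/validation pattern the methodology describes.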
A bone remodelling model including the effect of damage on the steering of BMUs.
Martínez-Reina, J; Reina, I; Domínguez, J; García-Aznar, J M
2014-04-01
Bone remodelling in cortical bone is performed by the so-called basic multicellular units (BMUs), which produce osteons after completing the remodelling sequence. Burger et al. (2003) hypothesized that BMUs follow the direction of the prevalent local stress in the bone. More recently, Martin (2007) has shown that BMUs must be somehow guided by microstructural damage as well. The interaction of both variables, strain and damage, in the guidance of BMUs has been incorporated into a bone remodelling model for cortical bone. This model accounts for variations in porosity, anisotropy and damage level. The bone remodelling model has been applied to a finite element model of the diaphysis of a human femur. The trajectories of the BMUs have been analysed throughout the diaphysis and compared with the orientation of osteons measured experimentally. Some interesting observations, like the typical fan arrangement of osteons near the periosteum, can be explained with the proposed remodelling model. Moreover, the efficiency of BMUs in damage repairing has been shown to be greater if BMUs are guided by damage. PMID:24445006
Partial covariate adjusted regression
Şentürk, Damla; Nguyen, Danh V.
2008-01-01
Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where both the response and predictors are not directly observed (Şentürk and Müller, 2005). The available data has been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies and the method is also illustrated with a Pima Indians diabetes data set. PMID:20126296
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed; the computer then resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
Naghavi, Nadia; Hosseini, Farideh S; Sardarabadi, Mohammad; Kalani, Hadi
2016-09-01
In this paper, an adaptive model for tumor induced angiogenesis is developed that integrates generation and diffusion of a growth factor originated from hypoxic cells, adaptive sprouting from a parent vessel, blood flow and structural adaptation. The proposed adaptive sprout spacing model (ASS) determines position, time and number of sprouts which are activated from a parent vessel and also the developed vascular network is modified by a novel sprout branching prediction algorithm. This algorithm couples local vascular endothelial growth factor (VEGF) concentrations, stresses due to the blood flow and stochastic branching to the structural reactions of each vessel segment in response to mechanical and biochemical stimuli. The results provide predictions for the time-dependent development of the network structure, including the position and diameters of each segment and the resulting distributions of blood flow and VEGF. Considering time delays between sprout progressions and number of sprouts activated at different time durations provides information about micro-vessel density in the network. Resulting insights could be useful for motivating experimental investigations of vascular pattern in tumor induced angiogenesis and development of therapies targeting angiogenesis. PMID:27179697
NASA Astrophysics Data System (ADS)
Jung, Sunwook; Do, Thuy; Sturtevant, John
2015-03-01
For more than five decades, the semiconductor industry has overcome technology challenges with innovative ideas that have continued to enable Moore's Law. It is clear that multi-patterning lithography is vital for 20 nm half pitch using 193i. Multi-patterning exposure sequences and pattern multiplication processes can create complicated tolerance accounting due to the variability associated with the component processes. It is essential to ensure good predictive accuracy of the compact etch models used in multi-patterning simulation. New model forms have been developed to account for etch bias behavior at 20 nm and below. The new modeling components show good results in terms of global fitness and improved prediction capability for specific features. We have also investigated a new methodology to make the etch model aware of 3D resist profiles.
Jauchem, James R
2010-03-01
Conducted energy weapons (CEWs) are used by law-enforcement personnel to incapacitate individuals quickly and effectively, without causing lethality. CEWs have been deployed for relatively long or repeated exposures during law-enforcement operations. The purpose of this technical note is to describe, in detail, some aspects of an anesthetized swine model used in our laboratory and to answer specific questions related to the model. In particular, tiletamine/zolazepam-induced, propofol-maintained anesthesia appears to be a useful technique for studying effects of CEW applications on muscle contraction and blood factors such as muscle enzymes. Because effects of CEWs on breathing have not been fully elucidated, a spontaneously breathing model is preferable to one in which mechanical ventilation is supplied. Placement of the swine in a supine position may facilitate measurement of muscle contractions, without compromising other physiological parameters. PMID:20141556
Callisto plasma interactions: Hybrid modeling including induction by a subsurface ocean
NASA Astrophysics Data System (ADS)
Lindkvist, Jesper; Holmström, Mats; Khurana, Krishan K.; Fatemi, Shahab; Barabash, Stas
2015-06-01
By using a hybrid plasma solver (ions as particles and electrons as a fluid), we have modeled the interaction between Callisto and Jupiter's magnetosphere for variable ambient plasma parameters. We compared the results with the magnetometer data from flybys (C3, C9, and C10) by the Galileo spacecraft. Modeling the interaction between Callisto and Jupiter's magnetosphere is important to establish the origin of the magnetic field perturbations observed by Galileo and thought to be related to a subsurface ocean. Using typical upstream magnetospheric plasma parameters and a magnetic dipole corresponding to the inductive response inside the moon, we show that the model results agree well with observations for the C3 and C9 flybys, but poorly with the C10 flyby close to Callisto. The study does support the existence of a subsurface ocean at Callisto.
End-to-end Coronagraphic Modeling Including a Low-order Wavefront Sensor
NASA Technical Reports Server (NTRS)
Krist, John E.; Trauger, John T.; Unwin, Stephen C.; Traub, Wesley A.
2012-01-01
To evaluate space-based coronagraphic techniques, end-to-end modeling is necessary to simulate realistic fields containing speckles caused by wavefront errors. Real systems will suffer from pointing errors and thermal and motion-induced mechanical stresses that introduce time-variable wavefront aberrations that can reduce the field contrast. A low-order wavefront sensor (LOWFS) is needed to measure these changes at a sufficiently high rate to maintain the contrast level during observations. We implement here a LOWFS and corresponding low-order wavefront control subsystem (LOWFCS) in end-to-end models of a space-based coronagraph. Our goal is to be able to accurately duplicate the effect of the LOWFS+LOWFCS without explicitly evaluating the end-to-end model at numerous time steps.
NASA Astrophysics Data System (ADS)
Kirstein, O.; Prager, M.; Grimm, H.; Buchsteiner, A.; Wischnewski, A.
2007-09-01
Quasielastic neutron scattering experiments were carried out using the multichopper time-of-flight spectrometer V3 at the Hahn-Meitner Institut, Germany, and the backscattering spectrometer at Forschungszentrum Jülich, Germany. Activation energies for CH3X, X = F, Cl, Br, and I, were obtained. In combination with results from previous inelastic neutron scattering experiments, the data were used to describe the dynamics of the halides in terms of two different models, the single-particle model and the coupling model. Coupled motions of methyl groups seem to explain the dynamics of methyl fluoride and chloride; however, the coupling vanishes as the mass of the halide atom increases in CH3Br and CH3I.
NASA Astrophysics Data System (ADS)
Lüdde, Hans Jürgen; Achenbach, Alexander; Kalkbrenner, Thilo; Jankowiak, Hans-Christian; Kirchner, Tom
2016-04-01
A new model to account for geometric screening corrections in an independent-atom-model description of ion-molecule collisions is introduced. The ion-molecule cross sections for net capture and net ionization are represented as weighted sums of atomic cross sections with weight factors that are determined from a geometric model of overlapping cross section areas. Results are presented for proton collisions with targets ranging from diatomic to complex polyatomic molecules. Significant improvement compared to simple additivity rule results and in general good agreement with experimental data are found. The flexibility of the approach opens up the possibility to study more detailed observables such as orientation-dependent and charge-state-correlated cross sections for a large class of complex targets ranging from biomolecules to atomic clusters.
Hybrid Model for Plasma Thruster Plume Simulation Including PIC-MCC Electrons Treatment
Alexandrov, A. L.; Bondar, Ye. A.; Schweigert, I. V.
2008-12-31
The simulation of a stationary plasma thruster plume is important for spacecraft design because of the possible interaction of the plume with the spacecraft surface. Such simulations are successfully performed using the particle-in-cell technique to describe the motion of charged particles, namely the propellant ions. In conventional plume models the electrons are treated using various fluid approaches. In this work, we suggest an alternative approach in which the electron kinetics is treated 'ab initio', using the particle-in-cell Monte Carlo collision (PIC-MCC) method. To avoid the large computational expense imposed by small time steps, the relaxation of the simulated plume plasma is split into the fast relaxation of the electron distribution function and the slow relaxation of the ions. The model is self-consistent but hybrid, since simultaneous electron and ion motion is not actually modeled. The obtained electron temperature profile is in good agreement with experiment.
European air quality modelled by CAMx including the volatility basis set scheme
NASA Astrophysics Data System (ADS)
Ciarelli, G.; Aksoyoglu, S.; Crippa, M.; Jimenez, J. L.; Nemitz, E.; Sellegri, K.; Äijälä, M.; Carbone, S.; Mohr, C.; O'Dowd, C.; Poulain, L.; Baltensperger, U.; Prévôt, A. S. H.
2015-12-01
Four periods of EMEP (European Monitoring and Evaluation Programme) intensive measurement campaigns (June 2006, January 2007, September-October 2008 and February-March 2009) were modelled using the regional air quality model CAMx with the Volatility Basis Set (VBS) approach, applied for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the periods of February-March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosols (OA). Model performance for selected gas-phase species and PM2.5 was evaluated using the European air quality database Airbase. Sulfur dioxide (SO2) and ozone (O3) were found to be overestimated for all four periods, with O3 having the largest mean bias during the June 2006 and January-February 2007 periods (8.93 and 12.30 ppb, respectively). In contrast, nitrogen dioxide (NO2) and carbon monoxide (CO) were found to be underestimated for all four periods. CAMx reproduced both the total concentrations and the monthly variations of PM2.5 very well for all four periods, with average biases ranging from -2.13 to 1.04 μg m-3. Comparisons with AMS (Aerosol Mass Spectrometer) measurements at different sites in Europe during February-March 2009 showed that in general the model over-predicts the inorganic aerosol fraction and under-predicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurement data were performed. For February-March 2009 the chamber-based case reduced the total OA concentrations by about 43 % on average. On the other hand, a test based on ambient measurement data increased OA concentrations by about 47 % for the same period bringing model
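The absorptive-partitioning relation at the heart of any VBS scheme can be sketched as below. The bin totals and saturation concentrations C* are illustrative demo values, not the emission inventory or volatility distributions used in the study.

```python
def vbs_partition(c_total, c_star, c_oa_guess=1.0, tol=1e-9):
    """Equilibrium absorptive partitioning over volatility bins
    (standard VBS relation): the condensed fraction of bin i is
    xi_i = 1 / (1 + C*_i / C_OA), iterated until the total condensed
    organic aerosol mass C_OA is self-consistent."""
    c_oa = c_oa_guess
    for _ in range(1000):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        new_c_oa = sum(x * c for x, c in zip(xi, c_total))
        if abs(new_c_oa - c_oa) < tol:
            break
        c_oa = new_c_oa
    return c_oa, xi

# Four bins with saturation concentrations 0.1..100 ug/m3 (illustrative)
c_bins = [0.5, 1.0, 2.0, 4.0]
c_star = [0.1, 1.0, 10.0, 100.0]
c_oa, xi = vbs_partition(c_bins, c_star)
print(c_oa, xi)
```

Shifting mass between volatility bins (as in the chamber-based vs. ambient-based sensitivity tests) changes C_OA, and through this feedback a modest shift in the distribution can move total OA by tens of percent.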
Three-dimensional time domain model of lightning including corona effects
NASA Technical Reports Server (NTRS)
Podgorski, Andrew S.
1991-01-01
A new 3-D lightning model that incorporates the effect of corona is described for the first time. The new model is based on a Thin Wire Time Domain Lightning (TWTDL) Code developed previously. The TWTDL Code was verified during the 1985 and 1986 lightning seasons by the measurements conducted at the 553 m CN Tower in Toronto, Ontario. The inclusion of corona in the TWTDL code allowed study of the corona effects on the lightning current parameters and the associated electric field parameters.
Modeling of damage in ductile cast iron - The effect of including plasticity in the graphite nodules
NASA Astrophysics Data System (ADS)
Andriollo, T.; Thorborg, J.; Tiedje, N. S.; Hattel, J.
2015-06-01
In the present paper a micro-mechanical model for investigating the stress-strain relation of ductile cast iron subjected to simple loading conditions is presented. The model is based on a unit cell containing a single spherical graphite nodule embedded in a uniform ferritic matrix, under the assumption of infinitesimal strains and plane-stress conditions. Despite the latter being a limitation with respect to full 3D models, it allows a direct comparison with experimental investigations of damage evolution on the surface of ductile cast iron components, where the stress state is biaxial in nature. In contrast to previous works on the subject, the material behaviour in both matrix and nodule is assumed to be elasto-plastic, described by the classical J2-flow theory of plasticity, and damage evolution in the matrix is taken into account via Lemaitre's isotropic model. The effects of residual stresses due to the cooling process during manufacturing are also considered. Numerical solutions are obtained using an in-house developed finite element code; proper comparison with literature in the field is given.
Dusty Plasma Modeling of the Fusion Reactor Sheath Including Collisional-Radiative Effects
Dezairi, Aouatif; Samir, Mhamed; Eddahby, Mohamed; Saifaoui, Dennoun; Katsonis, Konstantinos; Berenguer, Chloe
2008-09-07
The structure and behavior of the sheath in Tokamak collisional plasmas have been studied. The sheath is modeled taking into account the presence of dust and the effects of charged-particle collisions and radiative processes. The latter may allow for optical diagnostics of the plasma.
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes.
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel
2015-01-01
A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes (among other residues) individually, following a new protocol for batch tests. In addition, decoupled disintegration kinetics for the readily and slowly biodegradable fractions of solid wastes were considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating 5 fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rates ranging between 2.0 and 4.7 g VS/L d. The model (built in Matlab/Simulink) fitted the experimental results to a large extent in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes. PMID:25458761
Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai
2015-11-01
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Recently, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results. (C) 2015 AIP Publishing LLC.
NASA Astrophysics Data System (ADS)
Stevens, Richard; Gayme, Dennice; Meyers, Johan; Meneveau, Charles
2015-11-01
We present results from large eddy simulations (LES) of wind farms consisting of tens to hundreds of turbines, with streamwise and spanwise spacings approaching 35 and 12 turbine diameters, respectively. Even in staggered farms where the distance between consecutive turbines in the flow direction is more than 50 turbine diameters, we observe visible wake effects. In aligned farms, the performance of the turbines in the fully developed regime, where the power output as a function of downstream position becomes constant, is shown to depend primarily on the streamwise distance between consecutive turbine rows. For other layouts, however, the power production in the fully developed regime mainly depends on the geometric mean turbine spacing (inverse turbine density). These findings agree very well with predictions from our recently developed coupled wake boundary layer (CWBL) model, which introduces a two-way coupling between the wake (Jensen) and top-down model approaches (Stevens et al. JRSE 7, 023115, 2015). To further validate the CWBL model we apply it to the problem of determining the optimal wind turbine thrust coefficient for power maximization over the entire farm. The CWBL model predictions agree very well with recent LES results (Goit & Meyers, JFM 768, 5-50, 2015). FOM Fellowships for Young Energy Scientists (YES!), NSF (IIA 1243482, the WINDINSPIRE project), ERC (FP7-Ideas, 306471).
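The wake side of such a coupled model builds on the classical Jensen top-hat wake, which already explains why wake effects persist tens of diameters downstream. A minimal sketch (the wake-decay constant k and thrust coefficient are illustrative values, not those calibrated in the CWBL model):

```python
import math

def jensen_deficit(x_over_d, ct, k=0.05):
    """Fractional velocity deficit a distance x (in rotor diameters D)
    downstream of a turbine, per the Jensen top-hat wake model:
    deficit = 2a / (1 + 2 k x/D)^2, with axial induction a derived
    from the thrust coefficient Ct."""
    a = 0.5 * (1.0 - math.sqrt(1.0 - ct))
    return 2.0 * a / (1.0 + 2.0 * k * x_over_d) ** 2

# The deficit decays quadratically with distance but is still
# nonzero 50 diameters downstream
for x in (5, 10, 25, 50):
    print(x, jensen_deficit(x, ct=0.75))
```

With these demo numbers the deficit at 50 D is about 1%, small but visible, consistent with the observation above that staggered farms show wake effects beyond 50 turbine diameters.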
NASA Astrophysics Data System (ADS)
Tse, Y. C.; Chan, Chris K. P.; Luk, M. H.; Kwong, N. H.; Leung, P. T.; Binder, R.; Schumacher, Stefan
2015-08-01
We present a detailed study of a low-dimensional population-competition (PC) model suitable for analysis of the dynamics of certain modulational instability patterns in extended systems. The model is applied to analyze the transverse optical exciton-polariton patterns in semiconductor quantum well microcavities. It is shown that, despite its simplicity, the PC model describes quite well the competitions among various two-spot and hexagonal patterns when four physical parameters, representing density saturation, hexagon stabilization, anisotropy, and switching beam intensity, are varied. The combined effects of the last three parameters are given detailed considerations here. Although the model is developed in the context of semiconductor polariton patterns, its equations have more general applicability, and the results obtained here may benefit the investigation of other pattern-forming systems. The simplicity of the PC model allows us to organize all steady state solutions in a parameter space ‘phase diagram’. Each region in the phase diagram is characterized by the number and type of solutions. The main numerical task is to compute inter-region boundary surfaces, where some steady states either appear, disappear, or change their stability status. The singularity types of the boundary points, given by Catastrophe theory, are shown to provide a simple geometric overview of the boundary surfaces. With all stable and unstable steady states and the phase boundaries delimited and characterized, we have attained a comprehensive understanding of the structure of the four-parameter phase diagram. We analyze this rich structure in detail and show that it provides a transparent and organized interpretation of competitions among various patterns built on the hexagonal state space.
ERIC Educational Resources Information Center
Hocking, Matthew C.; Lochman, John E.
2005-01-01
This review paper examines the literature on psychosocial factors associated with adjustment to sickle cell disease and insulin-dependent diabetes mellitus in children through the framework of the transactional stress and coping (TSC) model. The transactional stress and coping model views adaptation to a childhood chronic illness as mediated by…
A catchment-scale groundwater model including sewer pipe leakage in an urban system
NASA Astrophysics Data System (ADS)
Peche, Aaron; Fuchs, Lothar; Spönemann, Peter; Graf, Thomas; Neuweiler, Insa
2016-04-01
Keywords: pipe leakage, urban hydrogeology, catchment scale, OpenGeoSys, HYSTEM-EXTRAN. Wastewater leakage from subsurface sewer pipe defects leads to contamination of the surrounding soil and groundwater (Ellis, 2002; Wolf et al., 2004). Leakage rates at pipe defects have to be known in order to quantify contaminant input. Due to the inaccessibility of subsurface pipe defects, direct (in situ) measurements of leakage rates are tedious and associated with a high degree of uncertainty (Wolf, 2006). Previously proposed catchment-scale models simplify leakage rates by neglecting unsaturated zone flow or by reducing spatial dimensions (Karpf & Krebs, 2013; Boukhemacha et al., 2015). In the present study, we present a physically based 3-dimensional numerical model incorporating flow in the pipe network, in the saturated zone and in the unsaturated zone to quantify leakage rates on the catchment scale. The model consists of the pipe network flow model HYSTEM-EXTRAN (itwh, 2002) coupled to the subsurface flow model OpenGeoSys (Kolditz et al., 2012). We also present the newly developed coupling scheme between the two flow models. Leakage functions specific to a pipe defect are derived from simulations of pipe leakage using spatially refined grids around pipe defects. In order to minimize computational effort, these leakage functions are built into the presented numerical model using unrefined grids around pipe defects. The resulting coupled model is capable of efficiently simulating spatially distributed pipe leakage coupled with subsurface water flow in a 3-dimensional environment. References: Boukhemacha, M. A., Gogu, C. R., Serpescu, I., Gaitanaru, D., & Bica, I. (2015). A hydrogeological conceptual approach to study urban groundwater flow in Bucharest city, Romania. Hydrogeology Journal, 23(3), 437-450. doi:10.1007/s10040-014-1220-3. Ellis, J. B., & Revitt, D. M. (2002). Sewer losses and interactions with groundwater quality. Water Science and Technology, 45(3), 195
Cultural Adjustment and the Puerto Rican.
ERIC Educational Resources Information Center
Prewitt-Diaz, Joseph O.
This review of the literature on cultural adjustment is divided into four sections: the nature of cultural adjustment; acculturation as a model of cultural adjustment; psychological responses to acculturation; and a model of cultural adjustment developed by the author as a result of his immigration from Puerto Rico to the United States mainland.…
A model for thermal oxidation of Si and SiC including material expansion
Christen, T.; Ioannidis, A.; Winkelmann, C.
2015-02-28
A model based on drift-diffusion-reaction kinetics for Si and SiC oxidation is discussed, which takes material expansion into account with an additional convection term. The associated velocity field is determined self-consistently from the local reaction rate. The approach allows calculation of the densities of volatile species at nm resolution at the oxidation front. The model is illustrated with simulation results for the growth and impurity redistribution during Si oxidation and for carbon and silicon emission during SiC oxidation. The approach can be useful for the prediction of Si and/or C interstitial distributions, which is particularly relevant for the quality of metal-oxide-semiconductor electronic devices.
A kinematic eddy viscosity model including the influence of density variations and preturbulence
NASA Technical Reports Server (NTRS)
Cohen, L. S.
1973-01-01
A model for the kinematic eddy viscosity was developed which accounts for the turbulence produced as a result of jet interactions between adjacent streams as well as the turbulence initially present in the streams. In order to describe the turbulence contribution from jet interaction, the eddy viscosity suggested by Prandtl was adopted, and a modification was introduced to account for the effect of density variation through the mixing layer. The form of the modification was ascertained from a study of the compressible turbulent boundary layer on a flat plate. A kinematic eddy viscosity relation which corresponds to the initial turbulence contribution was derived by employing arguments used by Prandtl in his mixing length hypothesis. The resulting expression for self-preserving flow is similar to that which describes the mixing of a submerged jet. Application of the model has led to analytical predictions which are in good agreement with available turbulent mixing experimental data.
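A schematic of such a closure, assuming Prandtl's free-shear-layer form with an ad hoc density weighting and an additive preturbulence term, is given below. The functional form of the density factor and all constants are illustrative assumptions, not the expressions derived in the report.

```python
def eddy_viscosity(width, u1, u2, rho1, rho2, kappa=0.01, nu_pre=0.0):
    """Kinematic eddy viscosity sketch: Prandtl's free-shear-layer form
    kappa * b * |u1 - u2| for the jet-interaction contribution, scaled
    by an assumed density-ratio factor (= 1 for equal densities), plus
    a preturbulence contribution nu_pre carried in from the free
    streams. All constants and the density factor are illustrative."""
    density_factor = 2.0 * rho2 / (rho1 + rho2)
    return kappa * width * abs(u1 - u2) * density_factor + nu_pre

# Equal-density mixing layer, with and without initial-stream turbulence
print(eddy_viscosity(1.0, 10.0, 4.0, 1.0, 1.0))
print(eddy_viscosity(1.0, 10.0, 4.0, 1.0, 1.0, nu_pre=0.1))
```

The additive split mirrors the structure described above: one term grows with the shear between adjacent streams, the other represents turbulence already present in the streams before they interact.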
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Dahl, Milo D.
1987-01-01
A finite element model was developed to solve for the acoustic pressure field in a nonhomogeneous region. The derivation from the governing equations assumed that the material properties could vary with position, resulting in a nonhomogeneous variable-property two-dimensional wave equation. This eliminated the necessity of finding the boundary conditions between the different materials. For a two-media region consisting of part air (in the duct) and part bulk absorber (in the wall), a model was used to describe the bulk absorber properties in two directions. An experiment to verify the numerical theory was conducted in a rectangular duct with no flow and absorbing material mounted on one wall. Changes in the sound field, consisting of planar waves, were measured on the wall opposite the absorbing material. As a function of distance along the duct, fairly good agreement was found in the standing wave pattern upstream of the absorber and in the decay of pressure level opposite the absorber.
NASA Astrophysics Data System (ADS)
Lüdde, H. J.; Achenbach, A.; Kalkbrenner, T.; Jankowiak, H. C.; Kirchner, T.
2016-05-01
A recently introduced model to account for geometric screening corrections in an independent-atom-model description of ion-molecule collisions is applied to proton collisions with amino acids and DNA and RNA nucleobases. The correction coefficients are obtained using a pixel counting method (PCM) for the exact calculation of the effective cross-sectional area that emerges when the molecular cross section is pictured as a structure of (overlapping) atomic cross sections. This structure varies with the relative orientation of the molecule with respect to the projectile beam direction and, accordingly, orientation-independent total cross sections are obtained by averaging the pixel count over many orientations. We present net capture and net ionization cross sections over wide ranges of impact energy and analyze the strength of the screening effect by comparing the PCM results with Bragg additivity rule cross sections and with experimental data where available. Work supported by NSERC, Canada.
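The pixel counting idea is straightforward to prototype. A minimal sketch (the atom positions, radii, and grid resolution below are hypothetical, not the paper's molecular geometries): rasterize the projected atomic cross sections and count covered pixels, so overlapping regions are counted once rather than summed as in the Bragg additivity rule.

```python
import numpy as np

def effective_area(centers, radii, grid=400):
    """Pixel-count the area of a union of (possibly overlapping) circles.

    centers: (N, 2) projected atom positions; radii: (N,) cross-section radii.
    A plain sum of circle areas (Bragg additivity) double-counts overlaps;
    counting covered pixels does not.
    """
    centers = np.asarray(centers, float)
    radii = np.asarray(radii, float)
    lo = (centers - radii[:, None]).min(axis=0)   # bounding box of the union
    hi = (centers + radii[:, None]).max(axis=0)
    xs = np.linspace(lo[0], hi[0], grid)
    ys = np.linspace(lo[1], hi[1], grid)
    X, Y = np.meshgrid(xs, ys)
    covered = np.zeros_like(X, dtype=bool)
    for (cx, cy), r in zip(centers, radii):
        covered |= (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    pixel = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return covered.sum() * pixel

# Screening correction: ratio of the union area to the additive sum.
bragg = np.pi * (1.0 ** 2 + 1.0 ** 2)                    # two unit circles, additivity
union = effective_area([(0, 0), (1.0, 0)], [1.0, 1.0])   # same circles, overlapping
print(union / bragg)  # < 1: geometric screening
```

Averaging this area over many random molecular orientations, as the abstract describes, then yields the orientation-independent correction.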
Numerical modelling of the transport of trace gases including methane in the subsurface of Mars
NASA Astrophysics Data System (ADS)
Stevens, Adam H.; Patel, Manish R.; Lewis, Stephen R.
2015-04-01
We model the transport of gas through the martian subsurface in order to quantify the timescales of release of a trace gas with a source at depth using a Fickian model of diffusion through a putative martian regolith column. The model is then applied to the case of methane to determine if diffusive transport of gas can explain previous observations of methane in the martian atmosphere. We investigate which parameters in the model have the greatest effect on transport timescales and show that the calculated diffusivity is very sensitive to the pressure profile of the subsurface, but relatively insensitive to the temperature profile, though diffusive transport may be affected by other temperature dependent properties of the subsurface such as the local vapour pressure. Uncertainties in the structure and physical conditions of the martian subsurface also introduce uncertainties in the timescales calculated. It was found that methane may take several hundred thousand Mars-years to diffuse from a source at depth. Purely diffusive transport cannot explain transient release that varies on timescales of less than one martian year from sources such as serpentinization or methanogenic organisms at depths of more than 2 km. However, diffusion of gas released by the destabilisation of methane clathrate hydrates close to the surface, for example caused by transient mass wasting events or erosion, could produce a rapidly varying flux of methane into the atmosphere of more than 10⁻³ kg m⁻² s⁻¹ over a duration of less than half a martian year, consistent with observations of martian methane variability. Seismic events, magmatic intrusions or impacts could also potentially produce similar patterns of release, but are far more complex to simulate.
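The controlling scaling can be illustrated without the full regolith model. A back-of-the-envelope sketch (the effective diffusivity is an assumed round number, not the paper's pressure- and temperature-dependent value):

```python
# Order-of-magnitude diffusion timescale t ~ L^2 / D for a gas source at depth.
SECONDS_PER_MARS_YEAR = 5.94e7  # ~687 Earth days

def diffusion_timescale_mars_years(depth_m, D_eff_m2_s):
    """Characteristic time for a diffusing gas to traverse a column of
    thickness depth_m with effective diffusivity D_eff_m2_s."""
    return depth_m ** 2 / D_eff_m2_s / SECONDS_PER_MARS_YEAR

# A 2 km deep source with an assumed effective diffusivity of 1e-7 m^2/s:
print(diffusion_timescale_mars_years(2000.0, 1e-7))  # ~6.7e5 Mars-years
```

The quadratic dependence on depth is why near-surface clathrate destabilisation, unlike a deep source, can deliver a rapidly varying flux.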
NASA Technical Reports Server (NTRS)
Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.
2014-01-01
A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominantly within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations, motivating the development of models that yield accurate yet tractable results.
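The length-dependent Weibull sampling step can be sketched as follows (the modulus, reference strength, and lengths are illustrative placeholders, not the paper's SCS-6 calibration):

```python
import numpy as np

def sample_fiber_strengths(n, m, sigma0, L, L0, rng=None):
    """Inverse-transform sampling from a length-dependent Weibull law:
        P_f(sigma; L) = 1 - exp(-(L / L0) * (sigma / sigma0)**m)
    Longer fibers contain more flaws, so their strength distribution
    shifts downward by the factor (L0 / L)**(1/m).
    """
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    return sigma0 * (-np.log1p(-u) * L0 / L) ** (1.0 / m)

strengths = sample_fiber_strengths(
    100000, m=10.0, sigma0=3500.0, L=25.0, L0=25.0, rng=0)
print(strengths.mean())  # close to sigma0 * Gamma(1 + 1/m) ≈ 3330 for m = 10
```

Assigning one such draw to each fiber in the RUC is what breaks the geometric symmetry the abstract warns about.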
The extension of a uniform canopy reflectance model to include row effects
NASA Technical Reports Server (NTRS)
Suits, G. H. (Principal Investigator)
1981-01-01
The effect of row structure is assumed to be caused by the variation in density of vegetation across rows rather than to a profile in canopy height. The calculation of crop reflectance using vegetation density modulation across rows follows a parallel procedure to that for a uniform canopy. Predictions using the row model for wheat show that the effect of changes in sun to row azimuth are greatest in Landsat Band 5 (red band) and can result in underestimation of crop vigor.
Complete spectral analysis of the Jackiw-Rebbi model, including its zero mode
NASA Astrophysics Data System (ADS)
Charmchi, F.; Gousheh, S. S.
2014-01-01
In this paper we present a complete and exact spectral analysis of the (1+1)-dimensional model that Jackiw and Rebbi considered to show that half-integral fermion numbers are possible due to the presence of an isolated self-charge-conjugate zero mode. The model possesses the charge and particle conjugation symmetries. These symmetries mandate the reflection symmetry of the spectrum about the line E=0. We obtain the bound-state energies and wave functions of the fermion in this model using two different methods, analytically and exactly, for every choice of the parameters of the kink, i.e. its value at spatial infinity (θ₀) and its scale of variations (μ). Then, we plot the bound-state energies of the fermion as a function of θ₀. This graph enables us to consider a process of building up the kink from the trivial vacuum. We can then determine the origin and evolution of the bound-state energy levels during this process. We see that the model has a dynamical mass generation process at the first quantized level, and the zero-energy fermionic mode, responsible for the fractional fermion number, is always present during the construction of the kink; its origin is peculiar indeed. We also observe that, as expected, none of the energy levels cross one another. Moreover, we obtain analytically the continuum scattering wave functions of the fermion and then calculate the phase shifts of these wave functions. Using the information contained in the graphs of the phase shifts and the bound states, we show that our phase shifts are consistent with the weak and strong forms of the Levinson theorem. Finally, using the weak form of the Levinson theorem, we confirm that the number of zero-energy fermionic modes is exactly one.
Including network knowledge into Cox regression models for biomarker signature discovery.
Fröhlich, Holger
2014-03-01
Discovery of prognostic and diagnostic biomarker gene signatures for diseases, such as cancer, is seen as a major step toward a better personalized medicine. During the last decade various methods have been proposed for that purpose. However, one important obstacle to making gene signatures a standard tool in clinical diagnosis is the typically low reproducibility of these signatures combined with the difficulty of achieving a clear biological interpretation. For that purpose, there has been growing interest in recent years in approaches that try to integrate information from molecular interaction networks. Most of these methods focus on classification problems, that is, learning a model from data that discriminates patients into distinct clinical groups. Far less has been published on approaches that predict a patient's event risk. In this paper, we investigate eight methods that integrate network information into multivariable Cox proportional hazards models for risk prediction in breast cancer. We compare the prediction performance of our tested algorithms via cross-validation as well as across different datasets. In addition, we highlight the stability and interpretability of the obtained gene signatures. In conclusion, we find GeneRank-based filtering to be a simple, computationally cheap, and highly predictive technique to integrate network information into event time prediction models. Signatures derived via this method are highly reproducible. PMID:24430933
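A minimal version of GeneRank-style scoring, in the spirit of the original algorithm (Morrison et al. 2005): a PageRank iteration whose teleport vector is the normalized differential expression, so a gene's score blends its own evidence with its network neighbours'. The toy network and expression values below are hypothetical.

```python
import numpy as np

def gene_rank(adj, expr, d=0.5, tol=1e-10, max_iter=1000):
    """GeneRank-style scoring: r = (1 - d) * ex + d * A^T D^-1 r,
    where ex is the normalized absolute expression change, A the network
    adjacency matrix, and D the diagonal degree matrix. d = 0 reduces to
    plain expression ranking; d -> 1 approaches pure connectivity."""
    adj = np.asarray(adj, float)
    ex = np.abs(np.asarray(expr, float))
    ex = ex / ex.sum()
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0          # isolated genes keep their own score
    r = ex.copy()
    for _ in range(max_iter):
        r_new = (1 - d) * ex + d * adj.T @ (r / deg)
        if np.abs(r_new - r).max() < tol:
            return r_new
        r = r_new
    return r

# Toy network: gene 2 is weakly expressed but neighbours genes 0 and 1.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(gene_rank(adj, expr=[5.0, 3.0, 0.1]))  # gene 2 is buoyed by its neighbours
```

Filtering then keeps the top-scoring genes as candidate covariates for the Cox model.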
Numerical Modeling of the Surface Fatigue Crack Propagation Including the Closure Effect
NASA Astrophysics Data System (ADS)
Guchinsky, Ruslan; Petinov, Sergei
2016-01-01
Presently, modeling of surface fatigue crack growth for residual life assessment of structural elements is almost entirely based on application of Linear Elastic Fracture Mechanics (LEFM). Generally, it is assumed that the crack front does not essentially change its shape, although this is not always confirmed by experiment. Furthermore, the LEFM approach cannot be applied when the stress singularity vanishes due to material plasticity, one of the leading factors associated with material degradation and fracture. Also, evaluation of stress intensity factors meets difficulties associated with changes in the stress state along the crack front circumference. An approach is proposed for simulating the evolution of surface cracks based on application of the strain-life criterion for fatigue failure and on finite element modeling of damage accumulation. It takes into account the crack closure effect, the nonlinear behavior of damage accumulation, and the increase in material compliance as damage advances. The damage accumulation technique was applied to model semi-elliptical crack growth from an initial defect in a steel compact specimen. The results of the simulation are in good agreement with published experimental data.
NASA Astrophysics Data System (ADS)
Hong, Sung-Kwon; Epureanu, Bogdan I.; Castanier, Matthew P.
2014-09-01
The goal of this work is to develop a numerical model for the vibration of hybrid electric vehicle (HEV) battery packs to enable probabilistic forced response simulations for the effects of variations. There are two important types of variations that affect their structural response significantly: the prestress that is applied when joining the cells within a pack; and the small, random structural property discrepancies among the cells of a battery pack. The main contributions of this work are summarized as follows. In order to account for these two important variations, a new parametric reduced order model (PROM) formulation is derived by employing three key observations: (1) the stiffness matrix can be parameterized for different levels of prestress, (2) the mode shapes of a battery pack with cell-to-cell variation can be represented as a linear combination of the mode shapes of the nominal system, and (3) the frame holding each cell has vibratory motion. A numerical example of an academic battery pack with pouch cells is presented to demonstrate that the PROM captures the effects of both prestress and structural variation on battery packs. The PROM is validated numerically by comparing full-order finite element models (FEMs) of the same systems.
Kinetic model of water disinfection using peracetic acid including synergistic effects.
Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D
2016-01-01
The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors. PMID:26819382
A stepped leader model for lightning including charge distribution in branched channels
Shi, Wei; Zhang, Li; Li, Qingmin
2014-09-14
The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.
Gall, Elliott T; Siegel, Jeffrey A; Corsi, Richard L
2015-04-01
We develop an ozone transport and reaction model to determine reaction probabilities and assess the importance of physical properties such as porosity, pore diameter, and material thickness on reactive uptake of ozone to five materials. The one-dimensional model accounts for molecular diffusion from bulk air to the air-material interface, reaction at the interface, and diffusive transport and reaction through material pore volumes. Material-ozone reaction probabilities that account for internal transport and internal pore area, γ_ipa, are determined by a minimization of residuals between predicted and experimentally derived ozone concentrations. Values of γ_ipa are generally less than effective reaction probabilities (γ_eff) determined previously, likely because of the inclusion of diffusion into substrates and reaction with internal surface area (rather than the use of the horizontally projected external material areas). Estimates of γ_ipa average 1 × 10⁻⁷, 2 × 10⁻⁷, 4 × 10⁻⁵, 2 × 10⁻⁵, and 4 × 10⁻⁷ for two types of cellulose paper, pervious pavement, Portland cement concrete, and an activated carbon cloth, respectively. The transport and reaction model developed here accounts for observed differences in ozone removal to varying thicknesses of the cellulose paper, and estimates a near constant γ_ipa as material thickness increases from 0.02 to 0.16 cm. PMID:25748309
Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai
2015-11-05
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.
NASA Astrophysics Data System (ADS)
Pop, Eric; Dutton, Robert W.; Goodson, Kenneth E.
2004-11-01
We describe the implementation of a Monte Carlo model for electron transport in silicon. The model uses analytic, nonparabolic electron energy bands, which are computationally efficient and sufficiently accurate for future low-voltage (<1 V) nanoscale device applications. The electron-lattice scattering is incorporated using an isotropic, analytic phonon-dispersion model, which distinguishes between the optical/acoustic and the longitudinal/transverse phonon branches. We show that this approach avoids introducing unphysical thresholds in the electron distribution function, and that it has further applications in computing detailed phonon generation spectra from Joule heating. A set of deformation potentials for electron-phonon scattering is introduced and shown to yield accurate transport simulations in bulk silicon across a wide range of electric fields and temperatures. The shear deformation potential is empirically determined at Ξ_u = 6.8 eV, and consequently, the isotropically averaged scattering potentials with longitudinal and transverse acoustic phonons are D_LA = 6.39 eV and D_TA = 3.01 eV, respectively, in reasonable agreement with previous studies. The room-temperature electron mobility in strained silicon is also computed and shown to be in better agreement with the most recent phonon-limited data available. As a result, we find that electron coupling with g-type phonons is about 40% lower, and the coupling with f-type phonons is almost twice as strong as previously reported.
ERIC Educational Resources Information Center
Punamaki, Raija-Leena; Qouta, Samir; El Sarraj, Eyad
1997-01-01
Used path analysis to examine relations between trauma, perceived parenting, resources, political activity, and adjustment in Palestinian 11- and 12-year olds. Found that the more trauma experienced, the more negative parenting the children experienced, the more political activity they showed, and the more they suffered from adjustment problems.…
ERIC Educational Resources Information Center
Liew, Jeffrey; Johnson, Audrea Y.; Smith, Tracy R.; Thoemmes, Felix
2011-01-01
Research Findings: Parental expressivity, child physiological regulation (indexed by respiratory sinus arrhythmia suppression), child behavioral regulation, and child adjustment outcomes were examined in 45 children (M age = 4.32 years, SD = 1.30) and their parents. With the exception of child adjustment (i.e., internalizing and externalizing…
Medical Adjustment Services for the Severely Handicapped
ERIC Educational Resources Information Center
Carter, R. Edward
1978-01-01
Management of a spinal cord injury is used as a model to discuss the medical adjustment problems occurring with severe physical handicaps. Topics include the stages of preadmission/admission rehabilitation evaluation, comprehensive rehabilitation treatment, patient communication, patient and family conference, and discharge and follow-up. This…
Extending Galactic Habitable Zone Modeling to Include the Emergence of Intelligent Life.
Morrison, Ian S; Gowanlock, Michael G
2015-08-01
Previous studies of the galactic habitable zone have been concerned with identifying those regions of the Galaxy that may favor the emergence of complex life. A planet is deemed habitable if it meets a set of assumed criteria for supporting the emergence of such complex life. In this work, we extend the assessment of habitability to consider the potential for life to further evolve to the point of intelligence, termed the propensity for the emergence of intelligent life, φ_I. We assume φ_I is strongly influenced by the time durations available for evolutionary processes to proceed undisturbed by the sterilizing effects of nearby supernovae. The times between supernova events provide windows of opportunity for the evolution of intelligence. We developed a model that allows us to analyze these window times to generate a metric for φ_I, and we examine here the spatial and temporal variation of this metric. Even under the assumption that long time durations are required between sterilizations to allow for the emergence of intelligence, our model suggests that the inner Galaxy provides the greatest number of opportunities for intelligence to arise. This is due to the substantially higher number density of habitable planets in this region, which outweighs the effects of a higher supernova rate in the region. Our model also shows that φ_I is increasing with time. Intelligent life emerged at approximately the present time at Earth's galactocentric radius, but a similar level of evolutionary opportunity was available in the inner Galaxy more than 2 Gyr ago. Our findings suggest that the inner Galaxy should logically be a prime target region for searches for extraterrestrial intelligence and that any civilizations that may have emerged there are potentially much older than our own. PMID:26274865
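The window-counting idea can be sketched under a simplifying assumption the paper does not necessarily make, namely that sterilizing supernovae arrive as a Poisson process (all rates and the density weighting below are illustrative, not the paper's Galactic model):

```python
import math

def opportunity_count(rate_per_gyr, t_span_gyr, t_evolve_gyr):
    """Expected number of inter-event windows longer than t_evolve for a
    Poisson process: (rate * span) gaps, each exceeding t_evolve with
    probability exp(-rate * t_evolve)."""
    n_gaps = rate_per_gyr * t_span_gyr
    return n_gaps * math.exp(-rate_per_gyr * t_evolve_gyr)

# Inner Galaxy: assume 10x the habitable-planet density but 4x the event rate.
inner = 10.0 * opportunity_count(rate_per_gyr=2.0, t_span_gyr=10, t_evolve_gyr=1.0)
outer = 1.0 * opportunity_count(rate_per_gyr=0.5, t_span_gyr=10, t_evolve_gyr=1.0)
print(inner > outer)  # the density factor can outweigh the higher event rate
```

Even in this caricature, the higher planet density dominates the penalty from more frequent sterilizations, mirroring the abstract's conclusion.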
NASA Astrophysics Data System (ADS)
Weber, James Daniel
1999-11-01
This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is
NASA Technical Reports Server (NTRS)
Berglund, Judith
2007-01-01
Approximately 2-3 billion metric tons of soil dust are estimated to be transported in the Earth's atmosphere each year. Global transport of desert dust is believed to play an important role in many geochemical, climatological, and environmental processes. This dust carries minerals and nutrients, but it has also been shown to carry pollutants and viable microorganisms capable of harming human, animal, plant, and ecosystem health. Saharan dust, which impacts the eastern United States (especially Florida and the southeast) and U.S. Territories in the Caribbean primarily during the summer months, has been linked to increases in respiratory illnesses in this region and has been shown to carry other human, animal, and plant pathogens. For these reasons, this candidate solution recommends integrating Saharan dust distribution and concentration forecasts from the NASA GOCART global dust cycle model into a public health DSS (decision support system), such as the CDC's (Centers for Disease Control and Prevention's) EPHTN (Environmental Public Health Tracking Network), for the eastern United States and Caribbean for early warning purposes regarding potential increases in respiratory illnesses or asthma attacks, potential disease outbreaks, or bioterrorism. This candidate solution pertains to the Public Health National Application but also has direct connections to Air Quality and Homeland Security. In addition, the GOCART model currently uses the NASA MODIS aerosol product as an input and uses meteorological forecasts from the NASA GEOS-DAS (Goddard Earth Observing System Data Assimilation System) GEOS-4 AGCM. In the future, VIIRS aerosol products and perhaps CALIOP aerosol products could be assimilated into the GOCART model to improve the results.
Double pendulum model for a tennis stroke including a collision process
NASA Astrophysics Data System (ADS)
Youn, Sun-Hyun
2015-10-01
By adding a collision process between the ball and the racket to the double pendulum model, we analyzed the tennis stroke. The ball and racket system may be accelerated during the collision time; thus, the speed of the rebound ball does not simply depend on the angular velocity of the racket. A higher angular velocity sometimes gives a lower rebound ball speed. We numerically showed that a properly time-lagged racket rotation increased the speed of the rebound ball by 20%. We also showed that the elbow should move in the proper direction in order to add to the angular velocity of the racket.
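The baseline dependence on racket-head speed can be isolated with a rigid, instantaneous collision model (the effective mass and restitution values are assumed; this caricature cannot reproduce the time-lag effect the authors analyze, which needs the full double pendulum dynamics during the finite collision time):

```python
def rebound_speed(v_ball, v_racket, m_ball=0.057, m_eff=0.30, e=0.75):
    """1D two-body collision with coefficient of restitution e.
    m_eff is an assumed effective mass of the racket at the impact point
    (less than the full racket mass). Positive velocities point away
    from the player."""
    return v_ball + (1 + e) * m_eff / (m_ball + m_eff) * (v_racket - v_ball)

# Ball arrives at 10 m/s; racket head moves forward at 20 m/s at impact:
print(rebound_speed(v_ball=-10.0, v_racket=20.0))  # ≈ 34.1 m/s
```

In this instantaneous limit the rebound speed grows monotonically with head speed; the paper's point is that the finite collision time and pendulum timing break that monotonicity.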
NASA Technical Reports Server (NTRS)
Johnson, W.
1974-01-01
An analytical model is developed for proprotor aircraft dynamics. The rotor model includes coupled flap-lag bending modes and blade torsion degrees of freedom. The rotor aerodynamic model is generally valid for high and low inflow, and for axial and nonaxial flight. For the rotor support, a cantilever wing is considered; incorporation of a more general support with this rotor model will be straightforward.
NASA Astrophysics Data System (ADS)
Donelan, M. A.; Soloviev, A. V.
2016-05-01
A mixing length model for air-water gas transfer is developed to include the effects of wave breaking. The model requires both the shear velocity induced by the wind and the integrated wave dissipation. Both of these can be calculated for tanks and oceans by a full spectrum wave model. The gas transfer model is calibrated with laboratory tank measurements of carbon dioxide flux and then applied to oceanic conditions to yield air-sea transfer velocity versus wind speed.
NASA Astrophysics Data System (ADS)
Janssen, Gijs M. C. M.; Valstar, Johan R.; van der Zee, Sjoerd E. A. T. M.
2008-02-01
Traveltime determinations have found increasing application in the characterization of groundwater systems. No algorithms are available, however, to optimally design sampling strategies including this information type. We propose a first-order methodology to include groundwater age or tracer arrival time determinations in measurement network design and apply the methodology in an illustrative example in which the network design is directed at contaminant breakthrough uncertainty minimization. We calculate linearized covariances between potential measurements and the goal variables of which we want to reduce the uncertainty: the groundwater age at the control plane and the breakthrough locations of the contaminant. We assume the traveltime to be lognormally distributed and therefore log-transform the age determinations in compliance with the adopted Bayesian framework. Accordingly, we derive expressions for the linearized covariances between the transformed age determinations and the parameters and states. In our synthetic numerical example, the derived expressions are shown to provide good first-order predictions of the variance of the natural logarithm of groundwater age if the variance of the natural logarithm of the conductivity is less than 3.0. The calculated covariances can be used to predict the posterior breakthrough variance belonging to a candidate network before samples are taken. A Genetic Algorithm is used to efficiently search, among all candidate networks, for a near-optimal one. We show that, in our numerical example, an age estimation network outperforms (in terms of breakthrough uncertainty reduction) equally sized head measurement networks and conductivity measurement networks even if the age estimations are highly uncertain.
A model of plasma current through a hole of Rogowski probe including sheath effects
NASA Astrophysics Data System (ADS)
Furui, H.; Ejiri, A.; Nagashima, Y.; Takase, Y.; Sonehara, M.; Tsujii, N.; Yamaguchi, T.; Shinya, T.; Togashi, H.; Homma, H.; Nakamura, K.; Takeuchi, T.; Yajima, S.; Yoshida, Y.; Toida, K.; Takahashi, W.; Yamazaki, H.
2016-04-01
In TST-2 Ohmic discharges, local current is measured using a Rogowski probe by changing the angle between the local magnetic field and the direction of the hole of the Rogowski probe. The angular dependence shows a peak when the direction of the hole is almost parallel to the local magnetic field. The obtained width of the peak was broader than that of the theoretical curve expected from the probe geometry. In order to explain this disagreement, we consider the effect of the sheath in the vicinity of the Rogowski probe. A sheath model was constructed and electron orbits were numerically calculated. From the calculation, it was found that the electron orbit is affected by E × B drift due to the sheath electric field. Such orbits cause the broadening of the peak in the angular dependence, and the dependence agrees with the experimental results. The dependence of the broadening on various plasma parameters was studied numerically and explained qualitatively by a simplified analytical model.
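The drift responsible for the broadening is the standard guiding-centre E × B drift, which is simple to evaluate (the field magnitudes below are hypothetical, chosen only to set a scale):

```python
import numpy as np

def exb_drift(E, B):
    """Guiding-centre E x B drift velocity, v = (E x B) / |B|^2.
    Independent of particle charge and mass, so all species drift together."""
    E = np.asarray(E, float)
    B = np.asarray(B, float)
    return np.cross(E, B) / np.dot(B, B)

# A sheath field of 1e4 V/m perpendicular to a 0.1 T local field:
print(exb_drift([1e4, 0, 0], [0, 0, 0.1]))  # ≈ [0, -1e5, 0] m/s
```

A drift of this order deflects electron orbits near the probe hole, which is qualitatively how the sheath widens the angular acceptance.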
Regner, K. T.; Wei, L. C.; Malen, J. A.
2015-12-21
We develop a solution to the two-temperature diffusion equation in axisymmetric cylindrical coordinates to model heat transport in thermoreflectance experiments. Our solution builds upon prior solutions that account for two-channel diffusion in each layer of an N-layered geometry, but adds the ability to deposit heat at any location within each layer. We use this solution to account for non-surface heating in the transducer layer of thermoreflectance experiments that challenge the timescales of electron-phonon coupling. A sensitivity analysis is performed to identify important parameters in the solution and to establish a guideline for when to use the two-temperature model to interpret thermoreflectance data. We then fit broadband frequency domain thermoreflectance (BB-FDTR) measurements of SiO2 and platinum at a temperature of 300 K with our two-temperature solution to parameterize the gold/chromium transducer layer. We then refit BB-FDTR measurements of silicon and find that accounting for non-equilibrium between electrons and phonons in the gold layer does lessen the previously observed heating frequency dependence reported in Regner et al. [Nat. Commun. 4, 1640 (2013)] but does not completely eliminate it. We perform BB-FDTR experiments on silicon with an aluminum transducer and find limited heating frequency dependence, in agreement with time domain thermoreflectance results. We hypothesize that the discrepancy between thermoreflectance measurements with different transducers results in part from spectrally dependent phonon transmission at the transducer/silicon interface.
Usmanov, Arcadi V.; Matthaeus, William H.; Goldstein, Melvyn L.
2012-07-20
To study the effects of interstellar pickup protons and turbulence on the structure and dynamics of the solar wind, we have developed a fully three-dimensional magnetohydrodynamic solar wind model that treats interstellar pickup protons as a separate fluid and incorporates the transport of turbulence and turbulent heating. The governing system of equations combines the mean-field equations for the solar wind plasma, magnetic field, and pickup protons and the turbulence transport equations for the turbulent energy, normalized cross-helicity, and correlation length. The model equations account for photoionization of interstellar hydrogen atoms and their charge exchange with solar wind protons, energy transfer from pickup protons to solar wind protons, and plasma heating by turbulent dissipation. Separate mass and energy equations are used for the solar wind and pickup protons, though a single momentum equation is employed under the assumption that the pickup protons are comoving with the solar wind protons. We compute the global structure of the solar wind plasma, magnetic field, and turbulence in the region from 0.3 to 100 AU for a source magnetic dipole on the Sun tilted by 0°-90° and compare our results with Voyager 2 observations. The results computed with and without pickup protons are superposed to evaluate quantitatively the deceleration and heating effects of pickup protons, the overall compression of the magnetic field in the outer heliosphere caused by deceleration, and the weakening of corotating interaction regions by the thermal pressure of pickup protons.
Merchant, Thomas E. E-mail: thomas.merchant@stjude.org; Kiehna, Erin N.; Li Chenghong; Shukla, Hemant; Sengupta, Saikat; Xiong Xiaoping; Gajjar, Amar; Mulhern, Raymond K.
2006-05-01
Purpose: Model the effects of radiation dosimetry on IQ among pediatric patients with central nervous system (CNS) tumors. Methods and Materials: Pediatric patients with CNS embryonal tumors (n = 39) were prospectively evaluated with serial cognitive testing, before and after treatment with postoperative, risk-adapted craniospinal irradiation (CSI) and conformal primary-site irradiation, followed by chemotherapy. Differential dose-volume data for 5 brain volumes (total brain, supratentorial brain, infratentorial brain, and left and right temporal lobes) were correlated with IQ after surgery and at follow-up by use of linear regression. Results: When the dose distribution was partitioned into 2 levels, both had a significantly negative effect on longitudinal IQ across all 5 brain volumes. When the dose distribution was partitioned into 3 levels (low, medium, and high), exposure to the supratentorial brain appeared to have the most significant impact. For most models, each Gy of exposure had a similar effect on IQ decline, regardless of dose level. Conclusions: Our results suggest that radiation dosimetry data from 5 brain volumes can be used to predict decline in longitudinal IQ. Despite measures to reduce radiation dose and treatment volume, the volume that receives the highest dose continues to have the greatest effect, which supports current volume-reduction efforts.
Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi
2014-03-01
We propose a hierarchical reduction scheme to cope with coupled rate equations that describe the dynamics of multi-time-scale photosynthetic reactions. To numerically solve nonlinear dynamical equations containing a wide temporal range of rate constants, we first study a prototypical three-variable model. Using a separation of the time scale of rate constants combined with identified slow variables as (quasi-)conserved quantities in the fast process, we achieve a coarse-graining of the dynamical equations reduced to those at a slower time scale. By iteratively employing this reduction method, the coarse-graining of broadly multi-scale dynamical equations can be performed in a hierarchical manner. We then apply this scheme to the reaction dynamics analysis of a simplified model for an illuminated photosystem II, which involves many processes of electron and excitation-energy transfers with a wide range of rate constants. We thus confirm a good agreement between the coarse-grained and fully (finely) integrated results for the population dynamics. PMID:24418347
NASA Technical Reports Server (NTRS)
Usmanov, Arcadi V.; Goldstein, Melvyn L.; Matthaeus, William H.
2012-01-01
To study the effects of interstellar pickup protons and turbulence on the structure and dynamics of the solar wind, we have developed a fully three-dimensional magnetohydrodynamic solar wind model that treats interstellar pickup protons as a separate fluid and incorporates the transport of turbulence and turbulent heating. The governing system of equations combines the mean-field equations for the solar wind plasma, magnetic field, and pickup protons and the turbulence transport equations for the turbulent energy, normalized cross-helicity, and correlation length. The model equations account for photoionization of interstellar hydrogen atoms and their charge exchange with solar wind protons, energy transfer from pickup protons to solar wind protons, and plasma heating by turbulent dissipation. Separate mass and energy equations are used for the solar wind and pickup protons, though a single momentum equation is employed under the assumption that the pickup protons are comoving with the solar wind protons. We compute the global structure of the solar wind plasma, magnetic field, and turbulence in the region from 0.3 to 100 AU for a source magnetic dipole on the Sun tilted by 0°-90° and compare our results with Voyager 2 observations. The results computed with and without pickup protons are superposed to evaluate quantitatively the deceleration and heating effects of pickup protons, the overall compression of the magnetic field in the outer heliosphere caused by deceleration, and the weakening of corotating interaction regions by the thermal pressure of pickup protons.
The evolution of massive stars including mass loss - Presupernova models and explosion
NASA Technical Reports Server (NTRS)
Woosley, S. E.; Langer, Norbert; Weaver, Thomas A.
1993-01-01
The evolution of massive stars of 35, 40, 60, and 85 solar masses is followed through all stages of nuclear burning to the point of Fe core collapse. Critical nuclear reaction and mass-loss rates are varied. Efficient mass loss during the Wolf-Rayet (WR) stage is likely to lead to final masses as small as 4 solar masses. For a reasonable parameterization of the mass loss, there may be convergence of all WR stars, both single and in binaries, to a narrow band of small final masses. Our representative model, a 4.25 solar-mass WR presupernova derived from a 60 solar mass star, is followed through a simulated explosion, and its explosive nucleosynthesis and light curve are determined. Its properties are similar to those observed in Type Ib supernovae. The effects of the initial mass and mass loss on the presupernova structure of small mass WR models is also explored. Important properties of the presupernova star and its explosion can only be obtained by following the complete evolution starting on the main sequence.
An Improved Heat Budget Estimation Including Bottom Effects for General Ocean Circulation Models
NASA Technical Reports Server (NTRS)
Carder, Kendall; Warrior, Hari; Otis, Daniel; Chen, R. F.
2001-01-01
This paper studies the effects of the underwater light field on heat-budget calculations of general ocean circulation models for shallow waters. The presence of a bottom significantly alters the estimated heat budget in shallow waters, which affects the corresponding thermal stratification and hence modifies the circulation. Based on the data collected during the COBOP field experiment near the Bahamas, we have used a one-dimensional turbulence closure model to show the influence of the bottom reflection and absorption on the sea surface temperature field. The water depth has an almost one-to-one correlation with the temperature rise. Varying the bottom albedo by replacing the sea grass bed with a coral sand bottom also has an appreciable effect on the heat budget of the shallow regions. We believe that the differences in the heat budget for the shallow areas will have an influence on the local circulation processes and especially on the evaporative and long-wave heat losses for these areas. The ultimate effects on humidity and cloudiness of the region are expected to be significant as well.
Modelling of metal vapour in pulsed TIG including influence of self-absorption
NASA Astrophysics Data System (ADS)
Iwao, Toru; Mori, Yusuke; Okubo, Masato; Sakai, Tadashi; Tashiro, Shinichi; Tanaka, Manabu; Yumoto, Motoshige
2010-11-01
Pulsed TIG (tungsten inert gas) welding is used to improve the stability and speed of arc welding, and to allow greater control over the heat input to the weld. The temperature and the radiation power density of the pulsed arc vary as a function of time, as does the distribution of metal vapour, and its effects on the arc. A self-consistent two-dimensional model of the arc and electrodes is used to calculate the properties of the arc as a function of time. Self-absorption of radiation is treated by two methods, one taking into account absorption of radiation only within the control volume of emission, and the other taking into account absorption throughout the plasma. The relation between metal vapour and radiation power density is analysed by calculating the iron vapour distribution. The results show that the transport of iron vapour is strongly affected by the fast convective flow during the peak current period. During the base current period, the region containing a low concentration of metal vapour expands because of the low convective flow. The iron vapour distribution does not closely follow the current pulses. The temperature, iron vapour and radiation power density distributions depend on the self-absorption model used. The temperature distribution becomes broader when self-absorption of radiation from all directions is considered.
NASA Astrophysics Data System (ADS)
Seiß, M.; Spahn, F.; Schmidt, Jürgen
2010-11-01
Saturn's rings host two known moons, Pan and Daphnis, which are massive enough to clear circumferential gaps in the ring around their orbits. Both moons create wake patterns at the gap edges by gravitational deflection of the ring material (Cuzzi, J.N., Scargle, J.D. [1985]. Astrophys. J. 292, 276-290; Showalter, M.R., Cuzzi, J.N., Marouf, E.A., Esposito, L.W. [1986]. Icarus 66, 297-323). New Cassini observations revealed that these wavy edges deviate from the sinusoidal waveform, which one would expect from a theory that assumes a circular orbit of the perturbing moon and neglects particle interactions. Resonant perturbations of the edges by moons outside the ring system, as well as an eccentric orbit of the embedded moon, may partly explain this behavior (Porco, C.C., and 34 colleagues [2005]. Science 307, 1226-1236; Tiscareno, M.S., Burns, J.A., Hedman, M.M., Spitale, J.N., Porco, C.C., Murray, C.D., and the Cassini Imaging team [2005]. Bull. Am. Astron. Soc. 37, 767; Weiss, J.W., Porco, C.C., Tiscareno, M.S., Burns, J.A., Dones, L. [2005]. Bull. Am. Astron. Soc. 37, 767; Weiss, J.W., Porco, C.C., Tiscareno, M.S. [2009]. Astron. J. 138, 272-286). Here we present an extended non-collisional streamline model which accounts for both effects. We describe the resulting variations of the density structure and the modification of the nonlinearity parameter q. Furthermore, an estimate is given for the applicability of the model. We use the streamwire model introduced by Stewart (Stewart, G.R. [1991]. Icarus 94, 436-450) to plot the perturbed ring density at the gap edges. We apply our model to the Keeler gap edges undulated by Daphnis and to a faint ringlet in the Encke gap close to the orbit of Pan. The modulations of the latter ringlet, induced by the perturbations of Pan (Burns, J.A., Hedman, M.M., Tiscareno, M.S., Nicholson, P.D., Streetman, B.J., Colwell, J.E., Showalter, M.R., Murray, C.D., Cuzzi, J.N., Porco, C.C., and the Cassini ISS team [2005]. Bull. Am
NASA Astrophysics Data System (ADS)
Bixio, A.; Gambolati, G.; Paniconi, C.; Putti, M.; Shestopalov, V.; Bublias, V.; Bohuslavsky, A.; Kasteltseva, N.; Rudenko, Y.
2002-06-01
Morphogenetic depressions or "dishes" in the Chernobyl exclusion zone play an important role in the transport of water and solutes (in particular the radionuclides 137Cs and 90Sr), functioning as accumulation basins and facilitating their transfer between the surface and subsurface via return flow (under conditions of high soil water saturation) and infiltration. From a digital elevation model (DEM) of the 112-km2 study area, 583 dishes (covering about 10% of the area) are identified and classified into four geometric types, ranging in size from 2,500 to 22,500 m2, and with a maximum depth of 2 m. The collective influence of these depressions on the hydrology of the study basin is investigated with a coupled model of three-dimensional saturated and unsaturated subsurface flow and one-dimensional (along the rill or channel direction s) hill-slope and stream overland flow. Special attention is given to the handling of dishes, applying a "lake boundary-following" procedure in the topographic analysis, a level pool routing algorithm to simulate the storage and retardation effects of these reservoirs, and a higher hydraulic conductivity in the topmost 3 m of soil relative to non-dish cells in accordance with field observations. Modeling the interactions between the surface and subsurface hydrologic regimes requires careful consideration of the distinction between potential and actual atmospheric fluxes and their conversion to ponding, overland flow, and infiltration, and this coupling is described in some detail. Further consideration is given to the treatment of snow accumulation, snowmelt, and soil freezing and thawing processes, handled via linear and step function variations over the winter months in atmospheric boundary conditions and in upper soil hydraulic conductivities. A 1-year simulation of the entire watershed is used to analyze the water table response and, at the surface, the ponding heads and the infiltration/exfiltration fluxes. Saturation patterns and
Energy-based fatigue model for shape memory alloys including thermomechanical coupling
NASA Astrophysics Data System (ADS)
Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong
2016-03-01
This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well.
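The criterion described above, crack initiation once the energy stored per cycle accumulates to a critical value, with a loading-rate dependence, can be sketched numerically. The loop shape, the stored-energy fraction, the rate exponent, and all numerical values below are invented for illustration and are not the paper's calibration:

```python
import numpy as np

def hysteresis_energy(stress, strain):
    """Shoelace area of one stabilized stress-strain loop (J/m^3)."""
    return 0.5 * abs(np.sum(stress * np.roll(strain, -1)
                            - strain * np.roll(stress, -1)))

def cycles_to_failure(W_cyc, strain_rate, W_crit=8.0e7, beta=0.3, rate_ref=1e-4):
    """Assumed criterion: a rate-dependent fraction of the hysteresis energy
    W_cyc is stored each cycle; failure when the stored energy reaches W_crit."""
    stored_per_cycle = W_cyc * beta * (strain_rate / rate_ref) ** 0.25
    return W_crit / stored_per_cycle

# Synthetic pseudoelastic loop: 0-6% strain with an elliptical hysteresis
# component superposed on a linear stress-strain backbone.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
strain = 0.03 * (1 + np.sin(theta))
stress = 4.0e8 * strain / 0.06 + 5.0e7 * np.cos(theta)

W = hysteresis_energy(stress, strain)
N_slow = cycles_to_failure(W, strain_rate=1e-4)
N_fast = cycles_to_failure(W, strain_rate=1e-2)
assert N_fast < N_slow   # higher strain rate -> shorter predicted fatigue life
```

Note that the linear backbone contributes nothing to the shoelace area, so W measures only the irreversible hysteresis energy of the cycle, which is the quantity the criterion above is built on.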
ECO: A Generic Eutrophication Model Including Comprehensive Sediment-Water Interaction
Smits, Johannes G. C.; van Beek, Jan K. L.
2013-01-01
The content and calibration of the comprehensive generic 3D eutrophication model ECO for water and sediment quality are presented. Based on a computational grid for water and sediment, ECO is used as a tool for water quality management to simulate concentrations and mass fluxes of nutrients (N, P, Si), phytoplankton species, detrital organic matter, electron acceptors and related substances. ECO combines integral simulation of water and sediment quality with sediment diagenesis and closed mass balances. Its advanced process formulations for substances in the water column and the bed sediment were developed to allow for a much more dynamic calculation of the sediment-water exchange fluxes of nutrients resulting from steep concentration gradients across the sediment-water interface than is possible with other eutrophication models. ECO is designed to calculate the accumulation of organic matter and nutrients in the sediment more accurately, and to allow for more accurate prediction of phytoplankton biomass and water quality in response to mitigative measures such as nutrient load reduction. ECO was calibrated for shallow Lake Veluwe (The Netherlands). Due to restoration measures this lake underwent a transition from hypertrophic conditions to moderately eutrophic conditions, leading to the extensive colonization by submerged macrophytes. ECO reproduces observed water quality well for the transition period of ten years. The values of its process coefficients are in line with ranges derived from literature. ECO’s calculation results underline the importance of redox processes and phosphate speciation for the nutrient return fluxes. Among other things, the results suggest that authigenic formation of a stable apatite-like mineral in the sediment can contribute significantly to oligotrophication of a lake after a phosphorus load reduction. PMID:23844160
Mahran, Yossra; Schueler, Robert; Weber, Marcel; Pizarro, Carmen; Nickenig, Georg; Skowasch, Dirk; Hammerstingl, Christoph
2016-01-01
AIM To find parameters from transthoracic echocardiography (TTE) including speckle-tracking (ST) analysis of the right ventricle (RV) to identify precapillary pulmonary hypertension (PH). METHODS Forty-four patients with suspected PH undergoing right heart catheterization (RHC) were consecutively included (mean age 63.1 ± 14 years, 61% male). All patients underwent standardized TTE including ST analysis of the RV. Based on the subsequent TTE-derived measurements, the presence of PH was assessed: Left ventricular ejection fraction (LVEF) was calculated by Simpson's rule from 4Ch. Systolic pulmonary artery pressure (sPAP) was assessed with continuous wave Doppler of systolic tricuspid regurgitant velocity and regarded as raised with values ≥ 30 mmHg as a surrogate parameter for RA pressure. A concomitantly elevated PCWP was considered a means to discriminate between the precapillary and postcapillary form of PH. PCWP was considered elevated when the E/e’ ratio was > 12 as a surrogate for LV diastolic pressure. E/e’ ratio was measured by gauging systolic and diastolic velocities of the lateral and septal mitral valve annulus using TDI mode. The results were then averaged with conventional measurement of mitral valve inflow. Furthermore, functional testing with six minutes walking distance (6MWD), ECG-RV stress signs, NT pro-BNP and other laboratory values were assessed. RESULTS PH was confirmed in 34 patients (precapillary PH, n = 15, postcapillary PH, n = 19). TTE showed significant differences in E/e’ ratio (precapillary PH: 12.3 ± 4.4, postcapillary PH: 17.3 ± 10.3, no PH: 12.1 ± 4.5, P = 0.02), LV volumes (ESV: 25.0 ± 15.0 mL, 49.9 ± 29.5 mL, 32.2 ± 13.6 mL, P = 0.027; EDV: 73.6 ± 24.0 mL, 110.6 ± 31.8 mL, 87.8 ± 33.0 mL, P = 0.021) and systolic pulmonary arterial pressure (sPAP: 61.2 ± 22.3 mmHg, 53.6 ± 20.1 mmHg, 31.2 ± 24.6 mmHg, P = 0.001). STRV analysis showed significant differences for apical RV longitudinal strain (RVAS: -7.5% ± 5
Development of a new fertility prediction model for stallion semen, including flow cytometry.
Barrier Battut, I; Kempfer, A; Becker, J; Lebailly, L; Camugli, S; Chevrier, L
2016-09-01
Several laboratories routinely use flow cytometry to evaluate stallion semen quality. However, objective and practical tools for the on-field interpretation of data concerning fertilizing potential are scarce. A panel of nine tests, evaluating a large number of compartments or functions of the spermatozoa: motility, morphology, viability, mitochondrial activity, oxidation level, acrosome integrity, DNA integrity, "organization" of the plasma membrane, and hypoosmotic resistance, was applied to a population of 43 stallions, 33 of which showed widely differing fertilities (19%-84% pregnancy rate per cycle [PRC]). Analyses were performed either within 2 hours after semen collection or after 24-hour storage at 4 °C in INRA96 extender, on three to six ejaculates for each stallion. The aim was to provide data on the distribution of values within this population, showing within-stallion and between-stallion variability, and to determine whether appropriate combinations of tests could evaluate the fertilizing potential of each stallion. Within-stallion repeatability, defined as intrastallion correlation (r = between-stallion variance/total variance) ranged between 0.29 and 0.84 for "conventional" variables (viability, morphology, and motility), and between 0.15 and 0.81 for "cytometric" variables. Those data suggested that analyzing six ejaculates would be adequate to characterize a stallion. For most variables, except those related to DNA integrity and some motility variables, results differed significantly between immediately performed analyses and analyses performed after 24 hours at 4 °C. Two "best-fit" combinations of variables were determined. Factorial discriminant analysis using a first combination of seven variables, including the polarization of mitochondria, acrosome integrity, DNA integrity, and hypoosmotic resistance, permitted exact determination of the fertility group for each stallion: fertile, that is, PRC higher than 55%; intermediate, that is, 45
NASA Astrophysics Data System (ADS)
McLarty, Dustin Fogle
Distributed energy systems are a promising means by which to reduce both emissions and costs. Continuous generators must be responsive and highly efficient to support building dynamics and intermittent on-site renewable power. Fuel cell -- gas turbine hybrids (FC/GT) are fuel-flexible generators capable of ultra-high efficiency, ultra-low emissions, and rapid power response. This work undertakes a detailed study of the electrochemistry, chemistry and mechanical dynamics governing the complex interaction between the individual systems in such a highly coupled hybrid arrangement. The mechanisms leading to the compressor stall/surge phenomena are studied for the increased risk posed to particular hybrid configurations. A novel fuel cell modeling method is introduced that captures various spatial resolutions, flow geometries, stack configurations and novel heat transfer pathways. Several promising hybrid configurations are analyzed throughout the work and a sensitivity analysis of seven design parameters is conducted. A simple estimating method is introduced for the combined system efficiency of a fuel cell and a turbine using component performance specifications. Existing solid oxide fuel cell technology is capable of hybrid efficiencies greater than 75% (LHV) operating on natural gas, and existing molten carbonate systems greater than 70% (LHV). A dynamic model is calibrated to accurately capture the physical coupling of a FC/GT demonstrator tested at UC Irvine. The 2900-hour experiment highlighted the sensitivity to small perturbations and a need for additional control development. Further sensitivity studies outlined the responsiveness and limits of different control approaches. The capability for substantial turn-down and load following through speed control and flow bypass with minimal impact on internal fuel cell thermal distribution is particularly promising to meet local demands or provide dispatchable support for renewable power. Advanced control and dispatch
NASA Astrophysics Data System (ADS)
Roy, Sankar Kumar; Roy, Banani
In this article, a prey-predator system with a Holling type II functional response for the predator population, including a prey refuge region, has been analyzed. Also a harvesting effort has been considered for the predator population. The density-dependent mortality rate for the prey, predator and super predator has been considered. The equilibria of the proposed system have been determined. Local and global stabilities for the system have been discussed. We have used the analytic approach to derive the global asymptotic stabilities of the system. The maximal predator per capita consumption rate has been considered as a bifurcation parameter to evaluate Hopf bifurcation in the neighborhood of the interior equilibrium point. Also, we have used the fishing effort to harvest the predator population of the system as a control to develop a dynamic framework to investigate the optimal utilization of the resource, the sustainability properties of the stock, and the resource rent earned from the resource. Finally, we have presented some numerical simulations to verify the analytic results, and the system has been analyzed through graphical illustrations.
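A minimal numerical sketch of this kind of system is given below, reduced to two species for brevity (the paper also includes a super predator): logistic prey growth, a Holling type II response acting only on the prey outside the refuge fraction m, density-dependent predator mortality, and a harvesting term qEy. All parameter values are illustrative assumptions:

```python
import numpy as np

r, K = 1.0, 10.0          # prey growth rate and carrying capacity
a, h, c = 0.8, 0.5, 0.6   # attack rate, handling time, conversion efficiency
m = 0.3                   # fraction of the prey population in the refuge
d, delta = 0.2, 0.05      # predator linear and density-dependent mortality
q, E = 0.1, 1.0           # catchability and harvesting effort

def rhs(z):
    x, y = z
    # Holling type II response on the (1 - m)x prey outside the refuge
    response = a * (1 - m) * x / (1 + a * h * (1 - m) * x)
    return np.array([r * x * (1 - x / K) - response * y,
                     c * response * y - d * y - delta * y**2 - q * E * y])

def rk4(z, dt, steps):
    """Classical fourth-order Runge-Kutta integration of the ODE system."""
    traj = [z]
    for _ in range(steps):
        k1 = rhs(z); k2 = rhs(z + dt / 2 * k1)
        k3 = rhs(z + dt / 2 * k2); k4 = rhs(z + dt * k3)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(z)
    return np.array(traj)

traj = rk4(np.array([5.0, 2.0]), dt=0.01, steps=20000)
assert np.all(traj > 0)   # populations remain positive and bounded
```

Sweeping the harvesting effort E (or the consumption-rate parameter a) in such a simulation is the usual way to visualize the Hopf bifurcation and the harvesting trade-offs the abstract describes.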
A simple computer model of pellet/cladding interaction including stress corrosion cracking
Yaung, J.Y.; Okrent, D.; Wazzan, A.R.
1985-12-01
Many unexpected failures of light water reactor fuel cladding (Zircaloy), occurring below design criteria, have been found during operational power ramps. Such a fuel rod failure can result from pellet/cladding mechanical interaction (PCMI) assisted by fission product stress corrosion cracking (SCC) of the Zircaloy tubing. A deterministic PCMI/SCC model has been coupled with the steady-state fuel behavior code, FRAPCON-2. The resulting code has been benchmarked (against a few test cases comprising many data points) but not fully verified. It is used to simulate two important occurrences: preramp (base) and power ramp irradiation. Because of the limitations of FRAPCON-2, the code is best suited to the simulation of mild power ramps with rates that do not exceed 0.02%/s. Computations with the code for greater power ramp rates, however, gave results which are not inconsistent with some overpower ramp experimental test results. Limited sensitivity studies are performed on the operational parameters and some fuel rod design parameters.
Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model
Steven R. Sherman
2007-05-01
The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data, including cases in which two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes a stretched exponential, and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and, when appropriate, a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
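The model-comparison logic above, fitting a standard gamma variate against a stretched-exponential extension and letting the Akaike Information Criterion arbitrate, can be sketched on synthetic data. The functional form y = a·t^b·exp(-(t/c)^d) (d = 1 recovering the standard gamma variate), the grid-search fitter, and all parameter values are illustrative assumptions, not the paper's fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic emptying curve generated with a true stretch exponent d = 1.6.
t = np.linspace(0.1, 8.0, 80)
true = 5.0 * t**1.2 * np.exp(-(t / 2.0)**1.6)
y = true + rng.normal(0.0, 0.02, t.size)

def fit(d_values):
    """Grid-search b, c, d; the amplitude a is solved linearly. Returns best RSS."""
    best = np.inf
    for b in np.linspace(0.5, 2.0, 31):
        for c in np.linspace(0.5, 4.0, 36):
            for d in d_values:
                basis = t**b * np.exp(-(t / c)**d)
                a = (basis @ y) / (basis @ basis)     # least-squares amplitude
                best = min(best, np.sum((y - a * basis)**2))
    return best

n = t.size
rss_gamma = fit([1.0])                           # standard gamma variate, k = 3
rss_stretch = fit(np.linspace(1.0, 2.5, 31))     # stretched version, k = 4
aic_gamma = n * np.log(rss_gamma / n) + 2 * 3    # AIC up to an additive constant
aic_stretch = n * np.log(rss_stretch / n) + 2 * 4
assert aic_stretch < aic_gamma   # the stretch term earns its extra parameter
```

Because AIC penalizes each extra parameter, the stretched model is preferred only when its residual improvement outweighs the penalty, which is exactly the criterion the study uses to justify the extension.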
Horton, B.P.; Peltier, W.R.; Culver, S.J.; Drummond, R.; Engelhart, S.E.; Kemp, A.C.; Mallinson, D.; Thieler, E.R.; Riggs, S.R.; Ames, D.V.; Thomson, K.H.
2009-01-01
We have synthesized new and existing relative sea-level (RSL) data to produce a quality-controlled, spatially comprehensive database from the North Carolina coastline. The RSL database consists of 54 sea-level index points that are quantitatively related to an appropriate tide level and assigned an error estimate, and a further 33 limiting dates that confine the maximum and minimum elevations of RSL. The temporal distribution of the index points is very uneven with only five index points older than 4000 cal a BP, but the form of the Holocene sea-level trend is constrained by both terrestrial and marine limiting dates. The data illustrate RSL rapidly rising during the early and mid Holocene from an observed elevation of -35.7 ± 1.1 m MSL at 11062-10576 cal a BP to -4.2 ± 0.4 m MSL at 4240-3592 cal a BP. We restricted comparisons between observations and predictions from the ICE-5G(VM2) with rotational feedback Glacial Isostatic Adjustment (GIA) model to the Late Holocene RSL (last 4000 cal a BP) because of the wealth of sea-level data during this time interval. The ICE-5G(VM2) model predicts significant spatial variations in RSL across North Carolina, thus we subdivided the observations into two regions. The model forecasts an increase in the rate of sea-level rise in Region 1 (Albemarle, Currituck, Roanoke, Croatan, and northern Pamlico sounds) compared to Region 2 (southern Pamlico, Core and Bogue sounds, and farther south to Wilmington). The observations show Late Holocene sea-level rising at 1.14 ± 0.03 mm year-1 and 0.82 ± 0.02 mm year-1 in Regions 1 and 2, respectively. The ICE-5G(VM2) predictions capture the general temporal trend of the observations, although there is an apparent misfit for index points older than 2000 cal a BP. It is presently unknown whether these misfits are caused by possible tectonic uplift associated with the mid-Carolina Platform High or a flaw in the GIA model. A comparison of local tide gauge data with the Late Holocene RSL
Murphy, J.M.
1995-01-01
This paper describes the initialization of an experiment to study the time-dependent response of a high-resolution global coupled ocean-atmosphere general circulation model to a gradual increase in carbon dioxide. The stability of the control integration with respect to climate drift is assessed, and aspects of the model climatology relevant to the simulation of climate change are discussed. The observed variation of oceanic temperature with latitude and depth is basically well simulated, although, in common with other ocean models, the main thermocline is too diffuse. Nevertheless, it is found that large heat and water flux adjustments must be added to the surface layer of the ocean in order to prevent the occurrence of unacceptable climate drift. The ocean model appears to achieve insufficient meridional heat transport, and this is supported by the pattern of the heat flux adjustment term, although errors in the simulated atmosphere-ocean heat flux also contribute to the latter. The application of the flux adjustments restricts climate drift during the 75-year control experiment. However, a gradual warming still occurs in the surface layers of the Southern Ocean because the flux adjustments are inserted as additive terms in this integration and cannot therefore be guaranteed to prevent climate drift completely. 68 refs., 29 figs., 1 tab.
ERIC Educational Resources Information Center
Scaramella, Laura V.; Sohr-Preston, Sara L.; Callahan, Kristin L.; Mirabile, Scott P.
2008-01-01
Hurricane Katrina dramatically altered the level of social and environmental stressors for the residents of the New Orleans area. The Family Stress Model describes a process whereby felt financial strain undermines parents' mental health, the quality of family relationships, and child adjustment. Our study considered the extent to which the Family…
ERIC Educational Resources Information Center
Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed
2010-01-01
Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including…
Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.
2012-03-01
Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefited significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. Of all models, rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two end points. Conclusions: Comparable
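The LKB model named in this abstract reduces a dose-volume histogram to a generalized equivalent uniform dose (gEUD) and maps it through a probit curve. A minimal sketch follows; the DVH and the parameter values (D50, m, n) are illustrative placeholders, not the fitted values from the trial.

```python
import math

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose from a differential DVH.
    doses in Gy, volumes as fractions summing to 1, n = volume parameter."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

def ntcp_lkb(doses, volumes, d50, m, n):
    """LKB complication probability: probit of (gEUD - D50) / (m * D50)."""
    t = (geud(doses, volumes, n) - d50) / (m * d50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Toy rectal-wall DVH: 40% of volume at 30 Gy, 40% at 60 Gy, 20% at 75 Gy.
dvh_d = [30.0, 60.0, 75.0]
dvh_v = [0.4, 0.4, 0.2]
p = ntcp_lkb(dvh_d, dvh_v, d50=80.0, m=0.15, n=0.1)
```

A small n (strong seriality) makes the gEUD track the hottest DVH bins, which is why the abstract's serial rectal end points are modeled this way.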
Asthana, Sheena; Gibson, Alex
2011-07-01
The English system of health resource allocation has been described as the apotheosis of the area-level approach to setting health care capitations. However, recent policy developments have changed the scale at which commissioning decisions are made (and budgets allocated) with important implications for resource allocation. Doubts concerning the legitimacy of applying area-based formulae used to distribute resources between Primary Care Trusts (PCTs) to the much smaller scale required by Practice Based Commissioning (PBC) led the English Department of Health (DH) to introduce a new approach to setting health care budgets. To this end, practice-level allocations for acute services are now calculated using a diagnosis-based capitation model of the kind used in the United States and several other systems of competitive social health insurance. The new Coalition Government has proposed that these budgets are directly allocated to GP 'consortia', the new commissioning bodies in the NHS. This paper questions whether this is an appropriate development for a health system in which the major objective of resource allocation is to promote equal opportunity of access for equal needs. The chief reservation raised is that of circularity and the perpetuation of resource bias, the concern being that an existing social, demographic and geographical bias in the use of health care resources will be reinforced through the use of historic utilisation data. Demonstrating that there are legitimate reasons to suspect that this will be the case, the paper poses the question whether health systems internationally should more openly address the key limitations of empirical methods that select risk adjusters on the basis of existing patterns of health service utilisation. PMID:21093953
NASA Technical Reports Server (NTRS)
Rackl, Robert; Weston, Adam
2005-01-01
The literature on turbulent boundary layer pressure fluctuations provides several empirical models which were compared to the measured TU-144 data. The Efimtsov model showed the best agreement. Adjustments were made to improve its agreement further, consisting of the addition of a broad band peak in the mid frequencies, and a minor modification to the high frequency rolloff. The adjusted Efimtsov predicted and measured results are compared for both subsonic and supersonic flight conditions. Measurements in the forward and middle portions of the fuselage have better agreement with the model than those from the aft portion. For High Speed Civil Transport supersonic cruise, interior levels predicted by use of this model are expected to increase by 1-3 dB due to the adjustments to the Efimtsov model. The space-time cross-correlations and cross-spectra of the fluctuating surface pressure were also investigated. This analysis is an important ingredient in structural acoustic models of aircraft interior noise. Once again the measured data were compared to the predicted levels from the Efimtsov model.
Adjustable holder for transducer mounting
NASA Technical Reports Server (NTRS)
Deotsch, R. C.
1980-01-01
Positioning of acoustic sensor, strain gage, or similar transducer is facilitated by adjustable holder. Developed for installation on Space Shuttle, it includes springs for maintaining uniform load on transducer with adjustable threaded cap for precisely controlling position of sensor with respect to surrounding structure.
NASA Astrophysics Data System (ADS)
Wang, Xiao-Lu; Fan, Xiang-Yu; He, Yong-Ming; Nie, Ren-Shi; Huang, Quan-Hua
2013-08-01
Based on material balance and Darcy's law, the governing equation with the quadratic pressure gradient term was deduced. The nonlinear model for fluid flow in a multiple-zone composite reservoir including the quadratic gradient term was then established and solved using a Laplace transform. A series of standard log-log type curves of 1-zone (homogeneous), 2-zone and 3-zone reservoirs were plotted and the nonlinear flow characteristics were analysed. The type curves governed by the coefficient of the quadratic gradient term (β) gradually deviate from those of a linear model as time elapses. Qualitative and quantitative analyses were implemented to compare the solutions of the linear and nonlinear models. The results showed that the differences in pressure transients between the linear and nonlinear models increase with elapsed time and β. Finally, a successful application of the theoretical model to field data shows that the nonlinear model will be a good tool to evaluate formation parameters more accurately.
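Type curves like those described are obtained by numerically inverting the Laplace-space solution; the Gaver-Stehfest algorithm is the standard tool for this in well-test analysis. The reservoir solution itself is not reproduced here; instead the inverter is verified on a known transform, F(s) = 1/(s + 1), whose inverse is exp(-t).

```python
import math

def stehfest_coeffs(n):
    """Stehfest weights V_i for even n."""
    half = n // 2
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * math.factorial(2 * k)
                  / (math.factorial(half - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        v.append((-1) ** (half + i) * s)
    return v

def invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s) at time t."""
    ln2 = math.log(2.0)
    v = stehfest_coeffs(n)
    return ln2 / t * sum(v[i] * F((i + 1) * ln2 / t) for i in range(n))

f1 = invert(lambda s: 1.0 / (s + 1.0), 1.0)   # close to exp(-1)
```

Substituting the multi-zone Laplace-space pressure solution for the lambda above, evaluated over a grid of dimensionless times, yields the log-log type curves.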
Mathematical Model of Two Phase Flow in Natural Draft Wet-Cooling Tower Including Flue Gas Injection
NASA Astrophysics Data System (ADS)
Hyhlík, Tomáš
2016-03-01
The previously developed model of natural draft wet-cooling tower flow, heat and mass transfer is extended to be able to take into account the flow of supersaturated moist air. The two phase flow model is based on void fraction of gas phase which is included in the governing equations. Homogeneous equilibrium model, where the two phases are well mixed and have the same velocity, is used. The effect of flue gas injection is included into the developed mathematical model by using source terms in governing equations and by using momentum flux coefficient and kinetic energy flux coefficient. Heat and mass transfer in the fill zone is described by the system of ordinary differential equations, where the mass transfer is represented by measured fill Merkel number and heat transfer is calculated using prescribed Lewis factor.
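The fill-zone Merkel number mentioned above is the integral Me = ∫ cpw dT / (h_sat(T) − h_a) over the water cooling range. The sketch below uses trapezoidal integration with a placeholder saturation-enthalpy curve and an assumed water-to-air flow ratio of one; none of the numbers come from the paper.

```python
def merkel_number(t_in, t_out, h_air_in, h_sat, cpw=4.186, steps=200):
    """Trapezoidal evaluation of the Merkel integral (kJ/kg enthalpies,
    deg C temperatures). Air enthalpy is assumed to follow the water
    line with unit L/G, a common Merkel-method simplification."""
    me = 0.0
    dt = (t_in - t_out) / steps
    for i in range(steps):
        t_lo = t_out + i * dt
        t_hi = t_lo + dt
        h_a_lo = h_air_in + cpw * (t_lo - t_out)   # air-side enthalpy
        h_a_hi = h_air_in + cpw * (t_hi - t_out)
        f_lo = cpw / (h_sat(t_lo) - h_a_lo)
        f_hi = cpw / (h_sat(t_hi) - h_a_hi)
        me += 0.5 * (f_lo + f_hi) * dt
    return me

# Placeholder saturated-air enthalpy curve (kJ/kg vs deg C), not real data.
h_sat = lambda t: 5.0 + 2.0 * t + 0.05 * t * t
me = merkel_number(t_in=40.0, t_out=30.0, h_air_in=60.0, h_sat=h_sat)
```

In practice the measured fill Merkel number replaces this integral, as the abstract notes; the sketch only shows what the quantity represents.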
Abdel-Aty, Mohamed; Abdelwahab, Hassan
2004-05-01
This paper presents an analysis of the effect of the geometric incompatibility of light truck vehicles (LTV)--light-duty trucks, vans, and sport utility vehicles--on drivers' visibility of other passenger cars involved in rear-end collisions. The geometric incompatibility arises from the fact that most LTVs ride higher and are wider than regular passenger cars. The objective of this paper is to explore the effect of the lead vehicle's size on the rear-end crash configuration. Four rear-end crash configurations are defined based on the types of the two involved vehicles (lead and following vehicles). Nested logit models were calibrated to estimate the probabilities of the four rear-end crash configurations as a function of driver's age, gender, vehicle type, vehicle maneuver, light conditions, driver's visibility, and speed. Results showed that driver's visibility and inattention in the following (striker) vehicle have the largest effect on involvement in a rear-end collision of configuration CarTrk (a regular passenger car striking an LTV), possibly indicating a sight distance problem. The driver of a smaller car following an LTV has a problem seeing the roadway beyond the LTV and therefore would not be able to adjust his/her speed accordingly, increasing the probability of a rear-end collision. The probability of a CarTrk rear-end crash also increases when the lead vehicle stops suddenly. PMID:15003590
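A nested logit assigns each configuration a probability as P(nest) times P(alternative | nest), with nests linked through inclusive values. The sketch below groups the four configurations by lead-vehicle type; the utilities and scale parameters are invented for illustration and are not the paper's estimated coefficients.

```python
import math

def nested_logit(nests, mu):
    """nests: {nest: {alt: utility}}; mu: {nest: scale in (0, 1]}.
    Returns P(alt) = P(nest) * P(alt | nest) using inclusive values."""
    iv = {g: math.log(sum(math.exp(u / mu[g]) for u in alts.values()))
          for g, alts in nests.items()}
    denom = sum(math.exp(mu[g] * iv[g]) for g in nests)
    probs = {}
    for g, alts in nests.items():
        p_nest = math.exp(mu[g] * iv[g]) / denom
        within = sum(math.exp(u / mu[g]) for u in alts.values())
        for alt, u in alts.items():
            probs[alt] = p_nest * math.exp(u / mu[g]) / within
    return probs

# Hypothetical utilities, nested by lead-vehicle type.
p = nested_logit(
    {"lead_car": {"CarCar": 0.4, "TrkCar": -0.2},
     "lead_LTV": {"CarTrk": 0.9, "TrkTrk": 0.1}},
    mu={"lead_car": 0.7, "lead_LTV": 0.7})
```

Scales below one allow correlated unobservables within a nest, which is the reason nested logit is preferred over plain multinomial logit for grouped crash configurations.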
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
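In a multiplicative error model y_i = f_i(1 + e_i) the error standard deviation scales with the true value, so a weighted LS with weights proportional to 1/f_i² is the natural estimator. The toy below contrasts it with ordinary LS for a line through the origin (y = a·x), where 1/x_i² serves as the weight; the numbers are made up and this is not the authors' derivation.

```python
def fit_ols(x, y):
    """Ordinary LS slope for y = a*x (equal weights)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def fit_wls_multiplicative(x, y):
    """Weighted LS with w_i = 1/x_i^2, matching Var(y_i) ~ x_i^2.
    For y = a*x this reduces to the mean of the ratios y_i / x_i."""
    return sum(yi / xi for xi, yi in zip(x, y)) / len(x)

a_true = 2.0
x = [1.0, 2.0, 5.0, 10.0, 20.0]
rel_err = [0.05, -0.03, 0.04, -0.02, 0.01]   # fixed relative errors
y = [a_true * xi * (1 + e) for xi, e in zip(x, rel_err)]
a_ols = fit_ols(x, y)
a_wls = fit_wls_multiplicative(x, y)
```

Ordinary LS lets the large-x (large-variance) points dominate, which is exactly the mismatch the abstract points out for DEMs built from LiDAR as if the errors were additive.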
Capitation pricing: adjusting for prior utilization and physician discretion.
Anderson, G F; Cantor, J C; Steinberg, E P; Holloway, J
1986-01-01
As the number of Medicare beneficiaries receiving care under at-risk capitation arrangements increases, the method for setting payment rates will come under increasing scrutiny. A number of modifications to the current adjusted average per capita cost (AAPCC) methodology have been proposed, including an adjustment for prior utilization. In this article, we propose use of a utilization adjustment that includes only hospitalizations involving low or moderate physician discretion in the decision to hospitalize. This modification avoids discrimination against capitated systems that prevent certain discretionary admissions. The model also explains more of the variance in per capita expenditures than does the current AAPCC. PMID:10312010
Scaramella, Laura V.; Sohr-Preston, Sara L.; Callahan, Kristin L.; Mirabile, Scott P.
2010-01-01
Hurricane Katrina dramatically altered the level of social and environmental stressors for the residents of the New Orleans area. The Family Stress Model describes a process whereby felt financial strain undermines parents’ mental health, the quality of family relationships, and child adjustment. Our study considered the extent to which the Family Stress Model explained toddler-aged adjustment among Hurricane Katrina affected and nonaffected families. Two groups of very low-income mothers and their 2-year-old children participated (pre-Katrina, n = 55; post-Katrina, n = 47). Consistent with the Family Stress Model, financial strain and neighborhood violence were associated with higher levels of mothers’ depressed mood; depressed mood was linked to less parenting efficacy. Poor parenting efficacy was associated with more child internalizing and externalizing problems. PMID:18645744
ERIC Educational Resources Information Center
Sabourin, Stephane; Valois, Pierre; Lussier, Yvan
2005-01-01
The main purpose of the current research was to develop an abbreviated form of the Dyadic Adjustment Scale (DAS) with nonparametric item response theory. The authors conducted 5 studies, with a total participation of 8,256 married or cohabiting individuals. Results showed that the item characteristic curves behaved in a monotonically increasing…
Kjelstrom, L.C.
1995-01-01
Previously developed U.S. Geological Survey regional regression models of runoff and 11 chemical constituents were evaluated to assess their suitability for use in urban areas in Boise and Garden City. Data collected in the study area were used to develop adjusted regional models of storm-runoff volumes and mean concentrations and loads of chemical oxygen demand, dissolved and suspended solids, total nitrogen and total ammonia plus organic nitrogen as nitrogen, total and dissolved phosphorus, and total recoverable cadmium, copper, lead, and zinc. Explanatory variables used in these models were drainage area, impervious area, land-use information, and precipitation data. Mean annual runoff volume and loads at the five outfalls were estimated from 904 individual storms during 1976 through 1993. Two methods were used to compute individual storm loads. The first method used adjusted regional models of storm loads and the second used adjusted regional models for mean concentration and runoff volume. For large storms, the first method seemed to produce excessively high loads for some constituents and the second method provided more reliable results for all constituents except suspended solids. The first method provided more reliable results for large storms for suspended solids.
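The second load-computation method described above (mean concentration times runoff volume) is largely unit bookkeeping. A minimal sketch with invented numbers; the regression coefficients of the USGS models are not reproduced.

```python
def load_from_concentration(conc_mg_per_l, volume_m3):
    """Storm load in kg from mean concentration (mg/L) and runoff
    volume (m3). 1 m3 = 1000 L and 1 kg = 1e6 mg, so
    kg = (mg/L) * m3 / 1000."""
    return conc_mg_per_l * volume_m3 / 1000.0

# Hypothetical storm: 5000 m3 of runoff at 120 mg/L chemical oxygen demand.
load_kg = load_from_concentration(120.0, 5000.0)
```

The first method would instead regress the load itself on drainage area, impervious area, land use, and precipitation, which is why the two routes can disagree for large storms.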
NASA Technical Reports Server (NTRS)
Lee, H. P.
1977-01-01
The NASTRAN Thermal Analyzer Manual describes the fundamental and theoretical treatment of the finite element method, with emphasis on the derivations of the constituent matrices of different elements and solution algorithms. Necessary information and data relating to the practical applications of engineering modeling are included.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-27
... in the Federal Register on June 2, 2010 (75 FR 30740). That NPRM proposed to correct an unsafe...'' under the DOT Regulatory Policies and Procedures (44 FR 11034, February 26, 1979); and 3. Will not have.... Model CL-600-2B16 (CL- 604 Variants (Including CL-605 Marketing Variant)) Airplanes AGENCY:...
NASA Technical Reports Server (NTRS)
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
Liu, Wendi; Tang, Sanyi; Xiao, Yanni
2015-01-01
The aim of the present study is to apply simple ODE models in the area of modeling the spread of emerging infectious diseases and show the importance of model selection in estimating parameters, the basic reproduction number, turning point, and final size. To quantify the plausibility of each model, given the data and the set of four models including the Logistic, Gompertz, Rosenzweg, and Richards models, the Bayes factors are calculated and the precise estimates of the best fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI (1.3506, 1.3537)), 1.2101 (95% CI (1.2084, 1.2119)), 3.0234 (95% CI (2.6063, 3.4881)), and 1.9018 (95% CI (1.8565, 1.9478)), the turning points are November 7, November 17, October 2, and November 3, 2014, and the final sizes until December 2015 are 25794 (95% CI (25630, 25958)), 3916 (95% CI (3865, 3967)), 9886 (95% CI (9740, 10031)), and 12633 (95% CI (12515, 12750)) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing the emerging infectious diseases, and arbitrarily picking a model without any consideration of alternatives is problematic. PMID:26451161
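For the logistic model, the epidemic quantities named above have closed forms: with cumulative cases C(t) = K / (1 + a·exp(-r·t)), the final size is K and the turning point (peak daily incidence) is t* = ln(a)/r, where C(t*) = K/2. The parameters below are illustrative, not the fitted Ebola values.

```python
import math

def logistic_cumulative(t, K, a, r):
    """Cumulative case count under the logistic growth model."""
    return K / (1.0 + a * math.exp(-r * t))

def turning_point(a, r):
    """Time at which growth is fastest (inflection point of C)."""
    return math.log(a) / r

K, a, r = 10000.0, 50.0, 0.08            # hypothetical fitted values
t_star = turning_point(a, r)
half = logistic_cumulative(t_star, K, a, r)   # equals K/2 at the inflection
```

The Gompertz and Richards models place the inflection at a different fraction of K, which is one reason the four fits above report such different turning points and final sizes from the same data.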
Wadsworth, Martha E; Rindlaub, Laura; Hurwich-Reiss, Eliana; Rienks, Shauna; Bianco, Hannah; Markman, Howard J
2013-01-01
This study tests key tenets of the Adaptation to Poverty-related Stress Model. This model (Wadsworth, Raviv, Santiago, & Etter, 2011 ) builds on Conger and Elder's family stress model by proposing that primary control coping and secondary control coping can help reduce the negative effects of economic strain on parental behaviors central to the family stress model, namely, parental depressive symptoms and parent-child interactions, which together can decrease child internalizing and externalizing problems. Two hundred seventy-five co-parenting couples with children between the ages of 1 and 18 participated in an evaluation of a brief family strengthening intervention, aimed at preventing economic strain's negative cascade of influence on parents, and ultimately their children. The longitudinal path model, analyzed at the couple dyad level with mothers and fathers nested within couple, showed very good fit, and was not moderated by child gender or ethnicity. Analyses revealed direct positive effects of primary control coping and secondary control coping on mothers' and fathers' depressive symptoms. Decreased economic strain predicted more positive father-child interactions, whereas increased secondary control coping predicted less negative mother-child interactions. Positive parent-child interactions, along with decreased parent depression and economic strain, predicted child internalizing and externalizing over the course of 18 months. Multiple-group models analyzed separately by parent gender revealed, however, that child age moderated father effects. Findings provide support for the adaptation to poverty-related stress model and suggest that prevention and clinical interventions for families affected by poverty-related stress may be strengthened by including modules that address economic strain and efficacious strategies for coping with strain. PMID:23323863
NASA Astrophysics Data System (ADS)
Anderson, Benjamin; Kuzyk, Mark G.
2014-03-01
All observations of photodegradation and self-healing follow the predictions of the correlated chromophore domain model [Ramini et al., Polym. Chem. 4, 4948 (2013), 10.1039/c3py00263b]. In the present work, we generalize the domain model to describe the effects of an electric field by including induced dipole interactions between molecules in a domain by means of a self-consistent field approach. This electric field correction is added to the statistical mechanical model to calculate the distribution of domains that are central to healing. Also included in the model are the dynamics due to the formation of an irreversibly damaged species, which we propose involves damage to the polymer mediated through energy transfer from a dopant molecule after absorbing a photon. As in previous studies, the model with one-dimensional domains best explains all experimental data of the population as a function of time, temperature, intensity, concentration, and now applied electric field. Though the precise nature of a domain is yet to be determined, the fact that only one-dimensional domain models are consistent with observations suggests that they might be made of correlated dye molecules along polymer chains. Furthermore, the voltage-dependent measurements suggest that the largest polarizability axis of the molecules are oriented perpendicular to the chain.
MacDonald, Shannon E; Schopflocher, Donald P; Vaudry, Wendy
2014-01-01
Children who begin but do not fully complete the recommended series of childhood vaccines by 2 y of age are a much larger group than those who receive no vaccines. While parents who refuse all vaccines typically express concern about vaccine safety, it is critical to determine what influences parents of ‘partially’ immunized children. This case-control study examined whether parental concern about vaccine safety was responsible for partial immunization, and whether other personal or system-level factors played an important role. A random sample of parents of partially and completely immunized 2-y-old children were selected from a Canadian regional immunization registry and completed a postal survey assessing various personal and system-level factors. Unadjusted odds ratios (OR) and adjusted ORs (aOR) were calculated with logistic regression. While vaccine safety concern was associated with partial immunization (OR 7.338, 95% CI 4.138–13.012), other variables were more strongly associated and reduced the strength of the relationship between concern and partial immunization in multivariable analysis (aOR 2.829, 95% CI 1.151–6.957). Other important factors included perceived disease susceptibility and severity (aOR 4.629, 95% CI 2.017–10.625), residential mobility (aOR 3.908, 95% CI 2.075–7.358), daycare use (aOR 0.310, 95% CI 0.144–0.671), number of needles administered at each visit (aOR 7.734, 95% CI 2.598–23.025) and access to a regular physician (aOR 0.219, 95% CI 0.057–0.846). While concern about vaccine safety may be addressed through educational strategies, this study suggests that additional program and policy-level strategies may positively impact immunization uptake. PMID:25483477
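An unadjusted OR of the kind reported above comes straight from a 2×2 table, with a Wald 95% CI on the log-odds scale. The counts below are invented for illustration and do not reproduce the study's OR of 7.338.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = exposed cases/non-cases, c/d = unexposed cases/non-cases.
    Returns (OR, lower, upper) using SE(log OR) = sqrt(1/a+1/b+1/c+1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 40 of 100 parents of partially immunized children report
# vaccine-safety concern vs 15 of 100 parents of fully immunized children.
or_, lo, hi = odds_ratio_ci(40, 60, 15, 85)
```

The adjusted ORs in the abstract come from a multivariable logistic regression instead, which is why adding covariates can shrink the association, as seen here (7.338 to 2.829).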
Podziemski, Piotr; Zebrowski, Jan J
2013-08-01
Existing atrial models with detailed anatomical structure and multi-variable cardiac transmembrane current models are too complex to allow combining an investigation of the long-time dynamical properties of the heart rhythm with the ability to simulate cardiac electrical activity efficiently during arrhythmia. Other ways of modeling need to be investigated. Moreover, many state-of-the-art models of the right atrium do not include an atrioventricular node (AVN) and only rarely include the sinoatrial node (SAN). A model of the heart tissue within the right atrium including the SAN and AVN nodes was developed. Looking for a minimal model, we are currently testing our approach on chosen well-known arrhythmias, which were until now obtained only using much more complicated models, or were only observed in a clinical setting. Ultimately, the goal is to obtain a model able to generate sequences of RR intervals specific for the arrhythmias involving the AV junction as well as for other phenomena occurring within the atrium. The model should be fast enough to allow the study of heart rate variability and arrhythmias at a time scale of thousands of heart beats in real time. In the model of the right atrium proposed here, different kinds of cardiac tissues are described by sets of different equations, with most of them belonging to the class of Liénard nonlinear dynamical systems. We have developed a series of models of the right atrium with differing anatomical simplifications, in the form of a 2D mapping of the atrium or of an idealized cylindrical geometry, including only those anatomical details required to reproduce a given physiological phenomenon. The simulations allowed us to reconstruct the phase relations between the sinus rhythm and the location and properties of a parasystolic source, together with the effect of this source on the resultant heart rhythm. We model the action potential conduction time alternans through the atrioventricular AVN junction observed in cardiac tissue in
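The Liénard class mentioned above covers oscillators of the form x'' + f(x)x' + g(x) = 0; the van der Pol equation (f(x) = -mu(1 - x²), g(x) = x) is its classic member and is often used as a minimal self-oscillating "node". The sketch below is a generic illustration of such a node, not the authors' specific tissue equations.

```python
def van_der_pol_steps(mu=1.0, x0=0.1, v0=0.0, dt=0.001, n=40000):
    """Forward-Euler integration of x'' = mu*(1 - x^2)*x' - x.
    Returns the trajectory of x."""
    x, v = x0, v0
    xs = []
    for _ in range(n):
        a = mu * (1 - x * x) * v - x   # damping term depends on state x
        x += dt * v
        v += dt * a
        xs.append(x)
    return xs

xs = van_der_pol_steps()
# The oscillation is self-sustained: starting from a small perturbation,
# the amplitude settles onto a limit cycle of roughly 2.
amp = max(abs(x) for x in xs[-10000:])
```

This self-sustained limit cycle is what makes Liénard-type units natural stand-ins for pacemaker tissue such as the SAN, while remaining cheap enough to simulate thousands of beats.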
Marsolat, F; De Marzi, L; Pouzoulet, F; Mazal, A
2016-01-21
In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LET_d distributions of all protons and deuterons and only primary protons. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis. PMID:26732530
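The ratio defining the correction factor can be sketched directly: the dose-averaged LET is LET_d = Σ(d_i·L_i) / Σd_i over particle contributions, and L_sec divides the all-particle value by the primary-only value. The particle lists below are invented toy numbers, not GATE/GEANT4 output.

```python
def let_d(contributions):
    """Dose-averaged LET from (dose share, LET) pairs."""
    num = sum(d * L for d, L in contributions)
    den = sum(d for d, _ in contributions)
    return num / den

primaries = [(0.90, 0.5), (0.05, 1.0)]     # (dose share, keV/um), made up
secondaries = [(0.04, 4.0), (0.01, 8.0)]   # sparse but higher-LET
l_sec = let_d(primaries + secondaries) / let_d(primaries)
```

Even a few percent of dose carried by high-LET secondaries pulls the dose-averaged LET up noticeably, which is why the abstract reports L_sec above 1.6 in the entrance region.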
NASA Astrophysics Data System (ADS)
Hansen, A.; Refsgaard, J.; Christensen, B. S.; Jensen, K. H.
2011-12-01
Nitrate leaching from agricultural areas and the resulting pollution of groundwater and surface waters is one of the largest challenges in water resources management in Denmark. However, nitrate can be naturally degraded under anaerobic conditions, and several studies have shown that degradation in the saturated zone removes more than 50% of the nitrate leaching in Danish catchments. For degradation of nitrate to occur in the saturated zone, nitrate must be transported below the redox interface, and a correct simulation of the small-scale flow patterns within a catchment is therefore important in nitrate models. The general finding in Danish nitrate modeling studies is that the models perform well at catchment scale, but their predictive ability decreases at smaller scales. Thus model predictions are highly uncertain at small scale, and the models cannot at present distinguish areas within a catchment where the majority of the nitrate is carried below the interface and thus degraded from areas where nitrate is transported directly to streams and lakes without significant reduction. The objective of this study is to test whether the small-scale performance of a catchment-scale nitrate model can be improved by including small-scale observation data in the calibration procedure. The study area is the clayey catchment of the Lillebæk stream (4.7 km²), located on the island of Funen in Denmark. Due to the presence of clayey topsoils, subsurface drains are installed, and consequently the stream discharge is highly dominated by drain flow. An integrated transient hydrological model based on the MIKE SHE code has been developed for the study area. The model has been calibrated against hydraulic head measurements and stream discharge measurements from two stations, one covering most of the catchment and the other approximately half, using the parameter estimation code PEST. Acceptable model performance has been achieved at catchment scale calibrating the model
Nakayama, Rumiko; Nakanishi, Yoshifumi; Nagahama, Fumiyo; Nakajima, Makoto
2015-06-01
The present study examined the influence of interpersonal motivation on university adjustment in freshman students enrolled in a First Year Experience (FYE) class. An interpersonal motivation scale and a university adjustment (interpersonal adjustment and academic adjustment) scale were administered twice to 116 FYE students; data from the 88 students who completed both surveys were analyzed. Results from structural equation modeling indicated a causal relationship between interpersonal motivation and university adjustment: interpersonal adjustment served as a mediator between academic adjustment and interpersonal motivation, the latter of which was assessed using the internalized motivation subscale of the Interpersonal Motivation Scale as well as the Relative Autonomy Index, which measures the autonomy of students' interpersonal attitudes. Thus, revising the FYE class curriculum to include approaches to lowering students' feelings of obligation and/or anxiety in their interpersonal interactions might improve their adjustment to university. PMID:26182493
NASA Astrophysics Data System (ADS)
Van Zandt, Noah R.; McCrae, Jack E.; Fiorino, Steven T.
2013-05-01
Aimpoint acquisition and maintenance is critical to high energy laser (HEL) system performance. This study demonstrates the development by the AFIT/CDE of a physics-based modeling package, PITBUL, for tracking airborne targets for HEL applications, including atmospheric and sensor effects and active illumination, which is a focus of this work. High-resolution simulated imagery of the 3D airborne target in-flight as seen from the laser position is generated using the HELSEEM model, and includes solar illumination, laser illumination, and thermal emission. Both CW and pulsed laser illumination are modeled, including the effects of illuminator scintillation, atmospheric backscatter, and speckle, which are treated at a first-principles level. Realistic vertical profiles of molecular and aerosol absorption and scattering, as well as optical turbulence, are generated using AFIT/CDE's Laser Environmental Effects Definition and Reference (LEEDR) model. The spatially and temporally varying effects of turbulence are calculated and applied via a fast-running wave optical method known as light tunneling. Sensor effects, for example blur, sampling, read-out noise, and random photon arrival, are applied to the imagery. Track algorithms, including centroid and Fitts correlation, as a part of a closed loop tracker are applied to the degraded imagery and scored, to provide an estimate of overall system performance. To gauge performance of a laser system against a UAV target, tracking results are presented as a function of signal to noise ratio. Additionally, validation efforts to date involving comparisons between simulated and experimental tracking of UAVs are presented.
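Of the track algorithms the abstract names, the centroid tracker is the simplest: it estimates the aimpoint as the intensity-weighted mean position of above-threshold pixels. A toy sketch of that idea (not the PITBUL implementation; the frame and threshold below are made up):

```python
def centroid_track(frame, threshold):
    """Intensity-weighted centroid of above-threshold pixels in a 2D frame.

    Returns (x, y) in pixel coordinates, or None if nothing exceeds threshold.
    """
    sx = sy = sw = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > threshold:
                sx += x * v
                sy += y * v
                sw += v
    if sw == 0.0:
        return None
    return sx / sw, sy / sw

# Toy 5x5 frame with a bright blob centred near (2, 1)
frame = [
    [0, 0, 0, 0, 0],
    [0, 1, 4, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
aimpoint = centroid_track(frame, 0.5)
```

Noise, blur, and low signal-to-noise ratio bias this estimate, which is why the abstract scores trackers against degraded imagery as a function of SNR.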
NASA Astrophysics Data System (ADS)
Kharin, Stanislav; Sarsengeldin, Merey; Kassabek, Samat
2016-08-01
We present mathematical models of electromagnetic field dynamics and heat transfer in closed symmetric and asymmetric electrical contacts, including the Thomson effect, which are essentially nonlinear due to the dependence of thermal and electrical conductivities on temperature. The suggested solutions are based on the assumption that equipotential and isothermal surfaces are identical, which agrees with experimental data and is valid for both the linear and nonlinear cases. The well-known Kohlrausch temperature-potential relation is justified analytically.
NASA Astrophysics Data System (ADS)
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-difference scheme. The proposed physical model thus overcomes the limitations of one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power-law attenuation and dispersion, as observed in biological media, the relaxation parameters are fitted both to media with exact frequency power-law attenuation/dispersion and to empirically measured attenuation of a variety of tissues that does not follow an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite-difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited to practical configurations where spatial discontinuities are present in the domain (e.g., axisymmetric domains or zero normal-velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
Ke, Y.; Ortola, S.; Beaucour, A.L.; Dumontet, H.
2010-11-15
An approach combining experimental techniques and micromechanical modelling is developed in order to characterise the elastic behaviour of lightweight aggregate concretes (LWAC). More than three hundred LWAC specimens, with five lightweight aggregate types at several volume ratios and three different mortar matrices (normal, HP, VHP), are tested. The modelling is based on an iterative homogenisation process and includes the ITZ specificities observed experimentally with scanning electron microscopy (SEM). In agreement with experimental measurements, the effects of mix design parameters as well as of the interfacial transition zone (ITZ) on concrete mechanical performance are quantitatively analysed. Comparisons with experimental results allow identification of the elastic moduli of LWA, which are difficult to determine experimentally. Whereas the traditional empirical formulas are not sufficiently precise, predictions of LWAC elastic behaviour computed with the micromechanical models appear in good agreement with experimental measurements.
NASA Technical Reports Server (NTRS)
Sakowski, Barbara; Edwards, Daryl; Dickens, Kevin
2014-01-01
Modeling droplet condensation via CFD codes can be very tedious, time consuming, and inaccurate. CFD codes may be tedious and time consuming in terms of using Lagrangian particle tracking approaches or particle sizing bins. Also, since many codes ignore conduction through the droplet and/or the degrading effect on heat and mass transfer when noncondensible species are present, the solutions may be inaccurate. In a condensing spray chamber, the significant size of the water droplets and the time and distance these droplets take to fall make droplet conduction a physical factor that needs to be considered in the model. Furthermore, the presence of even a relatively small amount of noncondensibles has been shown to reduce the amount of condensation [Ref 1]. It is desirable then to create a modeling tool that addresses these issues. The path taken to create such a tool is illustrated. The application of this tool and subsequent results are based on the spray chamber in the Spacecraft Propulsion Research Facility (B2) located at NASA's Plum Brook Station that tested an RL-10 engine. The platform upon which the condensation physics is modeled is SINDA/FLUINT. The use of SINDA/FLUINT enables modeling of various aspects of the entire testing facility, including the rocket exhaust duct flow and heat transfer to the exhaust duct wall. The ejector pumping system of the spray chamber is also easily implemented via SINDA/FLUINT. The goal is to create a transient one-dimensional flow and heat transfer model beginning at the rocket, continuing through the condensing spray chamber, and finally ending with the ejector pumping system. However, the model of the condensing spray chamber may be run independently of the rocket and ejector system details, with only appropriate mass flow boundary conditions placed at the entrance and exit of the condensing spray chamber model. The model of the condensing spray chamber takes into account droplet
Adjusting Population Risk for Functional Health Status.
Fuller, Richard L; Hughes, John S; Goldfield, Norbert I
2016-04-01
Risk adjustment accounts for differences in population mix by reducing the likelihood of enrollee selection by managed care plans and providing a correction to otherwise biased reporting of provider or plan performance. Functional health status is not routinely included within risk-adjustment methods, but is believed by many to be a significant enhancement to risk adjustment for complex enrollees and patients. In this analysis a standardized measure of functional health was created using 3 different source functional assessment instruments submitted to the Medicare program on condition of payment. The authors use a 5% development sample of Medicare claims from 2006 and 2007, including functional health assessments, and develop a model of functional health classification comprising 9 groups defined by the interaction of self-care, mobility, incontinence, and cognitive impairment. The 9 functional groups were used to augment Clinical Risk Groups, a diagnosis-based patient classification system, and when using a validation set of 100% of Medicare data for 2010 and 2011, this study found the use of the functional health module to improve the fit of observed enrollee cost, measured by the R² statistic, by 5% across all Medicare enrollees. The authors observed complex nonlinear interactions across functional health domains when constructing the model and caution that functional health status needs careful handling when used for risk adjustment. The addition of functional health status within existing risk-adjustment models has the potential to improve equitable resource allocation in the financing of care costs for more complex enrollees if handled appropriately. (Population Health Management 2016;19:136-144). PMID:26348621
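The fit-improvement claim above amounts to comparing R² for a cost regression with and without a functional-health indicator. A minimal sketch on synthetic data (the variable names and generated numbers are hypothetical, not the Medicare data or the Clinical Risk Groups model):

```python
import numpy as np

def r_squared(y, X):
    # Ordinary least squares fit; R^2 = 1 - SS_res / SS_tot
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 400
diagnosis_score = rng.normal(size=n)            # stand-in for a diagnosis-based risk score
functional_group = rng.integers(0, 2, size=n)   # stand-in for a functional-health group flag
cost = 2.0 * diagnosis_score + 1.5 * functional_group + rng.normal(scale=0.5, size=n)

base = np.column_stack([np.ones(n), diagnosis_score])
augmented = np.column_stack([base, functional_group])
r2_base, r2_aug = r_squared(cost, base), r_squared(cost, augmented)
```

When functional status genuinely predicts cost beyond diagnosis, `r2_aug` exceeds `r2_base`, which is the pattern the study reports at a 5% improvement.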
Li, Jue; Inada, Shin; Schneider, Jurgen E.; Zhang, Henggui; Dobrzynski, Halina; Boyett, Mark R.
2014-01-01
The aim of the study was to develop a three-dimensional (3D) anatomically-detailed model of the rabbit right atrium containing the sinoatrial and atrioventricular nodes to study the electrophysiology of the nodes. A model was generated based on 3D images of a rabbit heart (atria and part of ventricles), obtained using high-resolution magnetic resonance imaging. Segmentation was carried out semi-manually. A 3D right atrium array model (∼3.16 million elements), including eighteen objects, was constructed. For description of cellular electrophysiology, the Rogers-modified FitzHugh-Nagumo model was further modified to allow control of the major characteristics of the action potential with relatively low computational resource requirements. Model parameters were chosen to simulate the action potentials in the sinoatrial node, atrial muscle, inferior nodal extension and penetrating bundle. The block zone was simulated as passive tissue. The sinoatrial node, crista terminalis, main branch and roof bundle were considered as anisotropic. We have simulated normal and abnormal electrophysiology of the two nodes. In accordance with experimental findings: (i) during sinus rhythm, conduction occurs down the interatrial septum and into the atrioventricular node via the fast pathway (conduction down the crista terminalis and into the atrioventricular node via the slow pathway is slower); (ii) during atrial fibrillation, the sinoatrial node is protected from overdrive by its long refractory period; and (iii) during atrial fibrillation, the atrioventricular node reduces the frequency of action potentials reaching the ventricles. The model is able to simulate ventricular echo beats. In summary, a 3D anatomical model of the right atrium containing the cardiac conduction system is able to simulate a wide range of classical nodal behaviours. PMID:25380074
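The cellular kinetics in the atrium model above are a Rogers-modified FitzHugh-Nagumo system. A single-cell sketch with forward-Euler integration, using typical published parameter values rather than the values tuned per tissue object in the paper:

```python
# Rogers-modified FitzHugh-Nagumo excitable-cell model:
#   du/dt = c1*u*(u - a)*(1 - u) - c2*u*v
#   dv/dt = b*(u - d*v)
# Parameters are commonly cited Rogers/McCulloch-style values (assumed here).
A, B, C1, C2, D = 0.13, 0.013, 0.26, 0.1, 1.0

def simulate(u0=0.3, v0=0.0, dt=0.05, t_end=1000.0):
    # u: normalized membrane potential, v: recovery variable
    u, v = u0, v0
    trace = []
    for _ in range(int(t_end / dt)):
        du = C1 * u * (u - A) * (1.0 - u) - C2 * u * v
        dv = B * (u - D * v)
        u += dt * du
        v += dt * dv
        trace.append(u)
    return trace

ap = simulate(u0=0.3)    # suprathreshold start: full action potential
sub = simulate(u0=0.05)  # subthreshold start: decays back to rest
```

The threshold parameter `a` and recovery rate `b` control the action-potential shape, which is how the authors could match sinoatrial, atrial, and nodal tissue with low computational cost.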
Rogers, A.M.; Perkins, D.M.
1996-01-01
underlying mechanisms are completely different. Because this model approximates data characteristics we have observed in an earlier study, we adjusted the parameters of the model to fit a set of smoothed peak accelerations from earthquakes worldwide. These data have not been preselected for particular magnitude or distance ranges and contain earthquake records for magnitudes ranging from about M 3 to M 8 and distance ranging from a few kilometers to about 400 km. In fitting the data, we use a trial-and-error procedure, varying the mean and standard deviation of the patch peak-acceleration distribution, the patch size, and the pulse duration. The model explicitly includes triggering bias, and the triggering threshold is also a model parameter. The data can be approximated equally well by a model that includes the isochrone effect alone, the extremal effect alone, or both effects. Inclusion of both effects is likely to be closest to reality, but because both effects produce similar results, it is not possible to determine the relative contribution of each one. In any case, the model approximates the complex features of the observed data, including a decrease in magnitude scaling with increasing magnitude at short distances and increase in magnitude scaling with magnitude at large distances.
NASA Technical Reports Server (NTRS)
Tournier, Jean-Michel; El-Genk, Mohamed S.
1995-01-01
This report describes the user's manual for 'HPTAM,' a two-dimensional Heat Pipe Transient Analysis Model. HPTAM is described in detail in the UNM-ISNPS-3-1995 report which accompanies the present manual. The model offers a menu that lists a number of working fluids and wall and wick materials from which the user can choose. HPTAM is capable of simulating the startup of heat pipes from either a fully-thawed or frozen condition of the working fluid in the wick structure. The manual includes instructions for installing and running HPTAM on either a UNIX, MS-DOS or VMS operating system. Samples for input and output files are also provided to help the user with the code.
Wai, Rong-Jong; Yang, Zhi-Wei
2008-10-01
This paper focuses on the development of adaptive fuzzy neural network control (AFNNC), including indirect and direct frameworks for an n-link robot manipulator, to achieve high-precision position tracking. In general, it is difficult to adopt a model-based design to achieve this control objective due to the uncertainties in practical applications, such as friction forces, external disturbances, and parameter variations. In order to cope with this problem, an indirect AFNNC (IAFNNC) scheme and a direct AFNNC (DAFNNC) strategy are investigated without the requirement of prior system information. In these model-free control topologies, a continuous-time Takagi-Sugeno (T-S) dynamic fuzzy model with online learning ability is constructed to represent the system dynamics of an n-link robot manipulator. In the IAFNNC, an FNN estimator is designed to tune the nonlinear dynamic function vector in fuzzy local models, and then, the estimative vector is used to indirectly develop a stable IAFNNC law. In the DAFNNC, an FNN controller is directly designed to imitate a predetermined model-based stabilizing control law, and then, the stable control performance can be achieved by only using joint position information. All the IAFNNC and DAFNNC laws and the corresponding adaptive tuning algorithms for FNN weights are established in the sense of Lyapunov stability analyses to ensure the stable control performance. Numerical simulations and experimental results of a two-link robot manipulator actuated by dc servomotors are given to verify the effectiveness and robustness of the proposed methodologies. In addition, the superiority of the proposed control schemes is indicated in comparison with proportional-differential control, fuzzy-model-based control, T-S-type FNN control, and robust neural fuzzy network control systems. PMID:18784015
NASA Technical Reports Server (NTRS)
Tournier, Jean-Michel; El-Genk, Mohamed S.
1995-01-01
A two-dimensional Heat Pipe Transient Analysis Model, 'HPTAM,' was developed to simulate the transient operation of fully-thawed heat pipes and the startup of heat pipes from a frozen state. The model incorporates: (a) sublimation and resolidification of working fluid; (b) melting and freezing of the working fluid in the porous wick; (c) evaporation of thawed working fluid and condensation as a thin liquid film on a frozen substrate; (d) free-molecule, transition, and continuum vapor flow regimes, using the Dusty Gas Model; (e) liquid flow and heat transfer in the porous wick; and (f) thermal and hydrodynamic couplings of phases at their respective interfaces. HPTAM predicts the radius of curvature of the liquid meniscus at the liquid-vapor interface and the radial location of the working fluid level (liquid or solid) in the wick. It also includes the transverse momentum jump condition (capillary relationship of Pascal) at the liquid-vapor interface and geometrically relates the radius of curvature of the liquid meniscus to the volume fraction of vapor in the wick. The present model predicts the capillary limit and partial liquid recess (dryout) in the evaporator wick, and incorporates a liquid pooling submodel, which simulates accumulation of the excess liquid in the vapor core at the condenser end.
NASA Technical Reports Server (NTRS)
Ozguven, H. Nevzat
1991-01-01
A six-degree-of-freedom nonlinear semi-definite model with time varying mesh stiffness has been developed for the dynamic analysis of spur gears. The model includes a spur gear pair, two shafts, two inertias representing load and prime mover, and bearings. As the shaft and bearing dynamics have also been considered in the model, the effect of lateral-torsional vibration coupling on the dynamics of gears can be studied. In the nonlinear model developed several factors such as time varying mesh stiffness and damping, separation of teeth, backlash, single- and double-sided impacts, various gear errors and profile modifications have been considered. The dynamic response to internal excitation has been calculated by using the 'static transmission error method' developed. The software prepared (DYTEM) employs the digital simulation technique for the solution, and is capable of calculating dynamic tooth and mesh forces, dynamic factors for pinion and gear, dynamic transmission error, dynamic bearing forces and torsions of shafts. Numerical examples are given in order to demonstrate the effect of shaft and bearing dynamics on gear dynamics.
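In its simplest single-degree-of-freedom reduction, the nonlinear mesh model above combines a time-varying stiffness with a backlash dead zone on the relative mesh coordinate. A sketch of that reduced equation with semi-implicit Euler integration (all numbers are illustrative and this is not the DYTEM software; the six-DOF model in the paper also carries shaft and bearing coordinates):

```python
import math

# m_e * x'' + c * x' + k(t) * g(x) = F_static, on the mesh deflection x

def mesh_stiffness(t, k_mean=2.0e8, k_amp=0.4e8, mesh_freq=500.0):
    # Rectangular-wave stiffness as tooth pairs enter and leave contact
    return k_mean + k_amp * (1.0 if math.sin(2 * math.pi * mesh_freq * t) >= 0 else -1.0)

def backlash(x, half_gap=1.0e-5):
    # No tooth force while the mesh travels across the backlash gap
    if x > half_gap:
        return x - half_gap
    if x < -half_gap:
        return x + half_gap
    return 0.0

def simulate(m_e=0.01, c=50.0, force=100.0, dt=1.0e-6, t_end=0.05):
    x, v, t, xs = 0.0, 0.0, 0.0, []
    while t < t_end:
        a = (force - c * v - mesh_stiffness(t) * backlash(x)) / m_e
        v += dt * a          # semi-implicit (symplectic) Euler
        x += dt * v
        xs.append(x)
        t += dt
    return xs

xs = simulate()  # crosses the gap, impacts, then settles near static deflection
```

Tooth separation and single-sided impacts appear naturally when `x` re-enters the dead zone, which is the class of behaviour the abstract's "static transmission error method" resolves.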
Nath, R.; Park, C.H.; King, C.R.; Muench, P. )
1990-09-01
A dose computation model has been developed for the determination of dose distributions around vaginal plaque applicators containing encapsulated ²⁴¹Am sources. Encapsulated sources of ²⁴¹Am emit primarily 60-keV photons, which have a half-value layer thickness of 1/8 mm of lead. This makes possible highly effective in vivo shielding of normal tissues at risk, by placing thin lead shields at appropriate places on the applicator. However, self-absorption of photons in the source material itself is intense, requiring bulky sources of about 1 cm diameter. These sources also produce considerable source-to-source shielding, which must be taken into account in dose calculations. Our dose computation model for a single source employs three-dimensional integration of dose contributions from volume elements of the source, including the effects of absorption and scattering of photons in the source material, titanium encapsulation, and water. An empirical correction to Berger's data on buildup factors of point, isotropic sources is made to account for the effects of anisotropic photon emission by cylindrical ²⁴¹Am sources. The second part of our dose computation model takes into account source-to-source shielding effects on both primary and scattered photons for the vaginal plaque geometry. The results of the model have been verified for accuracy by comparisons with extensive dosimetry measurements using lithium fluoride thermoluminescent dosimeters.
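The core of such a calculation is a point-kernel with exponential attenuation and a buildup factor, summed over source volume elements with their self-absorption paths. A heavily simplified 1-D slab sketch of that integration, assuming a Berger-form buildup B(μr) = 1 + a·μr·exp(b·μr); the attenuation and Berger coefficients below are made-up placeholders, not the fitted values from the paper:

```python
import math

MU_WATER = 0.2    # 1/cm, assumed water attenuation at 60 keV (illustrative)
MU_SOURCE = 8.0   # 1/cm, assumed attenuation in the source matrix (illustrative)
A_BERGER, B_BERGER = 1.2, 0.06  # assumed Berger buildup coefficients

def buildup(mu_r):
    # Berger-form buildup factor for scattered-photon dose
    return 1.0 + A_BERGER * mu_r * math.exp(B_BERGER * mu_r)

def on_axis_dose(source_length_cm, r_cm, n_slabs=200):
    """Relative on-axis dose at r_cm beyond the source tip (1-D slab model)."""
    dz = source_length_cm / n_slabs
    total = 0.0
    for i in range(n_slabs):
        z = (i + 0.5) * dz                 # slab position inside the source
        t_src = source_length_cm - z       # self-absorption path in source material
        d = t_src + r_cm                   # slab-to-point distance
        mu_r_water = MU_WATER * r_cm
        atten = math.exp(-(MU_SOURCE * t_src + mu_r_water))
        total += atten * buildup(mu_r_water) / (4.0 * math.pi * d * d) * dz
    return total
```

Because `MU_SOURCE` is large, slabs deep inside the source contribute almost nothing, which is the self-absorption effect that forces the bulky 1 cm sources described above.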
Scot Martin
2013-01-31
The chemical evolution of secondary-organic-aerosol (SOA) particles and how this evolution alters their cloud-nucleating properties were studied. Simplified forms of full Koehler theory were targeted, specifically forms that contain only those aspects essential to describing the laboratory observations, because of the requirement to minimize the computational burden for use in integrated climate and chemistry models. The associated data analysis and interpretation have therefore focused on model development in the framework of modified kappa-Koehler theory. Kappa is a single parameter describing effective hygroscopicity, grouping together several separate physicochemical parameters (e.g., molar volume, surface tension, and van't Hoff factor) that otherwise must be tracked and evaluated in an iterative full Koehler equation in a large-scale model. A major finding of the project was that secondary organic materials produced by the oxidation of a range of biogenic volatile organic compounds under diverse conditions have kappa values bracketed in the range 0.10 +/- 0.05. In these same experiments, somewhat incongruously, there was significant chemical variation in the secondary organic material, especially in oxidation state, as indicated by changes in the particle mass spectra. Taken together, these findings support the use of kappa as a simplified yet accurate general parameter to represent the CCN activation of secondary organic material in large-scale atmospheric and climate models, thereby greatly reducing the computational burden while simultaneously including the most recent mechanistic findings of laboratory studies.
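The single-parameter representation above can be sketched directly: kappa-Koehler theory gives an equilibrium saturation ratio over a wet particle, and its peak over wet diameter is the critical supersaturation for CCN activation. A minimal sketch using standard water properties (the constants are textbook values, not measurements from the project):

```python
import math

SIGMA = 0.072     # J/m^2, surface tension of water
MW = 0.018        # kg/mol, molar mass of water
RHO_W = 1000.0    # kg/m^3, density of water
R_GAS = 8.314     # J/(mol K)

def saturation_ratio(D, Dd, kappa, T=298.15):
    # Kelvin (curvature) term times kappa-parameterized Raoult (solute) term
    kelvin = math.exp(4.0 * SIGMA * MW / (R_GAS * T * RHO_W * D))
    raoult = (D**3 - Dd**3) / (D**3 - Dd**3 * (1.0 - kappa))
    return raoult * kelvin

def critical_supersaturation_pct(Dd, kappa, T=298.15):
    # Scan wet diameters above the dry size for the Koehler-curve maximum
    best, D = 0.0, Dd * 1.001
    while D < 1.0e-5:
        best = max(best, saturation_ratio(D, Dd, kappa, T) - 1.0)
        D *= 1.01
    return 100.0 * best

sc = critical_supersaturation_pct(100e-9, 0.10)  # kappa ~0.10, as found for biogenic SOA
```

Higher kappa lowers the critical supersaturation, which is why a single bracketed value of 0.10 +/- 0.05 suffices to represent CCN activation in large-scale models.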
Carlton, Annmarie G; Turpin, Barbara I; Altieri, Katye E; Seitzinger, Sybil P; Mathur, Rohit; Roselle, Shawn J; Weber, Rodney J
2008-12-01
Mounting evidence suggests that low-volatility (particle-phase) organic compounds form in the atmosphere through aqueous-phase reactions in clouds and aerosols. Although some models have begun including secondary organic aerosol (SOA) formation through cloud processing, validation studies that compare predictions and measurements are needed. In this work, agreement between modeled organic carbon (OC) and aircraft measurements of water-soluble OC improved for all 5 of the compared ICARTT NOAA-P3 flights during August when an in-cloud SOA (SOAcld) formation mechanism was added to CMAQ (a regional-scale atmospheric model). The improvement was most dramatic for the August 14th flight, a flight designed specifically to investigate clouds. During this flight the normalized mean bias for layer-averaged OC was reduced from -64% to -15% and the correlation (r) improved from 0.5 to 0.6. Underprediction of OC aloft by atmospheric models may be explained, in part, by this formation mechanism (SOAcld). OC formation aloft contributes to long-range pollution transport and has implications for radiative forcing, regional air quality, and climate. PMID:19192800
Wishart, G C; Bajdik, C D; Dicks, E; Provenzano, E; Schmidt, M K; Sherman, M; Greenberg, D C; Green, A R; Gelmon, K A; Kosma, V-M; Olson, J E; Beckmann, M W; Winqvist, R; Cross, S S; Severi, G; Huntsman, D; Pylkäs, K; Ellis, I; Nielsen, T O; Giles, G; Blomqvist, C; Fasching, P A; Couch, F J; Rakha, E; Foulkes, W D; Blows, F M; Bégin, L R; van't Veer, L J; Southey, M; Nevanlinna, H; Mannermaa, A; Cox, A; Cheang, M; Baglietto, L; Caldas, C; Garcia-Closas, M; Pharoah, P D P
2012-01-01
Background: Predict (www.predict.nhs.uk) is an online, breast cancer prognostication and treatment benefit tool. The aim of this study was to incorporate the prognostic effect of HER2 status in a new version (Predict+), and to compare its performance with the original Predict and Adjuvant!. Methods: The prognostic effect of HER2 status was based on an analysis of data from 10 179 breast cancer patients from 14 studies in the Breast Cancer Association Consortium. The hazard ratio estimates were incorporated into Predict. The validation study was based on 1653 patients with early-stage invasive breast cancer identified from the British Columbia Breast Cancer Outcomes Unit. Predicted overall survival (OS) and breast cancer-specific survival (BCSS) for Predict+, Predict and Adjuvant! were compared with observed outcomes. Results: All three models performed well for both OS and BCSS. Both Predict models provided better BCSS estimates than Adjuvant!. In the subset of patients with HER2-positive tumours, Predict+ performed substantially better than the other two models for both OS and BCSS. Conclusion: Predict+ is the first clinical breast cancer prognostication tool that includes tumour HER2 status. Use of the model might lead to more accurate absolute treatment benefit predictions for individual patients. PMID:22850554
Marcucci, Lorenzo; Washio, Takumi; Yanagida, Toshio
2016-09-01
Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630
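The contrast drawn above, between infinitely sharp energy-well minima and finite wells with thermal fluctuations, can be illustrated with an overdamped Langevin walk in a two-well landscape: with noise included, the system makes repeated backward and forward jumps between states, and the jump count grows with temperature. A toy sketch (the potential, time units, and parameters are arbitrary illustrations, not fitted cross-bridge values):

```python
import math
import random

def double_well_force(x):
    # V(x) = (x^2 - 1)^2  ->  F(x) = -dV/dx = -4x(x^2 - 1); wells at x = -1, +1
    return -4.0 * x * (x * x - 1.0)

def count_jumps(kT, dt=1e-3, steps=200_000, seed=1):
    # Overdamped Euler-Maruyama integration with Gaussian thermal noise
    rng = random.Random(seed)
    x, state, jumps = -1.0, -1, 0
    noise = math.sqrt(2.0 * kT * dt)
    for _ in range(steps):
        x += dt * double_well_force(x) + noise * rng.gauss(0.0, 1.0)
        if state == -1 and x > 1.0:
            state, jumps = 1, jumps + 1
        elif state == 1 and x < -1.0:
            state, jumps = -1, jumps + 1
    return jumps
```

With the barrier height fixed, raising `kT` sharply increases the transition count, mirroring the paper's point that fluctuating stable states let the lever arm explore the landscape, whereas nearly noiseless (sharp-minimum) dynamics produce far fewer jumps.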
NASA Astrophysics Data System (ADS)
Chirinda, Ngonidzashe; Olesen, Jørgen E.; Heckrath, Goswin; Paradelo Pérez, Marcos; Taghizadeh-Toosi, Arezoo
2016-04-01
Globally, soil carbon (C) reserves are second only to those in the ocean and constitute a significant C reservoir. In the case of arable soils, the quantity of stored C is influenced by various factors (e.g., management practices). Currently, the topography-related influences on in-field soil C dynamics remain largely unknown. However, topography is known to influence a multiplicity of factors that regulate C input, storage, and redistribution. To understand the patterns and untangle the complexity of soil C dynamics in arable landscapes, our study was conducted with soils from shoulderslope and footslope positions on a 7.1 ha winter wheat field in western Denmark. We first collected soil samples from shoulderslope and footslope positions at various depth intervals down to 100 cm and analyzed them for physical and chemical properties, including texture and soil organic C contents. In-situ carbon dioxide (CO2) concentrations were measured at different soil profile depths at both positions for a year. Soil moisture content and temperature at 5 and 40 cm depth were measured continuously. Additionally, surface soil CO2 fluxes at shoulderslope and footslope positions were measured. We then used measurement data collected from the two landscape positions to calibrate the one-dimensional mechanistic SOILCO2 module of the HYDRUS-1D software package and obtained soil CO2 fluxes from the soil profile at the two landscape positions. Furthermore, we tested whether the inclusion of vertical and lateral soil C movement improved the modeling of C dynamics in cultivated landscapes. For that, soil profile CO2 fluxes were compared with those obtained using a simple process-based whole-profile soil C model, C-TOOL, which was modified to include vertical and lateral movement of C in the landscape. Our results highlight the need to consider vertical and lateral soil C movement in the modeling of C dynamics in cultivated landscapes, for better quantification of net carbon storage.
Ferrer, Javier; Pérez-Martín, Miguel A; Jiménez, Sara; Estrela, Teodoro; Andreu, Joaquín
2012-12-01
This paper describes two different GIS models - one stationary (GeoImpress) and the other non-stationary (Patrical) - that assess water quantity and quality in the Júcar River Basin District, a large river basin district (43,000 km²) located in Spain. It aims to analyze the status of surface water (SW) and groundwater (GW) bodies in relation to the European Water Framework Directive (WFD) and to support measures to achieve the WFD objectives. The non-stationary model is used for quantitative analysis of water resources, including long-term water resource assessment; estimation of available GW resources; and evaluation of climate change impact on water resources. The main results obtained are the following: recent water resources have been reduced by approximately 18% compared to the reference period 1961-1990; the GW environmental volume required to accomplish the WFD objectives is approximately 30% of the GW annual resources; and the climate change impact on water resources for the short-term (2010-2040), based on a dynamic downscaling A1B scenario, implies a reduction in water resources by approximately 19% compared to 1990-2000 and a reduction of approximately 40-50% for the long-term (2070-2100), based on dynamic downscaling A2 and B2 scenarios. The model also assesses the impact of various fertilizer application scenarios on the status of future GW quality (nitrate) and if these future statuses will meet the WFD requirements. The stationary model generates data on the actual and future chemical status of SW bodies in the river basin according to the modeled scenarios and reflects the implementation of different types of measures to accomplish the Urban Waste Water Treatment Directive and the WFD. Finally, the selection and prioritization of additional measures to accomplish the WFD are based on cost-effectiveness analysis. PMID:22959072
Church, A Timothy; Katigbak, Marcia S; Mazuera Arias, Rina; Rincon, Brigida Carolina; Vargas-Flores, José de Jesús; Ibáñez-Reyes, Joselina; Wang, Lei; Alvarez, Juan M; Wang, Congcong; Ortiz, Fernando A
2014-06-01
In the self-enhancement literature, 2 major controversies remain: whether self-enhancement is a cultural universal and whether it is healthy or maladaptive. Use of the social relations model (SRM; Kenny, 1994) might facilitate resolution of these controversies. We applied the SRM with a round-robin design in both friend and family contexts in 4 diverse cultures: the United States (n = 399), Mexico (n = 413), Venezuela (n = 290), and China (n = 222). Results obtained with social comparison, self-insight, and SRM conceptualizations and indices of self-enhancement were compared for both agentic traits (i.e., egoistic bias) and communal traits (i.e., moralistic bias). Conclusions regarding cultural differences in the prevalence of self-enhancement vs. self-effacement tendencies, and the relationship between self-enhancement and adjustment, varied depending on the index of self-enhancement used. For example, consistent with cultural psychology perspectives, Chinese showed a greater tendency to self-efface than self-enhance using social comparison and self-insight indices, particularly on communal traits in the friend context. However, no cultural differences were observed when perceiver and target effects were controlled using the SRM indices. In all cultures, self-enhancement indices were moderately consistent across friend and family contexts, suggesting traitlike tendencies. To a similar extent in all 4 cultures, self-enhancement tendencies, as measured by the SRM indices, were moderately related to self-rated adjustment, but unrelated, or less so, to observer-rated adjustment. PMID:24841101
2013-01-01
Background Adjusted clinical groups (ACG®) have been widely used to adjust resource distribution; however, the relationship with effectiveness has been questioned. The purpose of the study was to measure the relationship between efficiency assessed by ACG® and a clinical effectiveness indicator in adults attended in Primary Health Care Centres (PHCs). Methods Research design: cross-sectional study. Subjects: 196,593 patients aged >14 years in 13 PHCs in Catalonia (Spain). Measures: Age, sex, PHC, basic care team (BCT), visits, episodes (diagnoses), and total direct costs of PHC care and co-morbidity as measured by ACG® indicators: Efficiency indices for costs, visits, and episodes (costs EI, visits EI, episodes EI); a complexity or risk index (RI); and effectiveness measured by a general synthetic index (SI). The relationship between EI, RI, and SI in each PHC and BCT was measured by multiple correlation coefficients (r). Results In total, 56 of the 106 defined ACG® were present in the study population, with five corresponding to 44.5% of the patients, 11 to 68.0% of patients, and 30 present in less than 0.5% of the sample. The RI in each PHC ranged from 0.9 to 1.1. Costs, visits, and episodes had similar trends for efficiency in six PHCs. There was moderate correlation between costs EI and visits EI (r = 0.59). SI correlation with episodes EI and costs EI was moderate (r = 0.48 and r = −0.34, respectively) and was r = −0.14 for visits EI. Correlation between RI and SI was r = 0.29. Conclusions The Efficiency and Effectiveness ACG® indicators permit a comparison of primary care processes between PHCs. Acceptable correlation exists between effectiveness and indicators of efficiency in episodes and costs. PMID:24139144
NASA Astrophysics Data System (ADS)
Ruiz De la Cruz, A.; Ferrer, A.; del Hoyo, J.; Siegel, J.; Solis, J.
2011-08-01
In this work, we report a model for accurately calculating the focal volumes corresponding to astigmatic elliptical beams used in fs-laser waveguide writing. The model is based on the use of the ABCD matrix formalism for the propagation of a Gaussian beam. The code includes the effects of propagation on the astigmatic elliptical beam, and the effects of beam truncation and diffraction at the entrance pupil of the focusing objective due to beam clipping when overfilling the pupil. The results predict that for a given astigmatism value and propagation distance it is possible to efficiently suppress the astigmatic focus closer to the surface. This explains previous experimental results where single structure waveguides with controllable aspect-ratio were fabricated using astigmatic-elliptical beams. Furthermore, we investigate the respective roles of astigmatism and beam propagation, as well as the strong impact of truncation and diffraction effects caused by clipping the beam at the pupil of the focusing optics. Finally, based on the results from our model, we present some practical considerations in terms of beam propagation and phase wrapping constraints.
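The ABCD-matrix propagation of a Gaussian beam that underlies the model above can be sketched via the complex q-parameter, with q' = (Aq + B)/(Cq + D). The wavelength, waist, and propagation distance below are illustrative values only, not those of the waveguide-writing setup, and the beam is treated as stigmatic (the astigmatic, truncated case the paper models is more involved):

```python
import math

def propagate_q(q, abcd):
    """Propagate the complex beam parameter q through an ABCD system:
    q' = (A*q + B) / (C*q + D)."""
    a, b, c, d = abcd
    return (a * q + b) / (c * q + d)

def spot_radius(q, wavelength):
    """1/e^2 beam radius from q, using 1/q = 1/R - i*lambda/(pi*w^2)."""
    inv = 1.0 / q
    return math.sqrt(-wavelength / (math.pi * inv.imag))

# illustrative numbers: 800 nm beam with a 1 mm waist
wavelength = 800e-9                      # m
w0 = 1e-3                                # waist radius, m
zr = math.pi * w0 ** 2 / wavelength      # Rayleigh range
q0 = complex(0.0, zr)                    # q at the waist

# free-space propagation over 2 Rayleigh ranges: ABCD = (1, d, 0, 1)
q1 = propagate_q(q0, (1.0, 2.0 * zr, 0.0, 1.0))
w1 = spot_radius(q1, wavelength)         # expands to w0*sqrt(1 + (z/zR)^2)
```

A focusing objective would be inserted as an additional thin-lens matrix (1, 0, -1/f, 1); the paper's model additionally accounts for astigmatism and pupil truncation, which this sketch omits.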
González-Suárez, Ana; Berjano, Enrique; Guerra, Jose M; Gerardo-Giorda, Luca
2016-01-01
Radiofrequency catheter ablation (RFCA) is a routine treatment for cardiac arrhythmias. During RFCA, the electrode-tissue interface temperature should be kept below 80 °C to avoid thrombus formation. Open-irrigated electrodes facilitate power delivery while keeping low temperatures around the catheter. No computational model of an open-irrigated electrode in endocardial RFCA accounting for both the saline irrigation flow and the blood motion in the cardiac chamber has been proposed yet. We present the first computational model including both effects at once. The model has been validated against existing experimental results. Computational results showed that the surface lesion width and blood temperature are affected by both the electrode design and the irrigation flow rate. Smaller surface lesion widths and blood temperatures are obtained with higher irrigation flow rate, while the lesion depth is not affected by changing the irrigation flow rate. Larger lesions are obtained with increasing power and the electrode-tissue contact. Also, larger lesions are obtained when electrode is placed horizontally. Overall, the computational findings are in close agreement with previous experimental results providing an excellent tool for future catheter research. PMID:26938638
NASA Astrophysics Data System (ADS)
Hering, Martin; Brouwer, Jacob; Winkler, Wolfgang
2016-01-01
A micro-tubular solid oxide fuel cell stack model including an integrated cooling system was developed using a quasi three-dimensional, spatially resolved, transient thermodynamic, physical and electrochemical model that accounts for the complex geometrical relations between the cells and cooling-tubes. For the purpose of model evaluation, reference operating, geometrical and material properties are determined. The reference stack design is composed of 3294 cells, with a diameter of 2 mm, and 61 cooling-tubes. The stack is operated at a power density of 300 mW/cm² and air is used as the cooling fluid inside the integrated cooling system. Regarding the performance, the reference design achieves an electrical stack efficiency of around 57% and a power output of 1.1 kW. The maximum occurring temperature of the positive electrode electrolyte negative electrode (PEN)-structure is 1369 K. As a result of a design of experiments, parameters of a best-case design are determined. The best-case design achieves a comparable power output of 1.1 kW with an electrical efficiency of 63% and a maximum occurring temperature of the PEN-structure of 1268 K. Nevertheless, the best-case design has an increased volume based on the higher diameter of 3 mm and increased spacing between the cells.
Poulard, David; Subit, Damien; Donlon, John-Paul; Kent, Richard W
2015-02-26
A method was developed to adjust the posture of a human numerical model to match the pre-impact posture of a human subject. The method involves pulling cables to prescribe the position and orientation of the head, spine and pelvis during a simulation. Six postured models matching the pre-impact postures measured on subjects tested in previous studies were created from a human numerical model. Posture scalars were measured before and after applying the method to evaluate its efficiency. The lateral leaning angle θL, defined between T1 and the pelvis in the coronal plane, was found to be significantly improved after application, with an average difference of 0.1±0.1° with the PMHS (4.6±2.7° before application). This method will be applied in further studies to analyze independently the contribution of pre-impact posture to impact response using human numerical models. PMID:25596635
Koivunoro, H; Schmitz, T; Hippeläinen, E; Liu, Y-H; Serén, T; Kotiluoto, P; Auterinen, I; Savolainen, S
2014-06-01
The mixed neutron-photon beam of the FiR 1 reactor is used for boron neutron capture therapy (BNCT) in Finland. A beam model has been defined for patient treatment planning and dosimetric calculations. The neutron beam model has been validated with activation foil measurements. The photon beam model has not been thoroughly validated against measurements, because the beam photon dose rate is low, at most only 2% of the total weighted patient dose at FiR 1. However, improvement of the photon dose detection accuracy is worthwhile, since the beam photon dose is of concern in beam dosimetry. In this study, we have performed ionization chamber measurements with multiple build-up caps of different thicknesses to adjust the calculated photon spectrum of the FiR 1 beam model. PMID:24588987
NASA Astrophysics Data System (ADS)
Bermúdez, María; Neal, Jeffrey C.; Bates, Paul D.; Coxon, Gemma; Freer, Jim E.; Cea, Luis; Puertas, Jerónimo
2016-04-01
Flood inundation models require appropriate boundary conditions to be specified at the limits of the domain, which commonly consist of upstream flow rate and downstream water level. These data are usually acquired from gauging stations on the river network where measured water levels are converted to discharge via a rating curve. Derived streamflow estimates are therefore subject to uncertainties in this rating curve, including extrapolating beyond the maximum observed ratings magnitude. In addition, the limited number of gauges in reach-scale studies often requires flow to be routed from the nearest upstream gauge to the boundary of the model domain. This introduces additional uncertainty, derived not only from the flow routing method used, but also from the additional lateral rainfall-runoff contributions downstream of the gauging point. Although generally assumed to have a minor impact on discharge in fluvial flood modeling, this local hydrological input may become important in a sparse gauge network or in events with significant local rainfall. In this study, a method to incorporate rating curve uncertainty and the local rainfall-runoff dynamics into the predictions of a reach-scale flood inundation model is proposed. Discharge uncertainty bounds are generated by applying a non-parametric local weighted regression approach to stage-discharge measurements for two gauging stations, while measured rainfall downstream from these locations is cascaded into a hydrological model to quantify additional inflows along the main channel. A regional simplified-physics hydraulic model is then applied to combine these inputs and generate an ensemble of discharge and water elevation time series at the boundaries of a local-scale high complexity hydraulic model. Finally, the effect of these rainfall dynamics and uncertain boundary conditions are evaluated on the local-scale model. Improvements in model performance when incorporating these processes are quantified using observed
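The non-parametric local weighted regression step used above to fit the stage-discharge rating curve can be sketched as a tricube-kernel locally weighted linear fit. The synthetic gaugings and the window fraction below are illustrative assumptions, not the study's data or exact estimator (which also generates uncertainty bounds around the fit):

```python
import math

def local_weighted_fit(stages, discharges, x0, frac=0.5):
    """Predict discharge at stage x0 via locally weighted linear regression
    with a tricube kernel over the nearest frac*n gaugings (LOWESS-style)."""
    n = len(stages)
    k = max(2, int(math.ceil(frac * n)))
    # keep the k gaugings nearest to x0
    near = sorted(range(n), key=lambda i: abs(stages[i] - x0))[:k]
    h = max(abs(stages[i] - x0) for i in near) or 1.0
    w = {i: (1.0 - (abs(stages[i] - x0) / h) ** 3) ** 3 for i in near}
    # weighted least squares for a local line q = a + b*s
    sw = sum(w.values())
    sx = sum(w[i] * stages[i] for i in near)
    sy = sum(w[i] * discharges[i] for i in near)
    sxx = sum(w[i] * stages[i] ** 2 for i in near)
    sxy = sum(w[i] * stages[i] * discharges[i] for i in near)
    denom = sw * sxx - sx * sx
    if abs(denom) < 1e-12:
        return sy / sw
    b = (sw * sxy - sx * sy) / denom
    a = (sy - b * sx) / sw
    return a + b * x0

# hypothetical stage (m) / discharge (m^3/s) gaugings
stages = [0.5, 0.8, 1.0, 1.3, 1.6, 2.0, 2.4, 2.9]
flows = [2.1, 5.0, 7.9, 13.2, 19.8, 31.0, 44.5, 65.0]
q = local_weighted_fit(stages, flows, 1.5)
```

Resampling the gaugings (or perturbing them by their measurement error) and refitting would yield the ensemble of discharge boundary conditions described in the abstract.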
42 CFR 422.310 - Risk adjustment data.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 3 2014-10-01 2014-10-01 false Risk adjustment data. 422.310 Section 422.310... Organizations § 422.310 Risk adjustment data. (a) Definition of risk adjustment data. Risk adjustment data are all data that are used in the development and application of a risk adjustment payment model. (b)...
42 CFR 422.310 - Risk adjustment data.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 3 2012-10-01 2012-10-01 false Risk adjustment data. 422.310 Section 422.310... Organizations § 422.310 Risk adjustment data. (a) Definition of risk adjustment data. Risk adjustment data are all data that are used in the development and application of a risk adjustment payment model. (b)...
42 CFR 422.310 - Risk adjustment data.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 3 2011-10-01 2011-10-01 false Risk adjustment data. 422.310 Section 422.310....310 Risk adjustment data. (a) Definition of risk adjustment data. Risk adjustment data are all data that are used in the development and application of a risk adjustment payment model. (b)...
NASA Astrophysics Data System (ADS)
de Monserrat, Albert; Morgan, Jason P.
2016-04-01
Materials in Earth's interior are exposed to thermomechanical (e.g. variations in stress/pressure and temperature) and chemical (e.g. phase changes, serpentinization, melting) processes that are associated with volume changes. Most geodynamical codes assume the incompressible Boussinesq approximation, where changes in density due to temperature or phase change effect buoyancy, yet volumetric changes are not allowed, and mass is not locally conserved. Elastic stresses induced by volume changes due to thermal expansion, serpentinization, and melt intrusion should cause 'cold' rocks to brittlely fail at ~1% strain. When failure/yielding is an important rheological feature, we think it plausible that volume-change-linked stresses may have a significant influence on the localization of deformation. Here we discuss a new Lagrangian formulation for "elasto-compressible -visco-plastic" flow. In this formulation, the continuity equation has been generalised from a Boussinesq incompressible formulation to include recoverable, elastic, volumetric deformations linked to the local state of mean compressive stress. This formulation differs from the 'anelastic approximation' used in compressible viscous flow in that pressure- and temperature- dependent volume changes are treated as elastic deformation for a given pressure, temperature, and composition/phase. This leads to a visco-elasto-plastic formulation that can model the effects of thermal stresses, pressure-dependent volume changes, and local phase changes. We use a modified version of the (Miliman-based) FEM code M2TRI to run a set of numerical experiments for benchmarking purposes. Three benchmarks are being used to assess the accuracy of this formulation: (1) model the effects on density of a compressible mantle under the influence of gravity; (2) model the deflection of a visco-elastic beam under the influence of gravity, and its recovery when gravitational loading is artificially removed; (3) Modelling the stresses
Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L
2016-09-01
Study Design Retrospective cohort. Background Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives To determine effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndromes and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R²) and beta coefficients for model variables were estimated. Results Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%), 10 (<1%), and 601 (29%) were excluded due to incomplete psychosocial data, McKenzie classification data, and missing FS at discharge, respectively. The final sample for analyses was 723 (35%). Overall R² for the baseline prediction FS model was 0.40. Adding classification variables to the baseline model did not result in significant increases in R². McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, overall model R² increased to 0.44. Although none of these increases in R² were significant, some classification variables were stronger predictors compared with some other variables included in
Yoo, Hyung Chol; Miller, Matthew J; Yip, Pansy
2015-04-01
There is limited research examining psychological correlates of a uniquely racialized experience of the model minority stereotype faced by Asian Americans. The present study examined the factor structure and fit of the only published measure of the internalization of the model minority myth, the Internalization of the Model Minority Myth Measure (IM-4; Yoo et al., 2010), with a sample of 155 Asian American high school adolescents. We also examined the link between internalization of the model minority myth types (i.e., myth associated with achievement and myth associated with unrestricted mobility) and psychological adjustment (i.e., affective distress, somatic distress, performance difficulty, academic expectations stress), and the potential moderating effect of academic performance (cumulative grade point average). Results suggested the 2-factor model of the IM-4 had an acceptable fit to the data and supported the factor structure using confirmatory factor analyses. Internalizing the model minority myth of achievement related positively to academic expectations stress; however, internalizing the model minority myth of unrestricted mobility related negatively to academic expectations stress, both controlling for gender and academic performance. Finally, academic performance moderated the model minority myth associated with unrestricted mobility and affective distress link and the model minority myth associated with achievement and performance difficulty link. These findings highlight the complex ways in which the model minority myth relates to psychological outcomes. PMID:25198414
NASA Technical Reports Server (NTRS)
Suit, W. T.
1986-01-01
Shuttle flight test data were used to determine values for the short-period parameters. The best identified, as judged by its estimated standard deviation, was the elevon effectiveness parameter Cm,δe². However, the scatter about the preflight prediction of Cm,δe² was large. Other investigators have suggested that adding nonlinear terms to the mathematical model used to identify Cm,δe could reduce the scatter. The results of this investigation show that Cm,δe² is the only applicable identifiable nonlinear parameter and that the changes in Cm,δe values when Cm,δe² is included are on the order of ten percent for the data estimated.
NASA Astrophysics Data System (ADS)
Grotheer, E.; Mangano, V.; Livi, S. A.
2014-12-01
The work presented here builds on the results of Grotheer & Livi [2014], which found that the majority of the vapor produced by meteoroid impacts on Mercury is caused by meteoroids with masses 4.2 × 10^-7 g ≤ m ≤ 8.3 × 10^-2 g. Meteoroids with a mass of 2.1 × 10^-4 g are the largest contributors to the vapor released by meteoroid impacts, thus here we focus on meteoroids with such masses as an input to a particle tracing simulation called the Hermean Exosphere Model of Oxygen (HEMO). The HEMO simulations include 36 different particle species which can be released via meteoritic impact vaporization, based on the abundances determined by Berezhnoy & Klumov [2008]. After the initial simulation of the meteoroid impact, the released particles are affected by the gravitational pull of the planet Mercury, as well as the Sun's radiation. Particles may be photoionized or, in the case of molecules, also photodissociated. Due to the effects of photodissociation, a total of 38 species are actually present in the simulation, since 2 species are not directly released by impact vaporization but may be created by photodissociation. These simulations record various pieces of information about each simulated particle, including position and velocity, for each time-step of the model. This information is then utilized to construct density profiles for each simulation run, as well as for aggregates of simulation runs with similar input parameters. The results are intended to aid the interpretation of results from the MESSENGER and BepiColombo missions to Mercury, with a particular focus on atomic and molecular oxygen. References: Alexey A. Berezhnoy and Boris A. Klumov. Impacts as sources of the exosphere on Mercury. Icarus, 195(2): 511-522, 2008. Emmanuel B. Grotheer and Stefano A. Livi. Small meteoroids' major contribution to Mercury's exosphere. Icarus, 227(1): 1-7, 2014.
NASA Astrophysics Data System (ADS)
Cain, Clarence P.; Polhamus, Garrett D.; Roach, William P.; Stolarski, David J.; Schuster, Kurt J.; Stockton, Kevin; Rockwell, Benjamin A.; Chen, Bo; Welch, Ashley J.
2006-07-01
With the advent of such systems as the airborne laser and advanced tactical laser, high-energy lasers that use 1315-nm wavelengths in the near-infrared band will soon present a new laser safety challenge to armed forces and civilian populations. Experiments in nonhuman primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular, and retinal lesions as a function of pulse duration. American National Standards Institute (ANSI) laser safety standards have traditionally been based on experimental data, and there is scant data for this wavelength. We are reporting minimum visible lesion (MVL) threshold measurements using a porcine skin model for two different pulse durations and spot sizes for this wavelength. We also compare our measurements to results from our model based on the heat transfer equation and rate process equation, together with actual temperature measurements on the skin surface using a high-speed infrared camera. Our MVL-ED50 thresholds for long pulses (350 µs) at 24-h postexposure are measured to be 99 and 83 J/cm² for spot sizes of 0.7 and 1.3 mm diam, respectively. Q-switched laser pulses of 50 ns have a lower threshold of 11 J/cm² for a 5-mm-diam top-hat laser pulse.
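The rate process equation referred to above is typically an Arrhenius damage integral, with a threshold lesion conventionally associated with a damage parameter Ω near 1. A minimal discrete sketch follows; the frequency factor and activation energy are classic Henriques-type skin values used purely for illustration, not the study's fitted parameters:

```python
import math

GAS_CONSTANT = 8.314  # J/(mol*K)

def damage_integral(temps_k, dt_s, a_factor, e_a):
    """Discrete Arrhenius damage integral:
    Omega = sum over time of A * exp(-Ea / (R*T(t))) * dt.
    Omega >= 1 is the conventional threshold-lesion criterion."""
    return sum(a_factor * math.exp(-e_a / (GAS_CONSTANT * t)) * dt_s
               for t in temps_k)

# Henriques-type coefficients for skin (illustrative assumption)
A = 3.1e98    # 1/s
EA = 6.28e5   # J/mol

# 1 s exposure at two constant temperatures (10 steps of 0.1 s)
omega_hot = damage_integral([373.0] * 10, 0.1, A, EA)   # 100 C: damaging
omega_warm = damage_integral([317.0] * 10, 0.1, A, EA)  # 44 C: sub-threshold
```

In the study, the temperature history T(t) would come from the heat-transfer model (or the infrared camera measurements) rather than being held constant.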
Wang, Ching-Yun; Dieu Tapsoba, Jean De; Duggan, Catherine; Campbell, Kristin L; McTiernan, Anne
2016-05-10
In many biomedical studies, covariates of interest may be measured with error. Frequently, however, the quantiles of the exposure variable are used as covariates in the regression analysis. Because of measurement errors in the continuous exposure variable, there could be misclassification in the quantiles of the exposure variable. Misclassification in the quantiles could lead to biased estimation of the association between the exposure variable and the outcome variable. Adjustment for misclassification is challenging when gold standard variables are not available. In this paper, we develop two regression calibration estimators to reduce bias in effect estimation. The first estimator is normal likelihood-based. The second estimator is linearization-based, and it provides a simple and practical correction. Finite sample performance is examined via a simulation study. We apply the methods to a four-arm randomized clinical trial that tested exercise and weight loss interventions in women aged 50-75 years. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26593772
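The idea behind a linearization-style measurement-error correction can be sketched with the simplest classical-error case, where the naive slope is attenuated by the reliability ratio λ = σx²/(σx² + σu²) and dividing by λ undoes the bias. This is a textbook attenuation sketch with synthetic data, not the paper's quantile-misclassification estimator:

```python
import random

random.seed(1)

beta_true = 2.0
sigma_x, sigma_u = 1.0, 0.5   # true-exposure SD and measurement-error SD
n = 20000

x = [random.gauss(0.0, sigma_x) for _ in range(n)]          # true exposure
w = [xi + random.gauss(0.0, sigma_u) for xi in x]           # error-prone version
y = [beta_true * xi + random.gauss(0.0, 0.2) for xi in x]   # outcome

def ols_slope(u, v):
    """Simple-regression slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

beta_naive = ols_slope(w, y)                            # attenuated toward zero
reliability = sigma_x ** 2 / (sigma_x ** 2 + sigma_u ** 2)  # lambda = 0.8 here
beta_corrected = beta_naive / reliability               # regression calibration
```

In practice σu² must itself be estimated (e.g. from replicate measurements), which is part of what makes the correction challenging without gold-standard data.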
Kim, Kyoung Min; Jang, Hak Chul; Lim, Soo
2016-01-01
Aging processes are inevitably accompanied by structural and functional changes in vital organs. Skeletal muscle, which accounts for 40% of total body weight, deteriorates quantitatively and qualitatively with aging. Skeletal muscle is known to play diverse crucial physical and metabolic roles in humans. Sarcopenia is a condition characterized by significant loss of muscle mass and strength. It is related to subsequent frailty and instability in the elderly population. Because muscle tissue is involved in multiple functions, sarcopenia is closely related to various adverse health outcomes. Along with increasing recognition of the clinical importance of sarcopenia, several international study groups have recently released their consensus on the definition and diagnosis of sarcopenia. In practical terms, various skeletal muscle mass indices have been suggested for assessing sarcopenia: appendicular skeletal muscle mass adjusted for height squared, weight, or body mass index. A different prevalence and different clinical implications of sarcopenia are highlighted by each definition. The discordances among these indices have emerged as an issue in defining sarcopenia, and a unifying definition for sarcopenia has not yet been attained. This review aims to compare these three operational definitions and to introduce an optimal skeletal muscle mass index that reflects the clinical implications of sarcopenia from a metabolic perspective. PMID:27334763
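The three operational skeletal muscle mass indices compared in the review are straightforward to compute from appendicular skeletal muscle mass (ASM). The function name and example values below are illustrative; diagnostic cut-offs differ by consensus group and are not shown:

```python
def muscle_indices(asm_kg, height_m, weight_kg):
    """Three common appendicular skeletal muscle (ASM) indices:
    ASM adjusted for height squared, for weight, and for BMI."""
    bmi = weight_kg / height_m ** 2
    return {
        "ASM/height^2": asm_kg / height_m ** 2,  # kg/m^2 (EWGSOP-style)
        "ASM/weight": asm_kg / weight_kg,        # often reported as a percentage
        "ASM/BMI": asm_kg / bmi,                 # m^2-ish units (FNIH-style)
    }

# hypothetical subject: 20 kg ASM, 1.70 m, 70 kg
example = muscle_indices(asm_kg=20.0, height_m=1.70, weight_kg=70.0)
```

The review's point is visible here: the same person can rank differently across the three indices, since each adjusts the same ASM by a different body-size denominator.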
Turner, M.G.; Jennions, I.K.
1993-04-01
An explicit Navier-Stokes solver has been written with the option of using one of two types of turbulence model. One is the Baldwin-Lomax algebraic model and the other is an implicit k-ε model which has been coupled with the explicit Navier-Stokes solver in a novel way. This type of coupling, which uses two different solution methods, is unique and combines the overall robustness of the implicit k-ε solver with the simplicity of the explicit solver. The resulting code has been applied to the solution of the flow in a transonic fan rotor, which has been experimentally investigated by Wennerstrom. Five separate solutions, each identical except for the turbulence modeling details, have been obtained and compared with the experimental results. The five different turbulence models run were: the standard Baldwin-Lomax model both with and without wall functions, the Baldwin-Lomax model with modified constants and wall functions, a standard k-ε model, and an extended k-ε model, which accounts for multiple time scales by adding an extra term to the dissipation equation. In general, as the model includes more of the physics, the computed shock position becomes closer to the experimental results.
NASA Astrophysics Data System (ADS)
Akbari, Abolghasem
2015-10-01
The Natural Resources Conservation Service Curve Number (NRCS-CN) method is widely used for predicting direct runoff from rainfall. It employs hydrologic soil group and landuse information along with antecedent soil moisture conditions to derive the NRCS-CN. The method is well documented and available in popular rainfall-runoff models such as HEC-HMS, SWAT, SWMM and many more. The Sharpley-Williams and Hank methods were used to adjust the CN values provided in the standard TR-55 table. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) was used to derive a slope map with a spatial resolution of 30 m for the Kuantan River Basin (KRB). The two investigated methods stretch the conventional CN domain toward lower values. The study shows a successful application of remote sensing data and GIS tools in hydrological studies. The result of this work can be used for rainfall-runoff simulation and flood modeling in KRB.
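The core curve-number relation, and one published Sharpley-Williams-type slope adjustment of the tabulated CN, can be sketched as follows. The slope-adjustment coefficients are an assumption taken from the commonly cited form, not from the abstract:

```python
import math

def runoff_scs(p_mm, cn, lambda_ia=0.2):
    """Direct runoff depth Q (mm) from the NRCS curve-number relation:
    S = 25400/CN - 254 (mm), Ia = lambda_ia * S,
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = lambda_ia * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def cn_slope_adjusted(cn2, slope):
    """Sharpley-Williams-type slope adjustment of the tabulated
    average-condition CN2 (slope as a fraction, e.g. 0.10 for 10%):
    CN3 = CN2 * exp(0.00673*(100 - CN2)),
    CN2s = (CN3 - CN2)/3 * (1 - 2*exp(-13.86*slope)) + CN2."""
    cn3 = cn2 * math.exp(0.00673 * (100.0 - cn2))
    return (cn3 - cn2) / 3.0 * (1.0 - 2.0 * math.exp(-13.86 * slope)) + cn2

q = runoff_scs(100.0, 75)           # 100 mm storm on CN 75
cn_steep = cn_slope_adjusted(75.0, 0.10)
```

Fed with the GDEM-derived slope map, the adjustment would be applied cell by cell before running the runoff equation.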
ADJUSTABLE DOUBLE PULSE GENERATOR
Gratian, J.W.; Gratian, A.C.
1961-08-01
A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross coupled multivibrator having adjustable time constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)
Adjustable sutures in children.
Engel, J Mark; Guyton, David L; Hunter, David G
2014-06-01
Although adjustable sutures are considered a standard technique in adult strabismus surgery, most surgeons are hesitant to attempt the technique in children, who are believed to be unlikely to cooperate for postoperative assessment and adjustment. Interest in using adjustable sutures in pediatric patients has increased with the development of surgical techniques specific to infants and children. This workshop briefly reviews the literature supporting the use of adjustable sutures in children and presents the approaches currently used by three experienced strabismus surgeons. PMID:24924284
Svensson, Elin M.; Aweeka, Francesca; Park, Jeong-Gun; Marzan, Florence; Karlsson, Mats O.
2013-01-01
Safe, effective concomitant treatment regimens for tuberculosis (TB) and HIV infection are urgently needed. Bedaquiline (BDQ) is a promising new anti-TB drug, and efavirenz (EFV) is a commonly used antiretroviral. Due to EFV's induction of cytochrome P450 3A4, the metabolic enzyme responsible for BDQ biotransformation, the drugs are expected to interact. Based on data from a phase I, single-dose pharmacokinetic study, a nonlinear mixed-effects model characterizing BDQ pharmacokinetics and interaction with multiple-dose EFV was developed. BDQ pharmacokinetics were best described by a 3-compartment disposition model with absorption through a dynamic transit compartment model. Metabolites M2 and M3 were described by 2-compartment models with clearance of BDQ and M2, respectively, as input. The impact of induction was described as an instantaneous change in clearance 1 week after initiation of EFV treatment and estimated for all compounds. The model predicts average steady-state concentrations of BDQ and M2 to be reduced by 52% (relative standard error [RSE], 3.7%) with chronic coadministration. A range of models with alternative structural assumptions regarding onset of induction effect and fraction metabolized resulted in similar estimates of the typical reduction and did not offer a markedly better fit to the data. Simulations to investigate alternative regimens mitigating the estimated interaction effect were performed. The results suggest that simple adjustments of the standard regimen during EFV coadministration can prevent reduced exposure to BDQ without increasing exposures to M2. However, exposure to M3 would increase. Evaluation of adjusted regimens in clinical trials is necessary to ensure appropriate dosing for HIV-infected TB patients on an EFV-based regimen. PMID:23571542
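The reported 52% reduction in average steady-state concentration follows directly from the relation C_avg = F·Dose/(CL·τ) once clearance is induced. A minimal sketch, in which the baseline clearance, dose and dosing interval are illustrative placeholders rather than the study's estimates:

```python
# Sketch of why a ~52% drop in average steady-state concentration follows
# from induced clearance, and what dose scaling would restore exposure.
# Only the 52% figure comes from the abstract; all other numbers are invented.

def c_avg_ss(dose_mg, tau_h, cl_L_h, F=1.0):
    """Average steady-state concentration: C_avg = F * Dose / (CL * tau)."""
    return F * dose_mg / (cl_L_h * tau_h)

cl_baseline = 2.8                      # L/h, illustrative clearance
cl_induced = cl_baseline / (1 - 0.52)  # clearance implied by a 52% exposure drop

c0 = c_avg_ss(200.0, 24.0, cl_baseline)
c1 = c_avg_ss(200.0, 24.0, cl_induced)
print(round(1 - c1 / c0, 2))           # fractional reduction in exposure

# Restoring exposure requires scaling the dose by the clearance ratio:
adjusted_dose = 200.0 * cl_induced / cl_baseline
```

The caveat in the abstract is visible here too: scaling the parent dose restores BDQ exposure but proportionally raises the input to downstream metabolites such as M3.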
Subsea adjustable choke valves
Cyvas, M.K.
1989-08-01
With emphasis on deepwater wells and marginal offshore fields growing, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remotely operated vehicle (ROV) interfaces. These five facets are reviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. Major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.
Agogo, George O; van der Voet, Hilko; Van't Veer, Pieter; van Eeuwijk, Fred A; Boshuizen, Hendriek C
2016-07-01
Dietary questionnaires are prone to measurement error, which biases the perceived association between dietary intake and risk of disease. Short-term measurements are required to adjust for the bias in the association. For foods that are not consumed daily, the short-term measurements are often characterized by excess zeroes. Via a simulation study, the performance of a two-part calibration model that was developed for a single-replicate study design was assessed by mimicking leafy vegetable intake reports from the multicenter European Prospective Investigation into Cancer and Nutrition (EPIC) study. In part I of the fitted two-part calibration model, a logistic distribution was assumed; in part II, a gamma distribution was assumed. The model was assessed with respect to the magnitude of the correlation between the consumption probability and the consumed amount (hereafter, cross-part correlation), the number and form of covariates in the calibration model, the percentage of zero response values, and the magnitude of the measurement error in the dietary intake. From the simulation study results, transforming the dietary variable in the regression calibration to an appropriate scale was found to be the most important factor for the model performance. Reducing the number of covariates in the model could be beneficial, but was not critical in large-sample studies. The performance was remarkably robust when fitting a one-part rather than a two-part model. The model performance was minimally affected by the cross-part correlation. PMID:27003183
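The data-generating process such a simulation study mimics can be sketched as a two-part model: a logistic part for whether a food is consumed on a given day, and a gamma part for the amount when it is. All parameter values below are illustrative, not EPIC estimates:

```python
# Sketch of simulating intake data with excess zeroes for a two-part model:
# part I, logistic probability of consumption; part II, gamma-distributed
# positive amounts. Every parameter value here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
covariate = rng.normal(size=n)                 # a standardized covariate

# Part I: probability of any consumption on a given day
p_consume = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * covariate)))
consumed = rng.random(n) < p_consume           # the excess zeroes arise here

# Part II: positive amounts follow a gamma distribution
shape, scale = 2.0, 50.0                       # mean amount = shape * scale
amount = np.where(consumed, rng.gamma(shape, scale, size=n), 0.0)

print(round(float((amount == 0).mean()), 2))   # fraction of zero reports
```

Fitting recovers the process in reverse: a logistic regression on the zero/non-zero indicator and a gamma regression on the positive amounts, optionally linked through a cross-part correlation.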
Halvorsen, A.M.K.; Santvedt, T.
1999-11-01
A corrosion rate model is developed for carbon steel in water containing CO2 at different temperatures, pHs, CO2 fugacities and wall shear stresses. The model is based on loop experiments at temperatures from 20 to 160°C. The data are taken from a database containing more than 2,400 data points at various temperatures, CO2 fugacities, pHs and wall shear stresses. To find the best fit, the data for each temperature present in the database were evaluated separately to identify typical trends in corrosion rate versus CO2 fugacity, wall shear stress and pH. To facilitate use of the corrosion model, a simplified method for calculating wall shear stress in multiphase flow is included. This method includes a viscosity model for dispersions and is developed for oil-wet and water-wet flow. Criteria for the maximum production rate to avoid mesa attack in straight sections and behind welds are also included.
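One commonly cited functional form for CO2 corrosion fits of this kind (the form standardized in NORSOK M-506) expresses the rate as a power law in CO2 fugacity and wall shear stress, with temperature- and pH-dependent factors. The sketch below assumes that form; the temperature constant Kt and the pH factor are illustrative placeholders, not values from the paper:

```python
# Hedged sketch of a NORSOK-M-506-style corrosion rate correlation:
#   CR = Kt * fCO2^0.62 * (S/19)^(0.146 + 0.0324*log10 fCO2) * f(pH)
# Kt and f(pH) are temperature-dependent fitted factors; the values used
# below are illustrative stand-ins, not the paper's fitted constants.
import math

def corrosion_rate_mm_y(kt, f_co2_bar, shear_pa, f_ph):
    """Corrosion rate (mm/y) as a power law in CO2 fugacity and shear stress."""
    exponent = 0.146 + 0.0324 * math.log10(f_co2_bar)
    return kt * f_co2_bar ** 0.62 * (shear_pa / 19.0) ** exponent * f_ph

# Illustrative evaluation: shear of 19 Pa makes the shear term unity
cr = corrosion_rate_mm_y(kt=4.76, f_co2_bar=0.5, shear_pa=19.0, f_ph=1.0)
print(round(cr, 2))
```

The structure makes the fitted trends explicit: the rate rises sublinearly with CO2 fugacity (exponent 0.62) and weakly with wall shear stress, while temperature and pH act through the multiplicative factors.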
Valcke, M.; Nong, A.; Krishnan, K.
2012-01-01
The objective of this study was to evaluate the impact of whole- and sub-population-related variabilities on the determination of the human kinetic adjustment factor (HKAF) used in risk assessment of inhaled volatile organic chemicals (VOCs). Monte Carlo simulations were applied to a steady-state algorithm to generate population distributions for blood concentrations (CAss) and rates of metabolism (RAMs) for inhalation exposures to benzene (BZ) and 1,4-dioxane (1,4-D). The simulated population consisted of various proportions of adults, elderly, children, neonates and pregnant women as per the Canadian demography. Subgroup-specific input parameters were obtained from the literature and P3M software. Under the “whole population” approach, the HKAF was computed as the ratio of the entire population's upper percentile value (99th, 95th) of dose metrics to the median value in either the entire population or the adult population. Under the “distinct subpopulation” approach, the upper percentile values in each subpopulation were considered, and the greatest resulting HKAF was retained. CAss-based HKAFs that considered the Canadian demography varied between 1.2 (BZ) and 2.8 (1,4-D). The “distinct subpopulation” CAss-based HKAF varied between 1.6 (BZ) and 8.5 (1,4-D). RAM-based HKAFs always remained below 1.6. Overall, this study evaluated for the first time the impact of underlying assumptions with respect to the interindividual variability considered (whole population or each subpopulation taken separately) when determining the HKAF. PMID:22523487
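The two HKAF definitions compared above reduce to percentile ratios over Monte Carlo dose-metric distributions. A sketch with invented lognormal subgroup distributions standing in for the study's physiologically based simulations:

```python
# Sketch of the two HKAF computations: "whole population" (upper percentile of
# the pooled distribution over a median) versus "distinct subpopulation"
# (worst subgroup upper percentile). Distribution parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
adults = rng.lognormal(mean=0.0, sigma=0.3, size=200_000)    # CAss, adults
neonates = rng.lognormal(mean=0.4, sigma=0.3, size=20_000)   # a sensitive subgroup
whole_pop = np.concatenate([adults, neonates])

# "Whole population" approach: pooled 95th percentile over the adult median
hkaf_whole = np.percentile(whole_pop, 95) / np.median(adults)

# "Distinct subpopulation" approach: worst subgroup 95th percentile
hkaf_sub = max(np.percentile(g, 95) for g in (adults, neonates)) / np.median(adults)

print(hkaf_whole <= hkaf_sub)   # the subgroup approach is at least as conservative
```

The inequality holds in general: the pooled 95th percentile can never exceed the largest subgroup 95th percentile, which is why the "distinct subpopulation" HKAFs reported above are the larger ones.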
ERIC Educational Resources Information Center
Wilcox, Rand R.
1989-01-01
Two methods of handling unequal variances in the two-way fixed effects analysis of variance (ANOVA) model are described. One is based on an improved Wilcox (1988) method for the one-way model, and the other is an extension of G. S. James' (1951) second order method. (TJH)
ERIC Educational Resources Information Center
McKinney, Cliff; Renk, Kimberly
2008-01-01
Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Nyst, M.; Apel, E. V.; Muir-Wood, R.
2014-12-01
The recent Canterbury earthquake sequence (CES) renewed public and academic awareness concerning the clustered nature of seismicity. Multiple event occurrence in short time and space intervals is reminiscent of aftershock sequences, but aftershock is a statistical definition, not a label one can give an earthquake in real time. Aftershocks are defined collectively as what creates the Omori event-rate decay after a large event, or as what is taken away as "dependent events" by a declustering method. It is noteworthy that, depending on the declustering method used on the Canterbury earthquake sequence, the number of independent events varies considerably. This lack of an unambiguous definition of aftershocks leads to the need to investigate the amount of clustering inherent in "declustered" risk models. This is the task we concentrate on in this contribution. We start from a background source model for the Canterbury region, in which 1) centroids of events of given magnitude are distributed using a Latin hypercube lattice, 2) following the range of preferential orientations determined from stress maps and focal mechanisms, 3) with length determined using the local scaling relationship, and 4) rates from a and b values derived from the declustered pre-2010 catalog. We then proceed to create tens of thousands of realizations of 6- to 20-year periods, and we define criteria to identify which successions of events in the region would be perceived as a sequence. Note that the expected spatial clustering is a lower bound compared to a fully uniform distribution of events. Then we perform the same exercise with rates and b-values determined from the catalog including the CES. If the pre-2010 catalog were long (or rich) enough, then the computed "stationary" rates calculated from it would include the CES declustered events (by construction, regardless of the physical meaning of or relationship between those events). In regions of low seismicity rate (e.g., Canterbury before
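The centroid placement and magnitude sampling in such a background model can be sketched as follows. The region size, b-value and magnitude cutoff are illustrative, and the Latin hypercube is implemented directly with permutations:

```python
# Sketch of two ingredients of a background-source simulation: epicentres on a
# Latin hypercube lattice, and magnitudes drawn from a Gutenberg-Richter law.
# Region size, b-value and Mmin are illustrative, not the Canterbury values.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Latin hypercube in 2-D: exactly one stratified sample per row/column stratum
u = (rng.permutation(n) + rng.random(n)) / n   # x in (0, 1), stratified
v = (rng.permutation(n) + rng.random(n)) / n   # y in (0, 1), stratified
x_km, y_km = 100.0 * u, 100.0 * v              # map to a 100 x 100 km region

# Gutenberg-Richter magnitudes above Mmin via inverse-CDF sampling:
# N(>=M) ~ 10^(a - b*M)  =>  M = Mmin - log10(1 - U) / b
b_value, m_min = 1.0, 4.0
mags = m_min - np.log10(1.0 - rng.random(n)) / b_value

print(mags.min() >= m_min)
```

Repeating such draws over many 6- to 20-year windows, and applying the sequence-perception criteria, yields the "declustered but still clustered-looking" realizations the study examines.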
Oluwole, Akinola S.; Ekpo, Uwem F.; Karagiannis-Voules, Dimitrios-Alexios; Abe, Eniola M.; Olamiju, Francisca O.; Isiyaku, Sunday; Okoronkwo, Chukwu; Saka, Yisa; Nebe, Obiageli J.; Braide, Eka I.; Mafiana, Chiedu F.; Utzinger, Jürg; Vounatsou, Penelope
2015-01-01
Background The acceleration of the control of soil-transmitted helminth (STH) infections in Nigeria, emphasizing preventive chemotherapy, has become imperative in light of the global fight against neglected tropical diseases. Predictive risk maps are an important tool to guide and support control activities. Methodology STH infection prevalence data were obtained from surveys carried out in 2011 using standard protocols. Data were geo-referenced and collated in a nationwide, geographic information system database. Bayesian geostatistical models with remotely sensed environmental covariates and variable selection procedures were utilized to predict the spatial distribution of STH infections in Nigeria. Principal Findings We found that hookworm, Ascaris lumbricoides, and Trichuris trichiura infections are endemic in 482 (86.8%), 305 (55.0%), and 55 (9.9%) locations, respectively. Hookworm and A. lumbricoides infection co-exist in 16 states, while the three species are co-endemic in 12 states. Overall, STHs are endemic in 20 of the 36 states of Nigeria, including the Federal Capital Territory of Abuja. The observed prevalence at endemic locations ranged from 1.7% to 51.7% for hookworm, from 1.6% to 77.8% for A. lumbricoides, and from 1.0% to 25.5% for T. trichiura. Model-based predictions ranged from 0.7% to 51.0% for hookworm, from 0.1% to 82.6% for A. lumbricoides, and from 0.0% to 18.5% for T. trichiura. Our models suggest that day land surface temperature and dense vegetation are important predictors of the spatial distribution of STH infection in Nigeria. In 2011, a total of 5.7 million (13.8%) school-aged children were predicted to be infected with STHs in Nigeria. Tablet needs for mass treatment at the local government area level, for annual or bi-annual treatment of the school-aged population in Nigeria in 2011 based on World Health Organization prevalence thresholds, were estimated at 10.2 million. Conclusions/Significance The predictive risk maps and estimated
NASA Astrophysics Data System (ADS)
Javaheri, Amir; Babbar-Sebens, Meghna; Miller, Robert N.
2016-06-01
Data Assimilation (DA) has been proposed for multiple water resources studies that require rapid employment of incoming observations to update and improve accuracy of operational prediction models. The usefulness of DA approaches in assimilating water temperature observations from different types of monitoring technologies (e.g., remote sensing and in-situ sensors) into numerical models of in-land water bodies (e.g., lakes and reservoirs) has, however, received limited attention. In contrast to in-situ temperature sensors, remote sensing technologies (e.g., satellites) provide the benefit of collecting measurements with better X-Y spatial coverage. However, assimilating water temperature measurements from satellites can introduce biases in the updated numerical model of water bodies because the physical region represented by these measurements does not directly correspond with the numerical model's representation of the water column. This study proposes a novel approach to address this representation challenge by coupling a skin temperature adjustment technique, based on available air and in-situ water temperature observations, with an ensemble Kalman filter based data assimilation technique. Additionally, the proposed approach used in this study for four-dimensional analysis of a reservoir provides reasonably accurate surface layer and water column temperature forecasts, in spite of the use of a fairly small ensemble. Application of the methodology on a test site - Eagle Creek Reservoir - in Central Indiana demonstrated that assimilation of remotely sensed skin temperature data using the proposed approach improved the overall root mean square difference between modeled surface layer temperatures and the adjusted remotely sensed skin temperature observations from 5.6°C to 0.51°C (i.e., 91% improvement). In addition, the overall error in the water column temperature predictions when compared with in-situ observations also decreased from 1.95°C (before assimilation
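The analysis step of a stochastic ensemble Kalman filter, the kind of update used to assimilate adjusted skin temperatures, can be sketched on a toy three-layer water column. All numbers are illustrative; this is not the study's reservoir model configuration:

```python
# Minimal stochastic EnKF analysis step on a toy 3-layer temperature profile.
# The observation is a (hypothetical) adjusted skin temperature of the surface
# layer; ensemble size, spreads and values are all illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_ens, n_state = 50, 3

# Forecast ensemble: prior means 20, 19, 18 degC with some spread
X = np.array([20.0, 19.0, 18.0])[:, None] + rng.normal(0, 0.8, (n_state, n_ens))

H = np.array([[1.0, 0.0, 0.0]])             # observe the surface layer only
y_obs, r_var = np.array([21.0]), 0.25       # adjusted skin temp, obs variance

# Kalman gain from the ensemble covariance: K = P H^T (H P H^T + R)^-1
P = np.cov(X)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + np.array([[r_var]]))

# Perturbed-observation update of each ensemble member
Y = y_obs[:, None] + rng.normal(0, np.sqrt(r_var), (1, n_ens))
X_a = X + K @ (Y - H @ X)

print(abs(X_a[0].mean() - y_obs[0]) < abs(X[0].mean() - y_obs[0]))
```

Because the gain uses the full ensemble covariance, the single surface observation also nudges the unobserved deeper layers, which is how surface measurements improve water-column forecasts.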
NASA Astrophysics Data System (ADS)
Klein, Christian; Biernath, Christian; Thieme, Christoph; Heinlein, Florian; Priesack, Eckart
2014-05-01
Recent studies show that uncertainties in regional and global weather and climate simulations are partly caused by inadequate descriptions of the soil-plant-atmosphere system. Particularly relevant for improving regional weather forecasts are models that better describe the feedback fluxes between the land surface and the atmosphere, which influence surface temperature, surface air pressure, and the amount and frequency of precipitation events. The aim of this study was to examine the differences between weather simulations by the Weather Research and Forecasting (WRF) model using either the frequently applied land surface model NOAH or Expert-N. Whereas the standard NOAH model distinguishes vegetation-class-specific, monthly varying soil cover values (leaf area index) and fixed soil characteristics, Expert-N is an ecosystem model that allows more mechanistic soil and plant sub-models, including the management of soil and vegetation and the effects of water and nutrient availability on plant growth. The WRF-NOAH model was applied with a default land surface configuration typical for the simulation domain of Bavaria, Germany. Expert-N was configured with the Hurley Pasture Model to simulate plant growth and calibrated using vegetation, management and soil data from one grassland site. Both models were applied to the simulation domain. The simulated energy fluxes between the land surface and the atmosphere were compared with each other and with data from about 100 weather stations in Bavaria using statistical methods. The influence of different harvest scenarios on the energy fluxes is discussed. The simulations show the high impact of vegetation management on the energy fluxes, which caused significant differences in weather characteristics such as simulated surface temperatures and precipitation events on the regional scale. Therefore, we conclude that weather forecast
NASA Astrophysics Data System (ADS)
McKean, John R.; Johnson, Donn; Taylor, R. Garth
2003-04-01
An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.
Moisan, Emmanuel; Charbonnier, Pierre; Foucher, Philippe; Grussenmeyer, Pierre; Guillemin, Samuel; Koehl, Mathieu
2015-01-01
In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology. PMID:26690444
NASA Astrophysics Data System (ADS)
Simon, K. M.; James, T. S.; Henton, J. A.; Dyke, A. S.
2016-03-01
The thickness and equivalent global sea-level contribution of an improved model of the central and northern Laurentide Ice Sheet is constrained by 24 relative sea-level histories and 18 present-day GPS-measured vertical land motion rates. The final model, termed Laur16, is derived from the ICE-5G model by holding the timing history constant and iteratively adjusting the thickness history, in four regions of northern Canada. In the final model, the last glacial maximum (LGM) thickness of the Laurentide Ice Sheet west of Hudson Bay was ˜3.4-3.6 km. Conversely, east of Hudson Bay, peak ice thicknesses reached ˜4 km. The ice model thicknesses inferred for these two regions represent, respectively, a ˜30% decrease and an average ˜20-25% increase to the load thickness relative to the ICE-5G reconstruction, which is generally consistent with other recent studies that have focussed on Laurentide Ice Sheet history. The final model also features peak ice thicknesses of 1.2-1.3 km in the Baffin Island region, a modest reduction relative to ICE-5G, and unchanged thicknesses for a region in the central Canadian Arctic Archipelago west of Baffin Island. Vertical land motion predictions of the final model fit observed crustal uplift rates well, after an adjustment is made for the elastic crustal response to present-day ice mass changes of regional ice cover. The new Laur16 model provides more than a factor of two improvement of the fit to the RSL data (χ2 measure of misfit) and a factor of nine improvement to the fit of the GPS data (mean squared error measure of fit), compared to the ICE-5G starting model. Laur16 also fits the regional RSL data better by a factor of two and gives a slightly better fit to GPS uplift rates than the recent ICE-6G model. The volume history of the Laur16 reconstruction corresponds to an up to 8 m reduction in global sea-level equivalent compared to ICE-5G at LGM.
NASA Astrophysics Data System (ADS)
Simon, K. M.; James, T. S.; Henton, J. A.; Dyke, A. S.
2016-06-01
The thickness and equivalent global sea level contribution of an improved model of the central and northern Laurentide Ice Sheet is constrained by 24 relative sea level histories and 18 present-day GPS-measured vertical land motion rates. The final model, termed Laur16, is derived from the ICE-5G model by holding the timing history constant and iteratively adjusting the thickness history, in four regions of northern Canada. In the final model, the last glacial maximum (LGM) thickness of the Laurentide Ice Sheet west of Hudson Bay was ˜3.4-3.6 km. Conversely, east of Hudson Bay, peak ice thicknesses reached ˜4 km. The ice model thicknesses inferred for these two regions represent, respectively, a ˜30 per cent decrease and an average ˜20-25 per cent increase to the load thickness relative to the ICE-5G reconstruction, which is generally consistent with other recent studies that have focussed on Laurentide Ice Sheet history. The final model also features peak ice thicknesses of 1.2-1.3 km in the Baffin Island region, a modest reduction relative to ICE-5G and unchanged thicknesses for a region in the central Canadian Arctic Archipelago west of Baffin Island. Vertical land motion predictions of the final model fit observed crustal uplift rates well, after an adjustment is made for the elastic crustal response to present-day ice mass changes of regional ice cover. The new Laur16 model provides more than a factor of two improvement of the fit to the RSL data (χ2 measure of misfit) and a factor of nine improvement to the fit of the GPS data (mean squared error measure of fit), compared to the ICE-5G starting model. Laur16 also fits the regional RSL data better by a factor of two and gives a slightly better fit to GPS uplift rates than the recent ICE-6G model. The volume history of the Laur16 reconstruction corresponds to an up to 8 m reduction in global sea level equivalent compared to ICE-5G at LGM.
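The two misfit measures quoted (a χ² misfit for the relative sea-level histories, a mean squared error for the GPS uplift rates) are straightforward to compute; a sketch with invented observation and prediction values:

```python
# Sketch of the two fit measures used to score the ice-sheet reconstructions:
# chi-square misfit for uncertainty-weighted RSL data, and mean squared error
# for GPS uplift rates. All values below are illustrative, not the study's.
import numpy as np

rsl_obs = np.array([12.0, 8.5, 5.1])     # m, observed relative sea level
rsl_pred = np.array([11.2, 9.0, 4.7])    # m, model prediction
rsl_sigma = np.array([1.0, 0.8, 0.5])    # m, observational uncertainty

chi2 = float(np.sum(((rsl_obs - rsl_pred) / rsl_sigma) ** 2))

gps_obs = np.array([9.1, 4.2, -1.0])     # mm/yr, observed uplift
gps_pred = np.array([8.5, 4.6, -0.4])    # mm/yr, model prediction
mse = float(np.mean((gps_obs - gps_pred) ** 2))

print(round(chi2, 2), round(mse, 2))
```

Iteratively adjusting the regional load thicknesses to shrink these two scores, while holding the deglaciation timing fixed, is the essence of the tuning procedure described above.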
NASA Astrophysics Data System (ADS)
Yin, J.; Cumberland, S. A.; Harrison, R. M.; Allan, J.; Young, D. E.; Williams, P. I.; Coe, H.
2014-09-01
PM2.5 was collected during a winter campaign at two southern England sites, urban background North Kensington (NK) and rural Harwell (HAR), in January-February 2012. Multiple organic and inorganic source tracers were analysed and used in a Chemical Mass Balance (CMB) model, which apportioned seven separate primary sources, that explained on average 53% (NK) and 56% (HAR) of the organic carbon (OC), including traffic, woodsmoke, food cooking, coal combustion, vegetative detritus, natural gas and dust/soil. With the addition of source tracers for secondary biogenic aerosol at the NK site, 79% of organic carbon was accounted for. Secondary biogenic sources were represented by oxidation products of α-pinene and isoprene, but only the former made a substantial contribution to OC. Particle source contribution estimates for PM2.5 mass were obtained by the conversion of the OC estimates and combining with inorganic components ammonium nitrate, ammonium sulphate and sea salt. Good mass closure was achieved with 81% (92% with the addition of the secondary biogenic source) and 83% of the PM2.5 mass explained at NK and HAR respectively, with the remainder being secondary organic matter. While the most important sources of OC are vehicle exhaust (21 and 16%) and woodsmoke (15% and 28%) at NK and HAR respectively, food cooking emissions are also significant, particularly at the urban NK site (11% of OC), in addition to the secondary biogenic source, only measured at NK, which represented about 26%. In comparison, the major source components for PM2.5 at NK and HAR are inorganic ammonium salts (51 and 56%), vehicle exhaust emissions (8 and 6%), secondary biogenic (10% measured at NK only), woodsmoke (4 and 7%) and sea salt (7 and 8%), whereas food cooking (4% and 1%) showed relatively smaller contributions to PM2.5. Results from the CMB model were compared with source contribution estimates derived from the AMS-PMF method. The overall mass of organic matter accounted for is
NASA Astrophysics Data System (ADS)
Yin, J.; Cumberland, S. A.; Harrison, R. M.; Allan, J.; Young, D. E.; Williams, P. I.; Coe, H.
2015-02-01
PM2.5 was collected during a winter campaign at two southern England sites, urban background North Kensington (NK) and rural Harwell (HAR), in January-February 2012. Multiple organic and inorganic source tracers were analysed and used in a Chemical Mass Balance (CMB) model, which apportioned seven separate primary sources, that explained on average 53% (NK) and 56% (HAR) of the organic carbon (OC), including traffic, woodsmoke, food cooking, coal combustion, vegetative detritus, natural gas and dust/soil. With the addition of source tracers for secondary biogenic aerosol at the NK site, 79% of organic carbon was accounted for. Secondary biogenic sources were represented by oxidation products of α-pinene and isoprene, but only the former made a substantial contribution to OC. Particle source contribution estimates for PM2.5 mass were obtained by the conversion of the OC estimates and combining with inorganic components ammonium nitrate, ammonium sulfate and sea salt. Good mass closure was achieved with 81% (92% with the addition of the secondary biogenic source) and 83% of the PM2.5 mass explained at NK and HAR respectively, with the remainder being secondary organic matter. While the most important sources of OC are vehicle exhaust (21 and 16%) and woodsmoke (15 and 28%) at NK and HAR respectively, food cooking emissions are also significant, particularly at the urban NK site (11% of OC), in addition to the secondary biogenic source, only measured at NK, which represented about 26%. In comparison, the major source components for PM2.5 at NK and HAR are inorganic ammonium salts (51 and 56%), vehicle exhaust emissions (8 and 6%), secondary biogenic (10% measured at NK only), woodsmoke (4 and 7%) and sea salt (7 and 8%), whereas food cooking (4 and 1%) showed relatively smaller contributions to PM2.5. Results from the CMB model were compared with source contribution estimates derived from the AMS-PMF method. The overall mass of organic matter accounted for is rather
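The core CMB step is linear: measured tracer concentrations are modelled as a mixture of fixed source profiles, and the source contributions are recovered by least squares (practical CMB codes use effective-variance weighting, omitted here). The profiles and ambient values below are invented for illustration:

```python
# Sketch of a chemical mass balance (CMB) inversion: ambient tracer levels are
# a linear combination of source profiles times source contributions, solved
# by least squares. Profiles and concentrations are invented, not the study's.
import numpy as np

# Rows: tracer species; columns: sources (e.g. traffic, woodsmoke, cooking)
profiles = np.array([
    [0.60, 0.05, 0.10],   # fraction of each source's mass in tracer 1
    [0.02, 0.70, 0.05],   # tracer 2, e.g. a wood-burning marker
    [0.05, 0.10, 0.55],   # tracer 3, e.g. a cooking-related marker
    [0.20, 0.10, 0.15],   # tracer 4, generic organic carbon fraction
])

true_contrib = np.array([3.0, 2.0, 1.0])   # ug/m3 from each source
ambient = profiles @ true_contrib          # synthetic "measured" tracers

est, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
print(np.allclose(est, true_contrib))
```

With real data the system is overdetermined and noisy, so the residual (the unexplained mass) is interpreted, as in the study above, as secondary organic matter not covered by the primary profiles.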
NASA Astrophysics Data System (ADS)
Pankoke, S.; Buck, B.; Woelfel, H. P.
1998-08-01
Long-term whole-body vibration can cause degeneration of the lumbar spine. Therefore existing degeneration has to be assessed, as well as industrial working places, to prevent further damage. Hence, the mechanical stress in the lumbar spine, especially in the three lower vertebrae, has to be known. This stress can be expressed as internal forces. These internal forces cannot be evaluated experimentally, because force transducers cannot be implanted in the force lines for ethical reasons. Thus it is necessary to calculate the internal forces with a dynamic mathematical model of sitting man. A two-dimensional dynamic finite element model of sitting man is presented which allows calculation of these unknown internal forces. The model is based on an anatomic representation of the lower lumbar spine (L3-L5). This lumbar spine model is incorporated into a dynamic model of the upper torso with neck, head and arms, as well as a model of the body caudal to the lumbar spine with pelvis and legs. Additionally, a simple dynamic representation of the viscera is used. All these parts are modelled as rigid bodies connected by linear stiffnesses. Energy dissipation is modelled by assigning modal damping ratios to the calculated undamped eigenvalues. Geometry and inertial properties of the model are determined according to human anatomy. Stiffnesses of the spine model are derived from static in-vitro experiments in references [1] and [2]. Remaining stiffness parameters and parameters for energy dissipation are determined by parameter identification against measurements in reference [3]. The model, which is available in three different postures, allows one to adjust its parameters for body height and body mass to the values of the person for whom internal forces are to be calculated.
NASA Astrophysics Data System (ADS)
Sjoeholm, K. R.
1981-02-01
The dual approach to the theory of production is used to estimate factor demand functions for Swedish manufacturing industry. Two approximations of the cost function, the translog and the generalized Leontief models, are used. The price elasticities of factor demand do not seem to depend on the choice of model, at least with regard to the sign pattern and the inputs capital, labor, total energy and other materials. Total energy is separated into solid fuels, gasoline, fuel oil, electricity and a residual. Fuel oil and electricity are found to be substitutes by both models. Capital and energy are shown to be substitutes, which implies that Swedish industry will save more energy if the capital cost can be reduced. Both models are, in their best versions, able to detect an inappropriate variable. The assumption of perfect competition on the product market is shown to be inadequate by both models. When this assumption is relaxed, the normal substitution pattern among the inputs is restored.
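For a translog cost function, price elasticities are read off the fitted share equations: with cost shares s_i and second-order coefficients gamma_ij, the own-price elasticity is (gamma_ii + s_i^2 - s_i)/s_i and the cross-price elasticity is (gamma_ij + s_i*s_j)/s_i. A sketch with illustrative numbers, not the study's estimates:

```python
# Sketch of reading price elasticities off an estimated translog cost
# function via cost shares and fitted gamma coefficients. The numeric
# values below are invented for illustration.

def own_price_elasticity(gamma_ii, s_i):
    """Own-price elasticity of input demand for a translog cost function."""
    return (gamma_ii + s_i * s_i - s_i) / s_i

def cross_price_elasticity(gamma_ij, s_i, s_j):
    """Cross-price elasticity of demand for input i w.r.t. the price of j."""
    return (gamma_ij + s_i * s_j) / s_i

# Hypothetical capital (share 0.3) and energy (share 0.1) example: a positive
# cross elasticity indicates substitutes, the pattern reported above.
e_kk = own_price_elasticity(-0.05, 0.3)
e_ke = cross_price_elasticity(0.02, 0.3, 0.1)
print(round(e_kk, 3), round(e_ke, 3))
```

The sign pattern of these elasticities (negative on the diagonal, positive for substitutes) is exactly what the abstract reports as robust across the translog and generalized Leontief specifications.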
Parental Attachment, Interparental Conflict, and Young Adults' Emotional Adjustment
ERIC Educational Resources Information Center
Ross, Jennifer; Fuertes, Jairo
2010-01-01
This study extends Engels et al.'s model of emotional adjustment to young adults and includes the constructs of interparental conflict and conflict resolution. Results indicate that parental attachment is better conceived as a two-factor construct of mother and father attachment and that although attachment to both mothers and fathers directly…
NASA Astrophysics Data System (ADS)
Del Ben, Mauro; Hutter, Jürg; VandeVondele, Joost
2015-08-01
Water is a ubiquitous liquid that displays a wide range of anomalous properties and has a delicate structure that challenges experiment and simulation alike. The various intermolecular interactions that play an important role, such as repulsion, polarization, hydrogen bonding, and van der Waals interactions, are often difficult to reproduce faithfully in atomistic models. Here, electronic structure theories that include all these interactions on an equal footing, which requires the inclusion of non-local electron correlation, are used to describe the structure and dynamics of bulk liquid water. Isobaric-isothermal (NpT) ensemble simulations based on the Random Phase Approximation (RPA) yield excellent density (0.994 g/ml) and fair radial distribution functions, while various other density functional approximations produce scattered results (0.8-1.2 g/ml). Molecular dynamics simulation in the microcanonical (NVE) ensemble based on Møller-Plesset perturbation theory (MP2) yields dynamical properties in the condensed phase, namely the infrared spectrum and diffusion constant. At the MP2 and RPA levels of theory, ice is correctly predicted to float on water, resolving one of the anomalies as resulting from a delicate balance between van der Waals and hydrogen bonding interactions. For several properties, obtaining quantitative agreement with experiment requires correction for nuclear quantum effects (NQEs), highlighting their importance for structure, dynamics, and electronic properties; a computed NQE shift of 0.6 eV for the band gap and absorption spectrum illustrates the latter. Giving access to both structure and dynamics of condensed-phase systems, non-local electron correlation will increasingly be used to study systems where weak interactions are of paramount importance. PMID:26254660
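For reference, the radial distribution function g(r) used above to characterize liquid structure is computed from particle coordinates roughly as follows. This is a minimal sketch with random (ideal-gas) coordinates, for which g(r) is flat near 1; an MP2/RPA water trajectory would instead show the oxygen-oxygen peaks. Box size, particle count, and binning are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 12.4                                  # box edge (arbitrary units, assumed)
N = 64                                    # number of particles (assumed)
pos = rng.uniform(0.0, L, size=(N, 3))    # ideal-gas stand-in for a trajectory

# Minimum-image pair distances under periodic boundary conditions
d = pos[:, None, :] - pos[None, :, :]
d -= L * np.round(d / L)
r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(N, k=1)]

# Histogram pair distances up to L/2 and normalize by ideal-gas shell counts:
# expected unordered pairs per shell = 0.5 * N * rho * V_shell
edges = np.linspace(0.0, L / 2.0, 41)
hist, _ = np.histogram(r, bins=edges)
shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
rho = N / L ** 3
g = hist / (0.5 * N * rho * shell)

print("g(r) in the outer bins:", g[-5:].round(2))
```

In production codes the histogram is additionally averaged over many frames, which is what turns this noisy single-snapshot estimate into the smooth curves compared against experiment.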
ERIC Educational Resources Information Center
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance…
ERIC Educational Resources Information Center
Criss, Michael M.; Shaw, Daniel S.; Moilanen, Kristin L.; Hitchings, Julia E.; Ingoldsby, Erin M.
2009-01-01
The purpose of this study was to test direct, additive, and mediation models involving family, neighborhood, and peer factors in relation to emerging antisocial behavior and social skills. Neighborhood danger, maternal depressive symptoms, and supportive parenting were assessed in early childhood. Peer group acceptance was measured in middle…
Adjustable Optical-Fiber Attenuator
NASA Technical Reports Server (NTRS)
Buzzetti, Mike F.
1994-01-01
Adjustable fiber-optic attenuator utilizes bending loss to reduce strength of light transmitted along it. Attenuator functions without introducing measurable back-reflection or insertion loss. Relatively insensitive to vibration and changes in temperature. Potential applications include cable television, telephone networks, other signal-distribution networks, and laboratory instrumentation.
Langseth, Brian J.; Jones, Michael L.; Riley, Stephen C.
2014-01-01
Ecopath with Ecosim (EwE) is a widely used modeling tool in fishery research and management. Ecopath requires a mass-balanced snapshot of a food web at a particular point in time, which Ecosim then uses to simulate changes in biomass over time. Initial inputs to Ecopath, including estimates of biomasses, production to biomass ratios, consumption to biomass ratios, and diets, rarely produce mass balance, and thus ad hoc changes to the inputs are required to balance the model. There has been little previous research into whether ad hoc changes made to achieve mass balance affect Ecosim simulations. We constructed an EwE model for the offshore community of Lake Huron and balanced the model using four contrasting but realistic methods. The four balancing methods were based on two approaches: in the first approach, production of unbalanced groups was increased by increasing either biomass or the production to biomass ratio, while in the second approach, consumption of predators on unbalanced groups was decreased by decreasing either biomass or the consumption to biomass ratio. We compared six simulation scenarios based on three alternative assumptions about the extent to which mortality rates of prey can change in response to changes in predator biomass (i.e., vulnerabilities) under perturbations to either fishing mortality or environmental production. Changes in simulated biomass values over time were used in a principal components analysis to assess the comparative effects of balancing method, vulnerabilities, and perturbation type. Vulnerabilities explained the most variation in biomass, followed by the type of perturbation; the choice of balancing method explained little of the overall variation in biomass. Under scenarios where changes in predator biomass caused large changes in mortality rates of prey (i.e., high vulnerabilities), variation in biomass was greater than when changes in predator biomass caused only small changes in mortality rates of prey (i.e., low vulnerabilities).
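The mass-balance condition behind these ad hoc adjustments follows from the Ecopath master equation, B_i (P/B)_i EE_i = sum_j B_j (Q/B)_j DC_ji + Y_i: a group is unbalanced when its ecotrophic efficiency EE exceeds 1. The three-group food web, all input values, and the EE target of 0.95 below are invented for illustration; they are not the Lake Huron inputs.

```python
import numpy as np

# Illustrative three-group web: predator, forage fish, zooplankton
B  = np.array([10.0, 15.0, 120.0])        # biomasses
PB = np.array([0.5, 1.2, 8.0])            # production/biomass ratios (1/yr)
QB = np.array([3.0, 6.0, 30.0])           # consumption/biomass ratios (1/yr)
DC = np.array([[0.0, 0.8, 0.2],           # DC[j, i]: fraction of predator j's
               [0.0, 0.0, 0.9],           # diet made up of group i
               [0.0, 0.0, 0.0]])
Y  = np.array([2.0, 5.0, 0.0])            # fishery catch

predation = (B * QB) @ DC                 # sum_j B_j (Q/B)_j DC_ji per prey i
EE = (predation + Y) / (B * PB)           # ecotrophic efficiency
print("EE before balancing:", EE.round(2))  # forage fish: EE > 1, unbalanced

# One balancing method from the first approach in the abstract: raise the
# unbalanced group's production/biomass ratio until EE falls below 1.
target = 0.95
PB_bal = PB.copy()
PB_bal[1] = (predation[1] + Y[1]) / (B[1] * target)
EE_bal = (predation + Y) / (B * PB_bal)
print("EE after balancing: ", EE_bal.round(2))
```

The other three methods in the study act analogously on B, QB, or predator biomass; each leaves EE at or below 1 but changes a different input, which is exactly the design contrast the simulations then probe.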
Allen, Edward J; Allen, Linda J S; Schurz, Henri
2005-07-01
A discrete-time Markov chain model, a continuous-time Markov chain model, and a stochastic differential equation model are compared for a population experiencing demographic and environmental variability. It is assumed that the environment produces random changes in the per capita birth and death rates, which are independent of the inherent random (demographic) variations in the number of births and deaths in any time interval. An existence and uniqueness result is proved for the stochastic differential equation system. Similarities between the models are demonstrated analytically, and computational results are provided to show that estimated persistence times for the three stochastic models are generally in good agreement when the models satisfy certain consistency conditions. PMID:15946709
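An Euler-Maruyama simulation of a scalar birth-death SDE illustrates the two noise sources the abstract distinguishes: demographic noise, which scales like sqrt((b + d) N), and environmental noise, which perturbs the rates themselves. The rates, noise intensity, step size, and absorbing treatment of extinction are assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)
b, d = 1.0, 0.9            # per-capita birth and death rates (assumed)
sigma_e = 0.05             # environmental noise intensity on the net rate
dt, steps, N0 = 0.01, 5000, 50.0

def simulate():
    """One Euler-Maruyama path of dN = (b-d)N dt
    + sqrt((b+d)N) dW_dem + sigma_e N dW_env, absorbed at 0."""
    N = N0
    for _ in range(steps):
        dW_dem = rng.normal(0.0, np.sqrt(dt))   # demographic increment
        dW_env = rng.normal(0.0, np.sqrt(dt))   # environmental increment
        N += ((b - d) * N * dt
              + np.sqrt(max((b + d) * N, 0.0)) * dW_dem
              + sigma_e * N * dW_env)
        if N <= 0.0:                            # extinction is absorbing
            return 0.0
    return N

final = np.array([simulate() for _ in range(200)])
print("extinct fraction over 200 paths:", (final == 0.0).mean())
```

Persistence-time estimates of the kind compared in the paper come from recording the first passage to zero instead of the final state; the simulation loop is otherwise identical.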
González-Domínguez, Elisa; Caffi, Tito; Ciliberti, Nicola; Rossi, Vittorio
2015-01-01
A mechanistic model for Botrytis cinerea on grapevine was developed. The model, which accounts for conidia production on various inoculum sources and for multiple infection pathways, considers two infection periods. During the first period ("inflorescences clearly visible" to "berries groat-sized"), the model calculates: i) infection severity on inflorescences and young clusters caused by conidia (SEV1). During the second period ("majority of berries touching" to "berries ripe for harvest"), the model calculates: ii) infection severity of ripening berries by conidia (SEV2); and iii) severity of berry-to-berry infection caused by mycelium (SEV3). The model was validated in 21 epidemics (vineyard × year combinations) between 2009 and 2014 in Italy and France. A discriminant function analysis (DFA) was used to: i) evaluate the ability of the model to predict mild, intermediate, and severe epidemics; and ii) assess how SEV1, SEV2, and SEV3 contribute to epidemics. The model correctly classified the severity of 17 of 21 epidemics. Results from DFA were also used to calculate the daily probabilities that an ongoing epidemic would be mild, intermediate, or severe. SEV1 was the most influential variable in discriminating between mild and intermediate epidemics, whereas SEV2 and SEV3 were relevant for discriminating between intermediate and severe epidemics. The model represents an improvement of previous B. cinerea models in viticulture and could be useful for making decisions about Botrytis bunch rot control. PMID:26457808
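The discriminant-function-analysis step can be sketched with a minimal linear discriminant in numpy: classify an epidemic as mild, intermediate, or severe by the nearest class mean in Mahalanobis distance under the pooled within-class covariance. The training (SEV1, SEV2, SEV3) values below are invented placeholders, not the 21 observed epidemics.

```python
import numpy as np

rng = np.random.default_rng(2)
classes = ["mild", "intermediate", "severe"]
true_means = np.array([[0.10, 0.10, 0.05],   # illustrative (SEV1, SEV2, SEV3)
                       [0.40, 0.30, 0.20],   # centers per class
                       [0.70, 0.60, 0.50]])
# Seven synthetic "epidemics" per class around each center
X = np.vstack([m + rng.normal(0.0, 0.05, size=(7, 3)) for m in true_means])
y = np.repeat(np.arange(3), 7)

# Pooled within-class covariance (the LDA metric) and per-class means
Sw = sum(np.cov(X[y == k].T) for k in range(3)) / 3.0
Sw_inv = np.linalg.inv(Sw)
class_means = np.array([X[y == k].mean(axis=0) for k in range(3)])

def classify(sev):
    """Assign the class whose mean is nearest in Mahalanobis distance."""
    d2 = [(sev - m) @ Sw_inv @ (sev - m) for m in class_means]
    return classes[int(np.argmin(d2))]

print(classify(np.array([0.65, 0.55, 0.45])))
```

The daily severity probabilities mentioned in the abstract correspond to replacing the hard argmin with posterior probabilities proportional to exp(-d2/2) times the class priors.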