Sample records for the query "modeling analyses demonstrated"

  1. 77 FR 41132 - Air Quality Implementation Plans; Alabama; Attainment Plan for the Alabama Portion of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-12

    ... modeling demonstration should include supporting technical analyses and descriptions of all relevant... PM2.5 and NOX. The attainment demonstration includes: Technical analyses that locate, identify, and... modeling analysis is a complex technical evaluation that began with selection of the modeling system. The...

  2. 40 CFR 51.1007 - Attainment demonstration and modeling requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Attainment demonstration and modeling... Implementation of PM2.5 National Ambient Air Quality Standards § 51.1007 Attainment demonstration and modeling... and must include inventory data, modeling results, and emission reduction analyses on which the State...

  3. Finite element analysis of structural engineering problems using a viscoplastic model incorporating two back stresses

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.; Halford, Gary R.

    1993-01-01

    The feasibility of a viscoplastic model incorporating two back stresses and a drag strength is investigated for performing nonlinear finite element analyses of structural engineering problems. To demonstrate suitability for nonlinear structural analyses, the model is implemented into a finite element program and analyses for several uniaxial and multiaxial problems are performed. Good agreement is shown between the results obtained using the finite element implementation and those obtained experimentally. The advantages of using advanced viscoplastic models for performing nonlinear finite element analyses of structural components are indicated.

  4. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  5. Evidence for the Continuous Latent Structure of Mania in the Epidemiologic Catchment Area from Multiple Latent Structure and Construct Validation Methodologies

    PubMed Central

    Prisciandaro, James J.; Roberts, John E.

    2011-01-01

    Background: Although psychiatric diagnostic systems have conceptualized mania as a discrete phenomenon, appropriate latent structure investigations testing this conceptualization are lacking. In contrast to these diagnostic systems, several influential theories of mania have suggested a continuous conceptualization. The present study examined whether mania has a continuous or discrete latent structure using a comprehensive approach including taxometric, information-theoretic latent distribution modeling (ITLDM), and predictive validity methodologies in the Epidemiologic Catchment Area (ECA) study. Methods: Eight dichotomous manic symptom items were submitted to a variety of latent structural analyses, including factor analyses, taxometric procedures, and ITLDM, in 10,105 ECA community participants. Additionally, a variety of continuous and discrete models of mania were compared in terms of their relative abilities to predict outcomes (i.e., health service utilization, internalizing and externalizing disorders, and suicidal behavior). Results: Taxometric and ITLDM analyses consistently supported a continuous conceptualization of mania. In ITLDM analyses, a continuous model of mania demonstrated 6.52:1 odds over the best fitting latent class model of mania. Factor analyses suggested that the continuous structure of mania was best represented by a single latent factor. Predictive validity analyses demonstrated a consistent superior ability of continuous models of mania relative to discrete models. Conclusions: The present study provided three independent lines of support for a continuous conceptualization of mania. The implications of a continuous model of mania are discussed. PMID:20507671

  6. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models to medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. The ARIMA approach is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analyses, and the relationships between the mortality-rate time series were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: estimation based on the standard normal distribution, estimation based on White's theory, and continuous-time estimation. Comparing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model yielded much smaller confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition method; the cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons, supporting a seasonal occurrence of childhood leukaemia in Hungary.
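
    A minimal sketch in Python (statsmodels) of the two techniques this record describes, ARIMA fitting with confidence intervals and seasonal decomposition, using synthetic stand-in series rather than the Hungarian registry data:

    ```python
    # Synthetic stand-ins for the paper's two analyses: an ARIMA fit to a
    # mortality-rate series, and seasonal decomposition of monthly counts.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.seasonal import seasonal_decompose

    rng = np.random.default_rng(0)
    idx = pd.date_range("1980-01-01", periods=240, freq="MS")

    # AR-like mortality-rate series (hypothetical values)
    rate = pd.Series(100 + 0.1 * np.cumsum(rng.normal(0, 0.5, 240)), index=idx)
    fit = ARIMA(rate, order=(1, 1, 0)).fit()
    print(fit.params)      # autoregressive coefficient estimate
    print(fit.conf_int())  # confidence intervals, cf. the paper's comparison

    # Monthly diagnosis counts with a built-in winter peak
    counts = pd.Series(20 + 5 * np.cos(2 * np.pi * idx.month / 12)
                       + rng.normal(0, 1, 240), index=idx)
    parts = seasonal_decompose(counts, model="additive", period=12)
    print(parts.seasonal.head(12))  # the recurring within-year pattern
    ```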

  7. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  8. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
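
    Both records describe the SPSS procedure; as a rough cross-check, the same kind of random-intercept growth model can be fit with Python's statsmodels. The six-wave data below are simulated stand-ins, not the P.A.T.H.S. dataset:

    ```python
    # Random-intercept LMM for repeated measures: observations within a
    # participant are no longer treated as independent (the GLM criticism
    # raised in the abstract). Data are simulated for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n, waves = 200, 6
    df = pd.DataFrame({
        "id":   np.repeat(np.arange(n), waves),
        "wave": np.tile(np.arange(waves), n),
    })
    subject_effect = rng.normal(0, 1.0, n)   # between-subject heterogeneity
    df["score"] = (10 + 0.5 * df["wave"]
                   + subject_effect[df["id"]]
                   + rng.normal(0, 0.8, len(df)))

    # groups= defines the clustering unit that gets a random intercept
    model = smf.mixedlm("score ~ wave", df, groups=df["id"]).fit()
    print(model.summary())
    ```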

  9. Design of demand side response model in energy internet demonstration park

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Liu, D. N.

    2017-08-01

    The implementation of demand side response can bring many benefits to the power system, users, and society, but many problems remain in actual operation. This paper first analyses the current situation and problems of demand side response. On this basis, it analyses the advantages of implementing demand side response in an energy Internet demonstration park. Finally, it designs three feasible demand side response modes for the energy Internet demonstration park.

  10. A comprehensive pharmacokinetic/pharmacodynamic analysis of the novel IGF1R/INSR inhibitor BI 893923 applying in vitro, in vivo and in silico modeling techniques.

    PubMed

    Titze, Melanie I; Schaaf, Otmar; Hofmann, Marco H; Sanderson, Michael P; Zahn, Stephan K; Quant, Jens; Lehr, Thorsten

    2016-06-01

    BI 893923 is a novel IGF1R/INSR tyrosine kinase inhibitor demonstrating anti-tumor efficacy and good tolerability. We aimed to characterize the relationship between BI 893923 plasma concentration, tumor biomarker modulation, tumor growth and hyperglycemia in mice using in silico modeling analyses. In vitro molecular and cellular assays were used to demonstrate the potency and selectivity of BI 893923. Diverse in vitro DMPK assays were used to characterize the compound's drug-like properties. Mice xenografted with human GEO tumors were treated with different doses of BI 893923 to demonstrate the compound's efficacy, biomarker modulation and tolerability. PK/PD analyses were performed using nonlinear mixed-effects modeling. BI 893923 demonstrated potent and selective molecular inhibition of the IGF1R and INSR and demonstrated attractive drug-like properties (permeability, bioavailability). BI 893923 dose-dependently reduced GEO tumor growth and demonstrated good tolerability, characterized by transient hyperglycemia and normal body weight gain. A population PK/PD model was developed, which established relationships between BI 893923 pharmacokinetics, hyperglycemia, pIGF1R reduction and tumor growth. BI 893923 demonstrates molecular properties consistent with a highly attractive inhibitor of the IGF1R/INSR. A generic PK/PD model was developed to support preclinical drug development and dose finding in mice.
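
    The abstract names the ingredients of the population PK/PD analysis without giving equations. As a hedged illustration of the structural layer such models build on, the sketch below evaluates a one-compartment oral-absorption PK model (the Bateman equation) with invented parameters; the nonlinear mixed-effects and tumor-growth components of the actual analysis are omitted:

    ```python
    # One-compartment PK with first-order absorption (Bateman equation).
    # All parameter values are hypothetical, chosen only for illustration.
    import numpy as np

    def conc_oral_1cpt(t, dose=10.0, F=0.8, ka=1.2, ke=0.25, V=5.0):
        """Plasma concentration after a single oral dose."""
        return (F * dose * ka) / (V * (ka - ke)) * (
            np.exp(-ke * t) - np.exp(-ka * t))

    t = np.linspace(0, 24, 9)               # hours post-dose
    print(np.round(conc_oral_1cpt(t), 3))   # rises to Tmax, then declines
    ```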

  11. 3-d finite element model development for biomechanics: a software demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollerbach, K.; Hollister, A.M.; Ashby, E.

    1997-03-01

    Finite element analysis is becoming an increasingly important part of biomechanics and orthopedic research, as computational resources become more powerful, and data handling algorithms become more sophisticated. Until recently, tools with sufficient power did not exist or were not accessible to adequately model complicated, three-dimensional, nonlinear biomechanical systems. In the past, finite element analyses in biomechanics have often been limited to two-dimensional approaches, linear analyses, or simulations of single tissue types. Today, we have the resources to model fully three-dimensional, nonlinear, multi-tissue, and even multi-joint systems. The authors will present the process of developing these kinds of finite element models, using human hand and knee examples, and will demonstrate their software tools.

  12. Regional analyses of labor markets and demography: a model based Norwegian example.

    PubMed

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  13. Development of steady-state model for MSPT and detailed analyses of receiver

    NASA Astrophysics Data System (ADS)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    Molten salt parabolic trough system (MSPT) uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. A demonstration plant for MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of the incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model against test data from the demonstration plant, together with detailed analyses, carried out for solar collector assembly (SCA) development, of the relation between flow rate and the temperature difference across the metal tube of the receiver and of the effect of defocus angle on the concentrated power rate. The model is accurate to within 2.0% systematic error and 4.2% random error. The relationships between flow rate and temperature difference on the metal tube, and the effect of defocus angle on the concentrated power rate, are presented.

  14. SAM International Case Studies: DPV Analysis in Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCall, James D

    Presentation demonstrates the use of the System Advisor Model (SAM) in international analyses, specifically Mexico. Two analyses are discussed with relation to SAM modelling efforts: 1) Customer impacts from changes to net metering and billing agreements and 2) Potential benefits of PV for Mexican solar customers, the Mexican Treasury, and the environment. Along with the SAM analyses, integration of the International Utility Rate Database (I-URDB) with SAM and future international SAM work are discussed. Presentation was created for the International Solar Energy Society's (ISES) webinar titled 'International use of the NREL System Advisor Model (SAM) with case studies'.

  15. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
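
    A minimal sketch of the recipe the abstract describes: sample input parameters, run a toy stand-in for the decision model, standardize the inputs, and regress the outcome on them so that the intercept approximates the base-case outcome and the coefficients rank parameter importance. Names and values are illustrative, not the authors' cancer cure model:

    ```python
    # Linear regression metamodel over a probabilistic sensitivity analysis.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 10_000
    cure_rate = rng.normal(0.30, 0.05, n)     # sampled PSA input parameters
    cost_tx   = rng.normal(50_000, 8_000, n)

    # Stand-in simulation outcome: net benefit per cohort
    net_benefit = 100_000 * cure_rate - cost_tx + rng.normal(0, 2_000, n)

    # Standardize inputs so coefficients are comparable across parameters
    X = np.column_stack([(cure_rate - cure_rate.mean()) / cure_rate.std(),
                         (cost_tx - cost_tx.mean()) / cost_tx.std()])
    fit = sm.OLS(net_benefit, sm.add_constant(X)).fit()
    print(fit.params)  # [intercept ~ base-case outcome, effect per input]
    ```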

  16. Developing the Model of Fuel Injection Process Efficiency Analysis for Injector for Diesel Engines

    NASA Astrophysics Data System (ADS)

    Anisimov, M. Yu; Kayukov, S. S.; Gorshkalev, A. A.; Belousov, A. V.; Gallyamov, R. E.; Lysenko, Yu D.

    2018-01-01

    The article proposes an approach for assessing the quality of fuel injection by an injector, built on the development of calculation blocks in a common injector model within LMS Imagine.Lab AMESim. The parameters of the injector model correspond to a production Common Rail solenoid injector. The possibilities of the approach are demonstrated with results from modelling a modified injector. The research results show the advantages of the proposed approach to assessing fuel injection quality.

  17. The Relationships between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    ERIC Educational Resources Information Center

    Mendonça, Paula Cristina Cardoso; Justi, Rosária

    2013-01-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities…

  18. Geometric Analyses of Rotational Faults.

    ERIC Educational Resources Information Center

    Schwert, Donald Peters; Peck, Wesley David

    1986-01-01

    Describes the use of analysis of rotational faults in undergraduate structural geology laboratories to provide students with applications of both orthographic and stereographic techniques. A demonstration problem is described, and an orthographic/stereographic solution and a reproducible black model demonstration pattern are provided. (TW)

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, Christopher J.; Freels, James D.; Hobbs, Randy W.

    There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 (238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 (237Np) dioxide (NpO2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules in order to generate an experimental knowledge base that incorporates initial non-linear contact heat transfer and fission gas equations, (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using limited available knowledge of fabrication and irradiation characteristics, and (3) the most recent and comprehensive modeling effort of a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point. The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs that were necessary due to the limiting fabrication and irradiation data available. The calculated parameters in the final production target model are the most accurate and comprehensive, while still conservative. Over 210 permutations in irradiation time and position were evaluated, and are supported by the most recent inputs and highest fidelity methodology. The results of these analyses show that the models presented in this report provide a robust and reliable basis for previous, current and future experiment safety analyses. In addition, they reveal an evolving knowledge of the steady-state behavior of the NpO2/Al pellets under irradiation for a variety of target encapsulations and potential conditions.

  20. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator and termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
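
    The ADM rests on approximately inverting the filter operator. A hedged one-dimensional illustration using a truncated van Cittert series, Q_N = sum_{k=0..N} (I - G)^k, with a simple top-hat filter; neither the filter nor N is drawn from the Stolz-Adams implementation:

    ```python
    # Approximate deconvolution in 1-D: filter a field, then recover it.
    import numpy as np

    def tophat_filter(u):
        """3-point top-hat filter G with periodic wrap-around."""
        return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

    def approx_deconvolve(u_bar, n_terms=5):
        """Apply Q_N = sum_{k=0..n_terms} (I - G)^k to the filtered field."""
        term, total = u_bar.copy(), u_bar.copy()
        for _ in range(n_terms):
            term = term - tophat_filter(term)   # one more power of (I - G)
            total += term
        return total

    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    u = np.sin(x) + 0.3 * np.sin(8 * x)         # field with a fine-scale mode
    u_bar = tophat_filter(u)
    u_star = approx_deconvolve(u_bar)
    print("filtered error:   ", np.abs(u - u_bar).max())
    print("deconvolved error:", np.abs(u - u_star).max())
    ```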

  1. Demonstrating Rapid Qualitative Elemental Analyses of Participant-Supplied Objects at a Public Outreach Event

    ERIC Educational Resources Information Center

    Schwarz, Gunnar; Burger, Marcel; Guex, Kevin; Gundlach-Graham, Alexander; Käser, Debora; Koch, Joachim; Velicsanyi, Peter; Wu, Chung-Che; Günther, Detlef; Hattendorf, Bodo

    2016-01-01

    A public demonstration of laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) for fast and sensitive qualitative elemental analysis of solid everyday objects is described. This demonstration served as a showcase model for modern instrumentation (and for elemental analysis, in particular) to the public. Several steps were made to…

  2. Polytomous Rasch Models in Counseling Assessment

    ERIC Educational Resources Information Center

    Willse, John T.

    2017-01-01

    This article provides a brief introduction to the Rasch model. Motivation for using Rasch analyses is provided. Important Rasch model concepts and key aspects of result interpretation are introduced, with major points reinforced using a simulation demonstration. Concrete guidelines are provided regarding sample size and the evaluation of items.

  3. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the Opava River near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
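
    As a hedged sketch of the core computation, the function below performs Boolean line-of-sight along a single terrain transect by tracking the running horizon angle, the same idea that the local and global horizon viewsheds generalize to 2-D; the profile is synthetic:

    ```python
    # Boolean visibility along one transect: a cell is visible when its
    # elevation angle from the observer exceeds the horizon seen so far.
    import numpy as np

    def visible_along_profile(elev, observer_height=1.6, cell_size=1.0):
        eye = elev[0] + observer_height
        max_angle = -np.inf
        vis = np.zeros(elev.size, dtype=bool)
        for i in range(1, elev.size):
            angle = (elev[i] - eye) / (i * cell_size)  # tangent of angle
            vis[i] = angle > max_angle                 # above the horizon?
            max_angle = max(max_angle, angle)
        return vis

    rng = np.random.default_rng(8)
    profile = np.cumsum(rng.normal(0, 0.5, 200)) + 10  # synthetic relief
    print("visible cells:", visible_along_profile(profile).sum(),
          "of", profile.size)
    ```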

  4. 77 FR 69399 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Attainment Plan for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-19

    ... results of speciation data analyses, air quality modeling studies, chemical tracer studies, emission... Demonstration 1. Pollutants Addressed 2. Emission Inventory Requirements 3. Modeling 4. Reasonably Available... modeling (40 CFR 51.1007) that is performed in accordance with EPA modeling guidance (EPA-454/B-07-002...

  5. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    USGS Publications Warehouse

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  6. Statistical methods for incomplete data: Some results on model misspecification.

    PubMed

    McIsaac, Michael; Cook, R J

    2017-02-01

    Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
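
    A minimal simulation sketch of one estimator the paper studies, inverse probability weighting with a logistic model for the missingness; data, names, and coefficients are invented, and the auxiliary model here is deliberately simple:

    ```python
    # IPW estimate of a mean with outcomes missing at random, contrasted
    # with the complete-case analysis the paper compares against.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 5_000
    x = rng.normal(size=n)                      # fully observed covariate
    y = 2.0 + 1.5 * x + rng.normal(size=n)      # outcome, partially missing
    p_obs = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))  # observation depends on x
    r = rng.uniform(size=n) < p_obs             # r = 1 if y is observed

    # Auxiliary (missingness) model: logistic regression of r on x
    pi_hat = sm.Logit(r.astype(float), sm.add_constant(x)).fit(disp=0).predict()

    naive = y[r].mean()                                # complete cases only
    ipw = np.sum(r * y / pi_hat) / np.sum(r / pi_hat)  # reweighted estimate
    print(f"true 2.00, complete-case {naive:.2f}, IPW {ipw:.2f}")
    ```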

  7. Argumentation in Science Education: A Model-based Framework

    NASA Astrophysics Data System (ADS)

    Böttcher, Florian; Meisert, Anke

    2011-02-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model, if necessary in relation to alternative models. Second, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models is presented to demonstrate the explicatory power and depth of the model-based perspective. Primarily, the framework of Toulmin to structurally analyse arguments is contrasted with the approach presented here. It is demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second more complex argumentative sequence is analysed according to the proposed analytical scheme to give a broader impression of its potential in practical use.

  8. Training Opportunities and Employee Exhaustion in Call Centres: Mediation by Psychological Contract Fulfilment

    ERIC Educational Resources Information Center

    Chambel, Maria Jose; Castanheira, Filipa

    2012-01-01

    The aim of this study is to analyse psychological contract fulfilment as a mechanism through which training affects stress in call centres. The hypotheses were tested on a sample of 412 call centre operators, using structural equation modelling to analyse their survey responses. Our results demonstrated that training is negatively related to…

  9. Prospect Theory and Coercive Bargaining

    ERIC Educational Resources Information Center

    Butler, Christopher K.

    2007-01-01

    Despite many applications of prospect theory's concepts to explain political and strategic phenomena, formal analyses of strategic problems using prospect theory are rare. Using Fearon's model of bargaining, Tversky and Kahneman's value function, and an existing probability weighting function, I construct a model that demonstrates the differences…

  10. Ambiguities in model-independent partial-wave analysis

    NASA Astrophysics Data System (ADS)

    Krinner, F.; Greenwald, D.; Ryabchikov, D.; Grube, B.; Paul, S.

    2018-06-01

    Partial-wave analysis is an important tool for analyzing large data sets in hadronic decays of light and heavy mesons. It commonly relies on the isobar model, which assumes multihadron final states originate from successive two-body decays of well-known undisturbed intermediate states. Recently, analyses of heavy-meson decays and diffractively produced states have attempted to overcome the strong model dependences of the isobar model. These analyses have overlooked that model-independent, or freed-isobar, partial-wave analysis can introduce mathematical ambiguities in results. We show how these ambiguities arise and present general techniques for identifying their presence and for correcting for them. We demonstrate these techniques with specific examples in both heavy-meson decay and pion-proton scattering.

  11. Model-size reduction for the buckling and vibration analyses of anisotropic panels

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Whitworth, S. L.

    1986-01-01

    A computational procedure is presented for reducing the size of the model used in the buckling and vibration analyses of symmetric anisotropic panels to that of the corresponding orthotropic model. The key elements of the procedure are the application of an operator splitting technique through the decomposition of the material stiffness matrix of the panel into the sum of orthotropic and nonorthotropic (anisotropic) parts and the use of a reduction method through successive application of the finite element method and the classical Rayleigh-Ritz technique. The effectiveness of the procedure is demonstrated by numerical examples.

  12. The importance of age-related differences in prospective memory: Evidence from diffusion model analyses.

    PubMed

    Ball, B Hunter; Aschenbrenner, Andrew J

    2017-06-09

    Event-based prospective memory (PM) refers to relying on environmental cues to trigger retrieval of a deferred action plan from long-term memory. Considerable research has demonstrated PM declines with increased age. Despite efforts to better characterize the attentional processes that underlie these decrements, the majority of research has relied on measures of central tendency to inform theoretical accounts of PM that may not entirely capture the underlying dynamics involved in allocating attention to intention-relevant information. The purpose of the current study was to examine the utility of the diffusion model to better understand the cognitive processes underlying age-related differences in PM. Results showed that emphasizing the importance of the PM intention increased cue detection selectively for older adults. Standard cost analyses revealed that PM importance increased mean response times and accuracy, but not differentially for young and older adults. Consistent with this finding, diffusion model analyses demonstrated that PM importance increased response caution as evidenced by increased boundary separation. However, the selective benefit in cue detection for older adults may reflect peripheral target-checking processes as indicated by changes in nondecision time. These findings highlight the use of modeling techniques to better characterize the processes underlying the relations among aging, attention, and PM.
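
    A hedged forward simulation of the diffusion model logic invoked above: wider boundary separation (greater response caution) trades speed for accuracy. Parameter values are illustrative, not fitted to the study's data:

    ```python
    # Drift-diffusion simulation: evidence accumulates to one of two
    # boundaries; boundary separation controls the speed-accuracy tradeoff.
    import numpy as np

    def simulate_ddm(drift=0.15, boundary=1.0, ndt=0.3,
                     dt=0.001, n=2_000, seed=4):
        rng = np.random.default_rng(seed)
        rts, correct = [], []
        for _ in range(n):
            evidence, t = 0.0, 0.0
            while abs(evidence) < boundary:
                evidence += drift * dt + rng.normal(0, np.sqrt(dt))
                t += dt
            rts.append(t + ndt)            # add nondecision time
            correct.append(evidence > 0)   # upper boundary = correct
        return np.mean(rts), np.mean(correct)

    for a in (0.8, 1.4):                   # low vs. high response caution
        rt, acc = simulate_ddm(boundary=a)
        print(f"boundary {a}: mean RT {rt:.2f}s, accuracy {acc:.2%}")
    ```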

  13. Causal Mediation Analysis of Survival Outcome with Multiple Mediators.

    PubMed

    Huang, Yen-Tsung; Yang, Hwai-I

    2017-05-01

    Mediation analyses have been a popular approach to investigate the effect of an exposure on an outcome through a mediator. Mediation models with multiple mediators have been proposed for continuous and dichotomous outcomes. However, development of multimediator models for survival outcomes is still limited. We present methods for multimediator analyses using three survival models: Aalen additive hazard models, Cox proportional hazard models, and semiparametric probit models. Effects through mediators can be characterized by path-specific effects, for which definitions and identifiability assumptions are provided. We derive closed-form expressions for path-specific effects for the three models, which are intuitively interpreted using a causal diagram. Mediation analyses using Cox models under the rare-outcome assumption and Aalen additive hazard models consider effects on log hazard ratio and hazard difference, respectively; analyses using semiparametric probit models consider effects on difference in transformed survival time and survival probability. The three models were applied to a hepatitis study where we investigated effects of hepatitis C on liver cancer incidence mediated through baseline and/or follow-up hepatitis B viral load. The three methods show consistent results on respective effect scales, which suggest an adverse estimated effect of hepatitis C on liver cancer not mediated through hepatitis B, and a protective estimated effect mediated through the baseline (and possibly follow-up) of hepatitis B viral load. Causal mediation analyses of survival outcome with multiple mediators are developed for additive hazard and proportional hazard and probit models with utility demonstrated in a hepatitis study.

  14. Seal Analysis for the Ares-I Upper Stage Fuel Tank Manhole Cover

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Wingate, Robert J.

    2010-01-01

    Techniques for studying the performance of Naflex pressure-assisted seals in the Ares-I Upper Stage liquid hydrogen tank manhole cover seal joint are explored. To assess the feasibility of using the identical seal design for the Upper Stage as was used for the Space Shuttle External Tank manhole covers, a preliminary seal deflection analysis using the ABAQUS commercial finite element software is employed. The ABAQUS analyses are performed using three-dimensional symmetric wedge finite element models. This analysis technique is validated by first modeling a heritage External Tank liquid hydrogen tank manhole cover joint and correlating the results to heritage test data. Once the technique is validated, the Upper Stage configuration is modeled. The Upper Stage analyses are performed at 1.4 times the expected pressure to comply with the Constellation Program factor of safety requirement on joint separation. Results from the analyses performed with the External Tank and Upper Stage models demonstrate the effects of several modeling assumptions on the seal deflection. The analyses for Upper Stage show that the integrity of the seal is successfully maintained.

  15. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach

    PubMed Central

    Senior, Alistair M.; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J.

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs), show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments. PMID:26858671
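
    A minimal sketch of the pipeline the abstract outlines, with a crude random-contest simulation standing in for the nutritionally explicit ABM; the networkx metrics at the end are examples of quantities that could be related to fitness:

    ```python
    # Simulate pairwise contests, build a directed dominance network, and
    # compute network metrics. All parameters are invented for illustration.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(5)
    n = 12
    strength = rng.gamma(2.0, 1.0, n)          # latent competitive ability

    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.6:             # not every pair interacts
                p_i_wins = strength[i] / (strength[i] + strength[j])
                winner, loser = (i, j) if rng.random() < p_i_wins else (j, i)
                G.add_edge(winner, loser)      # edge: winner -> loser

    out_deg = np.array([G.out_degree(k) for k in range(n)])
    print("most dominant individual:", int(out_deg.argmax()))
    # Does network position recover the latent ability driving contests?
    print("corr(strength, wins):", np.corrcoef(strength, out_deg)[0, 1].round(2))
    ```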

  16. Over-fitting Time Series Models of Air Pollution Health Effects: Smoothing Tends to Bias Non-Null Associations Towards the Null.

    EPA Science Inventory

    Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...

  17. Does Attention-Deficit/Hyperactivity Disorder Have a Dimensional Latent Structure? A Taxometric Analysis

    PubMed Central

    Marcus, David K.; Barry, Tammy D.

    2010-01-01

    An understanding of the latent structure of attention-deficit/hyperactivity disorder (ADHD) is essential for developing causal models of this disorder. Although some researchers have presumed that ADHD is dimensional and others have assumed that it is taxonic, there has been relatively little research directly examining the latent structure of ADHD. The authors conducted a set of taxometric analyses using data from the NICHD Study of Early Child Care and Youth Development (ns between 667 and 1,078). The results revealed a dimensional latent structure across a variety of different analyses and sets of indicators for inattention, hyperactivity/impulsivity, and ADHD. Furthermore, analyses of correlations with associated features indicated that dimensional models demonstrated stronger validity coefficients with these criterion measures than dichotomous models. These findings jibe with recent research on the genetic basis of ADHD and with contemporary models of ADHD. PMID:20973595

  18. Does attention-deficit/hyperactivity disorder have a dimensional latent structure? A taxometric analysis.

    PubMed

    Marcus, David K; Barry, Tammy D

    2011-05-01

    An understanding of the latent structure of attention-deficit/hyperactivity disorder (ADHD) is essential for developing causal models of this disorder. Although some researchers have presumed that ADHD is dimensional and others have assumed that it is taxonic, there has been relatively little research directly examining the latent structure of ADHD. The authors conducted a set of taxometric analyses using data from the NICHD Study of Early Child Care and Youth Development (ns between 667 and 1,078). The results revealed a dimensional latent structure across a variety of different analyses and sets of indicators for inattention, hyperactivity/impulsivity, and ADHD. Furthermore, analyses of correlations with associated features indicated that dimensional models demonstrated stronger validity coefficients with these criterion measures than dichotomous models. These findings jibe with recent research on the genetic basis of ADHD and with contemporary models of ADHD.

  19. Unified constitutive models for high-temperature structural applications

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.

    1988-01-01

    Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.

  20. Are Gender Differences in Perceived and Demonstrated Technology Literacy Significant? It Depends on the Model

    ERIC Educational Resources Information Center

    Hohlfeld, Tina N.; Ritzhaupt, Albert D.; Barron, Ann E.

    2013-01-01

    This paper examines gender differences related to Information and Communication Technology (ICT) literacy using two valid and internally consistent measures with eighth grade students (N = 1,513) from Florida public schools. The results of t test statistical analyses, which examined only gender differences in demonstrated and perceived ICT skills,…

  1. Effects of Condensation on Peri-implant Bone Density and Remodeling

    PubMed Central

    Wang, L.; Wu, Y.; Perez, K.C.; Hyman, S.; Brunski, J.B.; Tulu, U.; Bao, C.; Salmon, B.; Helms, J.A.

    2017-01-01

    Bone condensation is thought to densify interfacial bone and thus improve implant primary stability, but scant data substantiate either claim. We developed a murine oral implant model to test these hypotheses. Osteotomies were created in healed maxillary extraction sites 1) by drilling or 2) by drilling followed by stepwise condensation with tapered osteotomes. Condensation increased interfacial bone density, as measured by a significant change in bone volume/total volume and trabecular spacing, but it simultaneously damaged the bone. On postimplant day 1, the condensed bone interface exhibited microfractures and osteoclast activity. Finite element modeling, mechanical testing, and immunohistochemical analyses at multiple time points throughout the osseointegration period demonstrated that condensation caused very high interfacial strains, marginal bone resorption, and no improvement in implant stability. Collectively, these multiscale analyses demonstrate that condensation does not positively contribute to implant stability. PMID:28048963

  2. Effects of Condensation on Peri-implant Bone Density and Remodeling.

    PubMed

    Wang, L; Wu, Y; Perez, K C; Hyman, S; Brunski, J B; Tulu, U; Bao, C; Salmon, B; Helms, J A

    2017-04-01

    Bone condensation is thought to densify interfacial bone and thus improve implant primary stability, but scant data substantiate either claim. We developed a murine oral implant model to test these hypotheses. Osteotomies were created in healed maxillary extraction sites 1) by drilling or 2) by drilling followed by stepwise condensation with tapered osteotomes. Condensation increased interfacial bone density, as measured by a significant change in bone volume/total volume and trabecular spacing, but it simultaneously damaged the bone. On postimplant day 1, the condensed bone interface exhibited microfractures and osteoclast activity. Finite element modeling, mechanical testing, and immunohistochemical analyses at multiple time points throughout the osseointegration period demonstrated that condensation caused very high interfacial strains, marginal bone resorption, and no improvement in implant stability. Collectively, these multiscale analyses demonstrate that condensation does not positively contribute to implant stability.

  3. A mathematical model of the mevalonate cholesterol biosynthesis pathway.

    PubMed

    Pool, Frances; Currie, Richard; Sweby, Peter K; Salazar, José Domingo; Tindall, Marcus J

    2018-04-14

    We formulate, parameterise and analyse a mathematical model of the mevalonate pathway, a key pathway in the synthesis of cholesterol. Of high clinical importance, the pathway incorporates rate limiting enzymatic reactions with multiple negative feedbacks. In this work we investigate the pathway dynamics and demonstrate that rate limiting steps and negative feedbacks within it act in concert to tightly regulate intracellular cholesterol levels. Formulated using the theory of nonlinear ordinary differential equations and parameterised in the context of a hepatocyte, the governing equations are analysed numerically and analytically. Sensitivity and mathematical analysis demonstrate the importance of the two rate limiting enzymes 3-hydroxy-3-methylglutaryl-CoA reductase and squalene synthase in controlling the concentration of substrates within the pathway as well as that of cholesterol. The role of individual feedbacks, both global (between that of cholesterol and sterol regulatory element-binding protein 2; SREBP-2) and local internal (between substrates in the pathway), is investigated. We find that whilst the cholesterol SREBP-2 feedback regulates the overall system dynamics, local feedbacks activate within the pathway to tightly regulate the overall cellular cholesterol concentration. The network stability is analysed by constructing a reduced model of the full pathway and is shown to exhibit one real, stable steady-state. We close by addressing the biological question as to how farnesyl-PP levels are affected by CYP51 inhibition, and demonstrate that the regulatory mechanisms within the network work in unison to ensure they remain bounded.
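
    The equations themselves are not reproduced in the abstract, so the sketch below integrates a generic two-variable pathway with end-product repression, a stand-in for the SREBP-2 feedback, to show the kind of single stable steady state reported; all rate constants are invented:

    ```python
    # Toy negative-feedback pathway: substrate S is produced at a rate
    # repressed by cholesterol-like product C, converted through a
    # saturable (rate-limited) step, and C turns over at first order.
    import numpy as np
    from scipy.integrate import solve_ivp

    def pathway(t, y, v0=1.0, K=0.5, vmax=1.0, Km=0.3, kdeg=0.4):
        S, C = y
        dS = v0 * K / (K + C) - vmax * S / (Km + S)  # repressed production
        dC = vmax * S / (Km + S) - kdeg * C          # synthesis - turnover
        return [dS, dC]

    sol = solve_ivp(pathway, (0, 50), [0.1, 0.1])
    print("steady state (S, C) ~", sol.y[:, -1])     # one stable fixed point
    ```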

  4. Application of Interface Technology in Progressive Failure Analysis of Composite Panels

    NASA Technical Reports Server (NTRS)

    Sleight, D. W.; Lotts, C. G.

    2002-01-01

    A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.

  5. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    PubMed

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
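
    The Dahlberg formula mentioned above has a simple closed form, d = sqrt(sum(d_i^2) / (2n)) over n paired duplicate measurements; a short worked example with made-up caliper and digital readings:

    ```python
    # Dahlberg error for paired duplicate measurements (values invented).
    import numpy as np

    caliper = np.array([8.12, 7.95, 6.40, 9.01, 7.55])  # mm
    digital = np.array([8.10, 7.99, 6.38, 9.05, 7.50])  # mm
    diff = caliper - digital
    dahlberg = np.sqrt(np.sum(diff**2) / (2 * len(diff)))
    print(f"Dahlberg error: {dahlberg:.3f} mm")
    ```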

  6. Thermal analysis of combinatorial solid geometry models using SINDA

    NASA Technical Reports Server (NTRS)

    Gerencser, Diane; Radke, George; Introne, Rob; Klosterman, John; Miklosovic, Dave

    1993-01-01

    Algorithms have been developed using Monte Carlo techniques to determine the thermal network parameters necessary to perform a finite difference analysis on Combinatorial Solid Geometry (CSG) models. Orbital and laser fluxes as well as internal heat generation are modeled to facilitate satellite modeling. The results of the thermal calculations are used to model the infrared (IR) images of targets and assess target vulnerability. Sample analyses and validation are presented which demonstrate code products.

  7. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    PubMed Central

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
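
    A hedged sketch of the classification step: train a Gaussian mixture model on trait combinations and assign vegetation classes. The two synthetic clusters below merely stand in for the LMA-Nmass-LAI training data; column meanings and values are invented:

    ```python
    # GMM classification over a 3-trait space (LMA, Nmass, LAI stand-ins).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    # Two synthetic "vegetation types" in trait space
    forest = rng.normal([120, 1.8, 5.0], [15, 0.2, 0.8], size=(300, 3))
    grass  = rng.normal([60, 2.6, 2.0], [10, 0.3, 0.5], size=(300, 3))
    X = np.vstack([forest, grass])

    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(X)
    labels = gmm.predict(X)              # hard class assignment
    probs = gmm.predict_proba(X[:3])     # soft membership, per component
    print(labels[:5], probs.round(2), sep="\n")
    ```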

  8. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
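
    A small simulation makes the warning concrete: repeated measures on the same hosts leave serially correlated residuals that a plain regression ignores, which a Durbin-Watson check exposes and a mixed-effects model absorbs. All data are synthetic:

    ```python
    # Repeated parasitaemia measures per host: naive OLS leaves correlated
    # residuals; a per-host random intercept accounts for the grouping.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(7)
    hosts, days = 20, 15
    df = pd.DataFrame({
        "host": np.repeat(np.arange(hosts), days),
        "day":  np.tile(np.arange(days), hosts),
    })
    host_effect = rng.normal(0, 2.0, hosts)
    df["parasitaemia"] = (5 + 0.3 * df["day"] + host_effect[df["host"]]
                          + rng.normal(0, 1.0, len(df)))

    ols = smf.ols("parasitaemia ~ day", df).fit()
    print("Durbin-Watson:", durbin_watson(ols.resid))  # far from 2: trouble

    mixed = smf.mixedlm("parasitaemia ~ day", df, groups=df["host"]).fit()
    print(mixed.params)
    ```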

  9. Computational Modelling and Children's Expressions of Signal and Noise

    ERIC Educational Resources Information Center

    Ainley, Janet; Pratt, Dave

    2017-01-01

    Previous research has demonstrated how young children can identify the signal in data. In this exploratory study we considered how they might also express meanings for noise when creating computational models using recent developments in software tools. We conducted extended clinical interviews with four groups of 11-year-olds and analysed the…

  10. Modifications of steam condensation model implemented in commercial solver

    NASA Astrophysics Data System (ADS)

    Sova, Libor; Jun, Gukchol; Šťastný, Miroslav

    2017-09-01

    Nucleation theory, droplet growth theory, and the methods by which they are incorporated into numerical solvers are crucial factors for proper wet steam modelling. Unfortunately, they are still covered by a cloud of uncertainty, and therefore some calibration of these models against reliable experimental results is important for practical analyses of steam turbines. This article demonstrates how the wet steam model incorporated into the commercial solver ANSYS CFX can be calibrated.

  11. Development and Application of a Category System to Describe Pre-Service Science Teachers' Activities in the Process of Scientific Modelling

    NASA Astrophysics Data System (ADS)

    Krell, Moritz; Walzer, Christine; Hergert, Susann; Krüger, Dirk

    2017-09-01

    As part of their professional competencies, science teachers need elaborate meta-modelling knowledge as well as modelling skills in order to guide and monitor the modelling practices of their students. However, qualitative studies about (pre-service) science teachers' modelling practices are rare. This study provides a category system suitable for analysing and describing pre-service science teachers' modelling activities and for inferring modelling strategies. The category system was developed from theoretical considerations and was inductively refined within the methodological frame of qualitative content analysis. For the inductive refinement, the modelling practices of pre-service teachers (n = 4) were video-taped and analysed. In this study, one case was selected to demonstrate the application of the category system to infer modelling strategies. The contribution of this study to science education research and science teacher education is discussed.

  12. Model analysis of the link between interest rates and crashes

    NASA Astrophysics Data System (ADS)

    Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft

    2016-09-01

    We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.

  13. Break modeling for RELAP5 analyses of ISP-27 Bethsy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petelin, S.; Gortnar, O.; Mavko, B.

    This paper presents pre- and posttest analyses of International Standard Problem (ISP) 27 on the Bethsy facility, together with separate RELAP5 break model tests that use the measured boundary condition at the break inlet. It also describes modifications that significantly improved the model response in the posttest simulations. Calculations were performed using the RELAP5/MOD2/36.05 and RELAP5/MOD3.5M5 codes on MicroVAX, SUN, and CONVEX computers. Bethsy is an integral test facility that simulates a typical 900-MW (electric) Framatome pressurized water reactor. The ISP-27 scenario involves a 2-in. cold-leg break without HPSI and with delayed operator procedures for secondary system depressurization.

  14. Reduced size first-order subsonic and supersonic aeroelastic modeling

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Various aeroelastic, aeroservoelastic, dynamic-response, and sensitivity analyses are based on a time-domain first-order (state-space) formulation of the equations of motion. The formulation of this paper is based on the minimum-state (MS) aerodynamic approximation method, which yields a low number of aerodynamic augmenting states. Modifications of the MS and the physical weighting procedures make the modeling method even more attractive. The flexibility of constraint selection is increased without increasing the approximation problem size; the accuracy of dynamic residualization of high-frequency modes is improved; and the resulting model is less sensitive to parametric changes in subsequent analyses. Applications to subsonic and supersonic cases demonstrate the generality, flexibility, accuracy, and efficiency of the method.

  15. Integrating Cognitive Views into Psychometric Models for Reading Comprehension Assessment. Research Report. ETS RR-17-35

    ERIC Educational Resources Information Center

    Rahman, Taslima; Mislevy, Robert J.

    2017-01-01

    To demonstrate how methodologies for assessing reading comprehension can grow out of views of the construct suggested in the reading research literature, we constructed tasks and carried out psychometric analyses that were framed in accordance with 2 leading reading models. In estimating item difficulty and subsequently, examinee proficiency, an…

  16. A metabolite-centric view on flux distributions in genome-scale metabolic models

    PubMed Central

    2013-01-01

    Background Genome-scale metabolic models are important tools in systems biology. They permit the in-silico prediction of cellular phenotypes via mathematical optimisation procedures, most importantly flux balance analysis. Current studies on metabolic models mostly consider reaction fluxes in isolation. Based on a recently proposed metabolite-centric approach, we here describe a set of methods that enable the analysis and interpretation of flux distributions in an integrated metabolite-centric view. We demonstrate how this framework can be used for the refinement of genome-scale metabolic models. Results We applied the metabolite-centric view developed here to the most recent metabolic reconstruction of Escherichia coli. By compiling the balance sheets of a small number of currency metabolites, we were able to fully characterise the energy metabolism as predicted by the model and to identify a possibility for model refinement in NADPH metabolism. Selected branch points were examined in detail in order to demonstrate how a metabolite-centric view allows identifying functional roles of metabolites. Fructose 6-phosphate aldolase and the sedoheptulose bisphosphate bypass were identified as enzymatic reactions that can carry high fluxes in the model but are unlikely to exhibit significant activity in vivo. Performing a metabolite essentiality analysis, unconstrained import and export of iron ions could be identified as potentially problematic for the quality of model predictions. Conclusions The system-wide analysis of split ratios and branch points allows a much deeper insight into the metabolic network than reaction-centric analyses. Extending an earlier metabolite-centric approach, the methods introduced here establish an integrated metabolite-centric framework for the interpretation of flux distributions in genome-scale metabolic networks that can complement the classical reaction-centric framework. Analysing fluxes and their metabolic context simultaneously opens the door to systems biological interpretations that are not apparent from isolated reaction fluxes. Particularly powerful demonstrations of this are the analyses of the complete metabolic contexts of energy metabolism and the folate-dependent one-carbon pool presented in this work. Finally, a metabolite-centric view on flux distributions can guide the refinement of metabolic reconstructions for specific growth scenarios. PMID:23587327

  17. The Success of Linear Bootstrapping Models: Decision Domain-, Expertise-, and Criterion-Specific Meta-Analysis

    PubMed Central

    Kaufmann, Esther; Wittmann, Werner W.

    2016-01-01

    The success of bootstrapping or replacing a human judge with a model (e.g., an equation) has been demonstrated in Paul Meehl’s (1954) seminal work and bolstered by the results of several meta-analyses. To date, however, analyses considering different types of meta-analyses as well as the potential dependence of bootstrapping success on the decision domain, the level of expertise of the human judge, and the criterion for what constitutes an accurate decision have been missing from the literature. In this study, we addressed these research gaps by conducting a meta-analysis of lens model studies. We compared the results of a traditional (bare-bones) meta-analysis with findings of a meta-analysis of the success of bootstrap models corrected for various methodological artifacts. In line with previous studies, we found that bootstrapping was more successful than human judgment. Furthermore, bootstrapping was more successful in studies with an objective decision criterion than in studies with subjective or test score criteria. We did not find clear evidence that the success of bootstrapping depended on the decision domain (e.g., education or medicine) or on the judge’s level of expertise (novice or expert). Correction of methodological artifacts increased the estimated success of bootstrapping, suggesting that previous analyses without artifact correction (i.e., traditional meta-analyses) may have underestimated the value of bootstrapping models. PMID:27327085
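
    To make the idea of bootstrapping a judge concrete, the following sketch fits a linear model to a synthetic judge's ratings and compares judge and model against the criterion; the cues, weights, and noise levels are purely illustrative and are not drawn from the meta-analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic lens-model setup: three cues predict a criterion, and a
    # "judge" uses the same cues but with extra random inconsistency.
    n = 500
    cues = rng.normal(size=(n, 3))
    true_w = np.array([0.6, 0.3, 0.1])
    criterion = cues @ true_w + rng.normal(scale=0.5, size=n)
    judge = cues @ true_w + rng.normal(scale=1.0, size=n)  # noisier than the environment

    # Bootstrap the judge: regress judgments on cues, then use the fitted
    # model's predictions in place of the human judgments.
    coef, *_ = np.linalg.lstsq(cues, judge, rcond=None)
    model_judgments = cues @ coef

    r_judge = np.corrcoef(judge, criterion)[0, 1]
    r_model = np.corrcoef(model_judgments, criterion)[0, 1]
    print(f"judge achievement r = {r_judge:.2f}, bootstrapped model r = {r_model:.2f}")
    ```

    Because the regression averages out the judge's random inconsistency, the bootstrapped model typically correlates more strongly with the criterion than the judge does, which is the effect the meta-analysis quantifies.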

  18. Scripting MODFLOW model development using Python and FloPy

    USGS Publications Warehouse

    Bakker, Mark; Post, Vincent E. A.; Langevin, Christian D.; Hughes, Joseph D.; White, Jeremy; Starn, Jeffrey; Fienen, Michael N.

    2016-01-01

    Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy.
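
    In the spirit of the simple example the authors describe, a minimal sketch of building, running, and postprocessing a MODFLOW-2005 model with FloPy might look as follows (this assumes FloPy's classic flopy.modflow interface and an mf2005 executable on the system path; the grid dimensions and stresses are illustrative):

    ```python
    import numpy as np
    import flopy

    # Minimal steady-state MODFLOW-2005 model: a 10 x 10 single-layer grid
    # with constant heads along the left column and a pumping well near the centre.
    mf = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
    dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=10, ncol=10,
                                   delr=100.0, delc=100.0, top=10.0, botm=0.0)

    ibound = np.ones((1, 10, 10), dtype=int)
    ibound[0, :, 0] = -1                       # fixed heads on the left boundary
    bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=10.0)
    lpf = flopy.modflow.ModflowLpf(mf, hk=10.0)
    wel = flopy.modflow.ModflowWel(mf, stress_period_data={0: [[0, 4, 4, -100.0]]})
    oc = flopy.modflow.ModflowOc(mf)
    pcg = flopy.modflow.ModflowPcg(mf)

    mf.write_input()                           # write the MODFLOW input files
    success, buff = mf.run_model(silent=True)  # requires mf2005 on the system path

    # Postprocess: read the simulated heads back from the binary output file
    # (written as demo.hds in the working directory).
    if success:
        hds = flopy.utils.HeadFile("demo.hds")
        print(hds.get_data()[0])               # head array for the single layer
    ```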

  19. Improving phylogenetic analyses by incorporating additional information from genetic sequence databases.

    PubMed

    Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A

    2009-10-01

    Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.

  20. Site-Specific Analyses for Demonstrating Compliance with 10 CFR 61 Performance Objectives - 12179

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grossman, C.J.; Esh, D.W.; Yadav, P.

    2012-07-01

    The U.S. Nuclear Regulatory Commission (NRC) is proposing to amend its regulations at 10 CFR Part 61 to require low-level radioactive waste disposal facilities to conduct site-specific analyses to demonstrate compliance with the performance objectives in Subpart C. The amendments would require licensees to conduct site-specific analyses for protection of the public and inadvertent intruders as well as analyses for long-lived waste. The amendments would ensure protection of public health and safety, while providing flexibility to demonstrate compliance with the performance objectives, for current and potential future waste streams. NRC staff intends to submit proposed rule language and the associated regulatory basis to the Commission for its approval in early 2012. The NRC staff also intends to develop associated guidance to accompany any proposed amendments. The guidance is intended to supplement existing low-level radioactive waste guidance on issues pertinent to conducting site-specific analyses to demonstrate compliance with the performance objectives. The guidance will facilitate implementation of the proposed amendments by licensees and assist competent regulatory authorities in reviewing the site-specific analyses. Specifically, the guidance provides staff recommendations on general considerations for the site-specific analyses and on modeling issues for assessments to demonstrate compliance with the performance objectives, including the performance assessment, intruder assessment, stability assessment, and analyses for long-lived waste. This paper describes the technical basis for changes to the rule language and the proposed guidance associated with implementation of the rule language. The NRC staff, per Commission direction, intends to propose amendments to 10 CFR Part 61 to require licensees to conduct site-specific analyses to demonstrate compliance with performance objectives for the protection of public health and the environment. The amendments would require a performance assessment to demonstrate protection of the general population from releases of radioactivity, an assessment to demonstrate protection of a potential inadvertent intruder, and a long-term analysis to assess how the design of the facility considers the potential radiological impacts associated with disposal of long-lived waste streams. Concurrently, the NRC staff intends to propose associated guidance to facilitate the implementation of the requirements to conduct site-specific analyses. In proposing these amendments to the regulation and associated guidance, the NRC staff has conducted extensive public outreach since 2009, including three public meetings and four briefings of the NRC's Advisory Committee on Reactor Safeguards. The NRC staff plans to submit the proposed amendments to the regulations to the Commission in early 2012. Subsequently, the proposed amendments and associated guidance would be published in the Federal Register for public comment pending approval of the proposed amendments to the regulations by the Commission. Following the public comment period, NRC staff plans to address public comments and revise, as necessary, the regulations and associated guidance before publishing a final rule, which is anticipated in 2013. (authors)

  1. Constitutive modeling for isotropic materials (HOST)

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.; Cassenti, B. N.

    1985-01-01

    This report presents the results of the second year of work on a problem which is part of the NASA HOST Program. Its goals are: (1) to develop and validate unified constitutive models for isotropic materials, and (2) to demonstrate their usefulness for structural analyses of hot section components of gas turbine engines. The unified models selected for development and evaluation are those of Bodner-Partom and Walker. For model evaluation purposes, a large constitutive data base is generated for a B1900 + Hf alloy by performing uniaxial tensile, creep, cyclic, stress relaxation, and thermomechanical fatigue (TMF) tests as well as biaxial (tension/torsion) tests under proportional and nonproportional loading over a wide range of strain rates and temperatures. Systematic approaches for evaluating material constants from a small subset of the data base are developed. Correlations of the uniaxial and biaxial test data with the theories of Bodner-Partom and Walker are performed to establish the accuracy, range of applicability, and integrability of the models. Both models are implemented in the MARC finite element computer code and used for TMF analyses. Benchmark notched-round experiments are conducted and the results compared with finite-element analyses using the MARC code and the Walker model.

  2. Multidimensional Latent Markov Models in a Developmental Study of Inhibitory Control and Attentional Flexibility in Early Childhood

    ERIC Educational Resources Information Center

    Bartolucci, Francesco; Solis-Trapala, Ivonne L.

    2010-01-01

    We demonstrate the use of a multidimensional extension of the latent Markov model to analyse data from studies with repeated binary responses in developmental psychology. In particular, we consider an experiment based on a battery of tests which was administered to pre-school children, at three time periods, in order to measure their inhibitory…

  3. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  4. Capturing strain localization behind a geosynthetic-reinforced soil wall

    NASA Astrophysics Data System (ADS)

    Lai, Timothy Y.; Borja, Ronaldo I.; Duvernay, Blaise G.; Meehan, Richard L.

    2003-04-01

    This paper presents the results of finite element (FE) analyses of shear strain localization that occurred in cohesionless soils supported by a geosynthetic-reinforced retaining wall. The innovative aspects of the analyses include capturing of the localized deformation and the accompanying collapse mechanism using a recently developed embedded strong discontinuity model. The case study analysed, reported in previous publications, consists of a 3.5-m tall, full-scale reinforced wall model deforming in plane strain and loaded by surcharge at the surface to failure. Results of the analysis suggest strain localization developing from the toe of the wall and propagating upward to the ground surface, forming a curved failure surface. This is in agreement with a well-documented failure mechanism experienced by the physical wall model showing internal failure surfaces developing behind the wall as a result of the surface loading. Important features of the analyses include mesh sensitivity studies and a comparison of the localization properties predicted by different pre-localization constitutive models, including a family of three-invariant elastoplastic constitutive models appropriate for frictional/dilatant materials. Results of the analysis demonstrate the potential of the enhanced FE method for capturing a collapse mechanism characterized by the presence of a failure, or slip, surface through earthen materials.

  5. Demonstration optimization analyses of pumping from selected Arapahoe aquifer municipal wells in the west-central Denver Basin, Colorado, 2010–2109

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.

    2012-01-01

    Declining water levels caused by withdrawals of water from wells in the west-central part of the Denver Basin bedrock-aquifer system have raised concerns with respect to the ability of the aquifer system to sustain production. The Arapahoe aquifer in particular is heavily used in this area. Two optimization analyses were conducted to demonstrate approaches that could be used to evaluate possible future pumping scenarios intended to prolong the productivity of the aquifer and to delay excessive loss of saturated thickness. These analyses were designed as demonstrations only, and were not intended as a comprehensive optimization study. Optimization analyses were based on a groundwater-flow model of the Denver Basin developed as part of a recently published U.S. Geological Survey groundwater-availability study. For each analysis an optimization problem was set up to maximize total withdrawal rate, subject to withdrawal-rate and hydraulic-head constraints, for 119 selected municipal water-supply wells located in 96 model cells. The optimization analyses were based on 50- and 100-year simulations of groundwater withdrawals. The optimized total withdrawal rate for all selected wells for a 50-year simulation time was about 58.8 cubic feet per second. For an analysis in which the simulation time and head-constraint time were extended to 100 years, the optimized total withdrawal rate for all selected wells was about 53.0 cubic feet per second, demonstrating that a reduction in withdrawal rate of about 10 percent may extend the time before the hydraulic-head constraints are violated by 50 years, provided that pumping rates are optimally distributed. Analysis of simulation results showed that initially, the pumping produces water primarily by release of water from storage in the Arapahoe aquifer. However, because confining layers between the Denver and Arapahoe aquifers are thin, in less than 5 years, most of the water removed by managed-flows pumping likely would be supplied by depleting overlying hydrogeologic units, substantially increasing the rate of decline of hydraulic heads in parts of the overlying Denver aquifer.

  6. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.

  7. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  8. A physically-based continuum damage mechanics model for numerical prediction of damage growth in laminated composite plates

    NASA Astrophysics Data System (ADS)

    Williams, Kevin Vaughan

    Rapid growth in use of composite materials in structural applications drives the need for a more detailed understanding of damage tolerant and damage resistant design. Current analytical techniques provide sufficient understanding and predictive capabilities for application in preliminary design, but current numerical models applicable to composites are few and far between and their development into well tested, rigorous material models is currently one of the most challenging fields in composite materials. The present work focuses on the development, implementation, and verification of a plane-stress continuum damage mechanics based model for composite materials. A physical treatment of damage growth based on the extensive body of experimental literature on the subject is combined with the mathematical rigour of a continuum damage mechanics description to form the foundation of the model. The model has been implemented in the LS-DYNA3D commercial finite element hydrocode and the results of the application of the model are shown to be physically meaningful and accurate. Furthermore it is demonstrated that the material characterization parameters can be extracted from the results of standard test methodologies for which a large body of published data already exists for many materials. Two case studies are undertaken to verify the model by comparison with measured experimental data. The first series of analyses demonstrate the ability of the model to predict the extent and growth of damage in T800/3900-2 carbon fibre reinforced polymer (CFRP) plates subjected to normal impacts over a range of impact energy levels. The predicted force-time and force-displacement response of the panels compare well with experimental measurements. The damage growth and stiffness reduction properties of the T800/3900-2 CFRP are derived using published data from a variety of sources without the need for parametric studies. To further demonstrate the physical nature of the model, a IM6/937 CFRP with a more brittle matrix system than 3900-2 is also analysed. Results of analyses performed under the same impact conditions do not compare as well quantitatively with measurements but the results are still promising and qualitative differences between the T800/3900-2 and IM6/937 are accurately captured. Finally, to further demonstrate the capability of the model, the response of a notched CFRP plate under quasi-static tensile loading is simulated and compared to experimental measurements. Of particular significance is the fact that the experimental test modelled in this case is uniquely suited to the characterization of the strain softening phenomenon observed in FRP laminates. Results of this virtual experiment compare very favourably with the measured damage growth and force-displacement curves.

  9. Control of clustered action potential firing in a mathematical model of entorhinal cortex stellate cells.

    PubMed

    Tait, Luke; Wedgwood, Kyle; Tsaneva-Atanasova, Krasimira; Brown, Jon T; Goodfellow, Marc

    2018-07-14

    The entorhinal cortex is a crucial component of our memory and spatial navigation systems and is one of the first areas to be affected in dementias featuring tau pathology, such as Alzheimer's disease and frontotemporal dementia. Electrophysiological recordings from principal cells of medial entorhinal cortex (layer II stellate cells, mEC-SCs) demonstrate a number of key identifying properties, including subthreshold oscillations in the theta (4-12 Hz) range and clustered action potential firing. These single cell properties are correlated with network activity such as grid firing and coupling between theta and gamma rhythms, suggesting they are important for spatial memory. As such, experimental models of dementia have revealed disruption of organised dorsoventral gradients in clustered action potential firing. To better understand the mechanisms underpinning these different dynamics, we study a conductance-based model of mEC-SCs. We demonstrate that the model, driven by extrinsic noise, can capture quantitative differences in clustered action potential firing patterns recorded from experimental models of tau pathology and healthy animals. The differential equation formulation of our model allows us to perform numerical bifurcation analyses in order to uncover the dynamic mechanisms underlying these patterns. We show that clustered dynamics can be understood as subcritical Hopf/homoclinic bursting in a fast-slow system where the slow sub-system is governed by activation of the persistent sodium current and inactivation of the slow A-type potassium current. In the full system, we demonstrate that clustered firing arises via flip bifurcations as conductance parameters are varied. Our model analyses confirm the experimentally suggested hypothesis that the breakdown of clustered dynamics in disease occurs via increases in AHP conductance. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. A Surprising Effect of Feedback on Learning

    ERIC Educational Resources Information Center

    Vollmeyer, Regina; Rheinberg, Falko

    2005-01-01

    As meta-analyses demonstrate feedback effects on performance, our study examined possible mediators. Based on our cognitive-motivational model [Vollmeyer, R., & Rheinberg, F. (1998). Motivationale Einflusse auf Erwerb und Anwendung von Wissen in einem computersimulierten System [Motivational influences on the acquisition and application of…

  11. Fires in storages of LFO: Analysis of hazard of structural collapse of steel-aluminium containers.

    PubMed

    Rebec, A; Kolšek, J; Plešec, P

    2016-04-05

    Pool fires of light fuel oil (LFO) in above-ground storages with steel-aluminium containers are discussed. A model is developed for assessing the risk of between-tank fire spread. Radiative effects of the flame body are accounted for by a solid flame radiation model. Thermal profiles that develop in the adjacent tanks due to fire, and the consequent structural response, are pursued in an exact (materially and geometrically non-linear) manner. The model's derivation is demonstrated on the LFO tank storage located near the Port of Koper (Slovenia). In support of the model, data from the literature are adopted where appropriate. Analytical expressions are derived correspondingly for calculations of the emissive characteristics of LFO pool fires. Additional data are collected from experiments. Fire experiments conducted on 300 cm diameter LFO pans at different wind speeds, and high-temperature uniaxial tension tests of the analysed aluminium alloy types 3xxx and 6xxx, are presented. The model is of immediate practical value for fire engineering (risk analyses) and can be used for further research purposes (e.g., sensitivity and parametric studies). The latter use is demonstrated in the final part of the paper, which discusses possible effects of high-temperature creep of 3xxx aluminium. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobromir Panayotov; Andrew Grief; Brad J. Merrill

    'Fusion for Energy' (F4E) develops, designs, and implements the European Test Blanket Systems (TBS) in ITER - Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses was established under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials, and phenomena, and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, we use phenomena identification and ranking tables to identify the requirements to be met by the code(s) and TBS models. The limitations of the codes are thus identified, and possible solutions to be built into the models are proposed. These include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. The code selection and the issue of the accident analysis specifications conclude this second step. The breeding blanket and ancillary system models are then built. In this work, challenges met and solutions used in the development of both MELCOR and RELAP5 code models of the HCLL and HCPB TBSs are shared. Next, the developed models are qualified by comparison with finite element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as the methodology's application to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.

  13. Time to angiographic reperfusion in acute ischemic stroke: decision analysis.

    PubMed

    Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H

    2014-12-01

    Our objective was to use decision analytic modeling to compare 2 treatment strategies of intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment to intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when probability of reperfusion is high and time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions. © 2014 American Heart Association, Inc.
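
    As a stylized illustration of the one-way sensitivity analysis described above, the sketch below assumes, purely hypothetically, that the expected utility of endovascular therapy declines linearly from the reported base case (6.38 QALYs) to the reported indifference point at 347 minutes (5.42 QALYs, the IV-only value); the base-case time of 120 minutes is invented, and this is not the authors' actual decision model:

    ```python
    # Stylized one-way sensitivity sketch over time to angiographic reperfusion.
    IV_ONLY_QALY = 5.42    # reported expected utility, IV r-tPA alone
    ENDO_BASE_QALY = 6.38  # reported base-case expected utility, endovascular
    BASE_TIME_MIN = 120    # hypothetical base-case time to reperfusion
    THRESHOLD_MIN = 347    # reported point of indifference

    # Hypothetical linear decline anchored to the two reported points.
    slope = (IV_ONLY_QALY - ENDO_BASE_QALY) / (THRESHOLD_MIN - BASE_TIME_MIN)

    def endovascular_qaly(time_min: float) -> float:
        """Assumed linear utility of endovascular therapy vs. reperfusion time."""
        return ENDO_BASE_QALY + slope * (time_min - BASE_TIME_MIN)

    for t in (120, 240, 347, 400):
        eu = endovascular_qaly(t)
        if abs(eu - IV_ONLY_QALY) < 1e-9:
            pref = "indifferent"
        elif eu > IV_ONLY_QALY:
            pref = "endovascular"
        else:
            pref = "IV r-tPA only"
        print(f"t = {t:3d} min: endovascular EU = {eu:.2f} QALYs -> {pref}")
    ```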

  14. Distinguishing State Variability From Trait Change in Longitudinal Data: The Role of Measurement (Non)Invariance in Latent State-Trait Analyses

    PubMed Central

    Geiser, Christian; Keller, Brian T.; Lockhart, Ginger; Eid, Michael; Cole, David A.; Koch, Tobias

    2014-01-01

    Researchers analyzing longitudinal data often want to find out whether the process they study is characterized by (1) short-term state variability, (2) long-term trait change, or (3) a combination of state variability and trait change. Classical latent state-trait (LST) models are designed to measure reversible state variability around a fixed set-point or trait, whereas latent growth curve (LGC) models focus on long-lasting and often irreversible trait changes. In the present paper, we contrast LST and LGC models from the perspective of measurement invariance (MI) testing. We show that establishing a pure state-variability process requires (a) the inclusion of a mean structure and (b) establishing strong factorial invariance in LST analyses. Analytical derivations and simulations demonstrate that LST models with non-invariant parameters can mask the fact that a trait-change or hybrid process has generated the data. Furthermore, the inappropriate application of LST models to trait change or hybrid data can lead to bias in the estimates of consistency and occasion-specificity, which are typically of key interest in LST analyses. Four tips for the proper application of LST models are provided. PMID:24652650

  15. RAM simulation model for SPH/RSV systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Primm, A.H.; Nelson, S.C.

    1995-12-31

    The US Army's Project Manager, Crusader is sponsoring the development of technologies that apply to the Self-Propelled Howitzer (SPH), formerly the Advanced Field Artillery System (AFAS), and Resupply Vehicle (RSV), formerly the Future Armored Resupply Vehicle (FARV), weapon system. Oak Ridge National Laboratory (ORNL) is currently performing developmental work in support of the SPH/RSV Crusader system. Supportive analyses of reliability, availability, and maintainability (RAM) aspects were also performed for the SPH/RSV effort. During FY 1994 and FY 1995 ORNL conducted a feasibility study to demonstrate the application of simulation modeling for RAM analysis of the Crusader system. Following completion of the feasibility study, a full-scale RAM simulation model of the Crusader system was developed for both the SPH and RSV. This report provides documentation for the simulation model as well as instructions for the proper execution and use of the model in conducting RAM analyses.

  16. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses, such as linear models (simple linear regression, multiple linear regression, one-way and two-way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), with the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.

  17. The evolution of risk and bailout strategy in banking systems

    NASA Astrophysics Data System (ADS)

    De Caux, Robert; McGroarty, Frank; Brede, Markus

    2017-02-01

    In this paper we analyse the long-term costs and benefits of bailout strategies in models of networked banking systems. Unlike much of the current literature on financial contagion that focuses on systemic risk at one point in time, we consider adaptive banks that adjust risk taking in response to internal system dynamics and regulatory intervention, allowing us to analyse the potentially crucial moral hazard aspect associated with frequent bailouts. We demonstrate that whereas bailout generally serves as an effective tool to limit the size of bankruptcy cascades in the short term, inappropriate intervention strategies can encourage risk-taking and thus be inefficient and detrimental to long term system stability. We analyse points of long-term optimal bailout and discuss their dependence on the structure of the banking network. In the second part of the paper, we demonstrate that bailout efficiency can be improved by taking into account information about the topology of and risk allocation on the banking network, and demonstrate that finely tuned intervention strategies aimed at bailing out banks in configurations with some degree of anti-correlated risk have superior performance. These results demonstrate that a suitable intervention policy may be a useful tool for driving the banking system towards a more robust structure.

  18. Quantum behaviour of pumped and damped triangular Bose-Hubbard systems

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2017-12-01

    We propose and analyse analogs of optical cavities for atoms using three-well Bose-Hubbard models with pumping and losses. We consider triangular configurations. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a quantitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, preserving good coherence between the wells in the steady-state. We find quadrature squeezing and mode entanglement for some parameter regimes and demonstrate that the trimer with pumping and damping at the same well is the stronger option for producing non-classical states. Due to recent experimental advances, it should be possible to demonstrate the effects we investigate and predict.

  19. Integrating fire behavior models and geospatial analysis for wildland fire risk assessment and fuel management planning

    Treesearch

    Alan A. Ager; Nicole M. Vaillant; Mark A. Finney

    2011-01-01

    Wildland fire risk assessment and fuel management planning on federal lands in the US are complex problems that require state-of-the-art fire behavior modeling and intensive geospatial analyses. Fuel management is a particularly complicated process where the benefits and potential impacts of fuel treatments must be demonstrated in the context of land management goals...

  20. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    PubMed

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.

  1. Collagen Triple Helix Repeat Containing-1 (CTHRC1) Expression in Oral Squamous Cell Carcinoma (OSCC): Prognostic Value and Clinico-Pathological Implications

    PubMed Central

    Lee, Chia Ee; Vincent-Chong, Vui King; Ramanathan, Anand; Kallarakkal, Thomas George; Karen-Ng, Lee Peng; Ghani, Wan Maria Nabillah; Rahman, Zainal Ariff Abdul; Ismail, Siti Mazlipah; Abraham, Mannil Thomas; Tay, Keng Kiong; Mustafa, Wan Mahadzir Wan; Cheong, Sok Ching; Zain, Rosnah Binti

    2015-01-01

    BACKGROUND: Collagen Triple Helix Repeat Containing 1 (CTHRC1) is a protein often found to be over-expressed in various types of human cancers. However, the correlation between CTHRC1 expression level and clinico-pathological characteristics and prognosis in oral cancer remains unclear. Therefore, this study aimed to determine the mRNA and protein expression of CTHRC1 in oral squamous cell carcinoma (OSCC) and to evaluate the clinical and prognostic impact of CTHRC1 in OSCC. METHODS: In this study, mRNA and protein expression of CTHRC1 in OSCCs were determined by quantitative PCR and immunohistochemistry, respectively. The associations between CTHRC1 and clinico-pathological parameters were evaluated by univariate and multivariate binary logistic regression analyses. Correlations between CTHRC1 protein expression and survival were analysed using Kaplan-Meier and Cox regression models. RESULTS: The current study demonstrated that CTHRC1 was significantly over-expressed at the mRNA level in OSCC. Univariate analyses indicated that high expression of CTHRC1 was significantly associated with advanced pTNM stage, tumour size ≥ 4 cm, and positive lymph node metastasis (LNM). However, only positive LNM remained significant after adjusting for other confounding factors in multivariate logistic regression analyses. Kaplan-Meier survival analyses and the Cox model demonstrated that high expression of CTHRC1 protein was associated with poor prognosis and is an independent prognostic factor in OSCC. CONCLUSION: This study indicated that over-expression of CTHRC1 is potentially an independent predictor of positive LNM and poor prognosis in OSCC. PMID:26664254

  2. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  3. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    PubMed Central

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
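
    For readers who want to experiment with the underlying fitting approach, a CPU-side sketch of Levenberg-Marquardt model fitting can be written with SciPy (illustrative only: this uses scipy.optimize.least_squares rather than the GPU-LMFit API, and the single-exponential decay below merely stands in for a per-pixel fluorescence-lifetime model):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Simulate a noisy single-exponential decay, as in fluorescence lifetime
    # imaging, then recover amplitude and lifetime by Levenberg-Marquardt.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 200)
    true_amp, true_tau = 2.0, 1.5
    y = true_amp * np.exp(-t / true_tau) + rng.normal(scale=0.05, size=t.size)

    def residuals(p):
        amp, tau = p
        return amp * np.exp(-t / tau) - y

    fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
    print("estimated amplitude, lifetime:", fit.x)
    ```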

  4. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    PubMed

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

    Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome-underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
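
    A minimal sketch of the product-of-coefficients mediation analysis with an AFT outcome model is shown below, using the lifelines package's WeibullAFTFitter in place of SAS LIFEREG (synthetic data; the variable names and effect sizes are hypothetical, and censoring is omitted for brevity):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from lifelines import WeibullAFTFitter

    # Synthetic data: treatment -> mediator -> (log) survival time.
    rng = np.random.default_rng(2)
    n = 1000
    treatment = rng.integers(0, 2, size=n)
    mediator = 0.5 * treatment + rng.normal(size=n)            # a-path
    accel = np.exp(1.0 + 0.4 * mediator + 0.1 * treatment)     # AFT acceleration
    time = accel * rng.exponential(size=n)                     # exponential AFT times
    event = np.ones(n)                                         # no censoring here
    df = pd.DataFrame({"treatment": treatment, "mediator": mediator,
                       "time": time, "event": event})

    # a-path: mediator regressed on treatment.
    a = sm.OLS(df["mediator"], sm.add_constant(df["treatment"])).fit().params["treatment"]

    # b-path: AFT model of log survival time on mediator and treatment.
    aft = WeibullAFTFitter().fit(df, duration_col="time", event_col="event")
    b = aft.params_.loc[("lambda_", "mediator")]

    print(f"indirect (mediated) effect on log survival time: a*b = {a * b:.3f}")
    ```

    Because AFT coefficients act on log survival time, the a*b product has a direct causal-effect interpretation, which is the property the authors highlight when contrasting AFT with proportional hazards approaches.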

  5. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  6. Scripting MODFLOW Model Development Using Python and FloPy.

    PubMed

    Bakker, M; Post, V; Langevin, C D; Hughes, J D; White, J T; Starn, J J; Fienen, M N

    2016-09-01

    Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy. © 2016, National Ground Water Association.

  7. Fossil biogeography: a new model to infer dispersal, extinction and sampling from palaeontological data.

    PubMed

    Silvestro, Daniele; Zizka, Alexander; Bacon, Christine D; Cascales-Miñana, Borja; Salamin, Nicolas; Antonelli, Alexandre

    2016-04-05

    Methods in historical biogeography have revolutionized our ability to infer the evolution of ancestral geographical ranges from phylogenies of extant taxa, the rates of dispersals, and biotic connectivity among areas. However, extant taxa are likely to provide limited and potentially biased information about past biogeographic processes, due to extinction, asymmetrical dispersals and variable connectivity among areas. Fossil data hold considerable information about past distribution of lineages, but suffer from largely incomplete sampling. Here we present a new dispersal-extinction-sampling (DES) model, which estimates biogeographic parameters using fossil occurrences instead of phylogenetic trees. The model estimates dispersal and extinction rates while explicitly accounting for the incompleteness of the fossil record. Rates can vary between areas and through time, thus providing the opportunity to assess complex scenarios of biogeographic evolution. We implement the DES model in a Bayesian framework and demonstrate through simulations that it can accurately infer all the relevant parameters. We demonstrate the use of our model by analysing the Cenozoic fossil record of land plants and inferring dispersal and extinction rates across Eurasia and North America. Our results show that biogeographic range evolution is not a time-homogeneous process, as assumed in most phylogenetic analyses, but varies through time and between areas. In our empirical assessment, this is shown by the striking predominance of plant dispersals from Eurasia into North America during the Eocene climatic cooling, followed by a shift in the opposite direction, and finally, a balance in biotic interchange since the middle Miocene. We conclude by discussing the potential of fossil-based analyses to test biogeographic hypotheses and improve phylogenetic methods in historical biogeography. © 2016 The Author(s).

  8. Practical Effects of Classwide Mathematics Intervention

    ERIC Educational Resources Information Center

    VanDerHeyden, Amanda M.; Codding, Robin S.

    2015-01-01

    The current article presents additional analyses of a classwide mathematics intervention, from a previously reported randomized controlled trial, to offer new information about the treatment and to demonstrate the utility of different types of effect sizes. Multilevel modeling was used to examine treatment effects by race, sex, socioeconomic…

  9. Systematic Review of Health Economic Analyses of Measles and Rubella Immunization Interventions.

    PubMed

    Thompson, Kimberly M; Odahowski, Cassie L

    2016-07-01

    Economic analyses for vaccine-preventable diseases provide important insights about the value of prevention. We reviewed the literature to identify all of the peer-reviewed, published economic analyses of interventions related to measles and rubella immunization options to assess the different types of analyses performed and characterize key insights. We searched PubMed, the Science Citation Index, and references from relevant articles for studies in English and found 67 analyses that reported primary data and quantitative estimates of benefit-cost or cost-effectiveness analyses for measles and/or rubella immunization interventions. We removed studies that we characterized as cost-minimization analyses from this sample because they generally provide insights that focused on more optimal strategies to achieve the same health outcome. The 67 analyses we included demonstrate the large economic benefits associated with preventing measles and rubella infections using vaccines and the benefit of combining measles and rubella antigens into a formulation that saves the costs associated with injecting the vaccines separately. Despite the importance of population immunity and dynamic viral transmission, most of the analyses used static models to estimate cases prevented and characterize benefits, although the use of dynamic models continues to increase. Many of the analyses focused on characterizing the most significant adverse outcomes (e.g., mortality for measles, congenital rubella syndrome for rubella) and/or only direct costs, and the most complete analyses present data from high-income countries. © 2014 Society for Risk Analysis.

  10. Influence of yield surface curvature on the macroscopic yielding and ductile failure of isotropic porous plastic materials

    NASA Astrophysics Data System (ADS)

    Dæhli, Lars Edvard Bryhni; Morin, David; Børvik, Tore; Hopperstad, Odd Sture

    2017-10-01

    Numerical unit cell models of an approximate representative volume element for a porous ductile solid are utilized to investigate differences in the mechanical response between a quadratic and a non-quadratic matrix yield surface. A Hershey equivalent stress measure with two distinct values of the yield surface exponent is employed as the matrix description. Results from the unit cell calculations are further used to calibrate a heuristic extension of the Gurson model which incorporates effects of the third deviatoric stress invariant. An assessment of the porous plasticity model reveals its ability to describe the unit cell response to some extent, although it underestimates the effect of the Lode parameter at the lower triaxiality ratios imposed in this study when compared to unit cell simulations. Ductile failure predictions by means of finite element simulations using a unit cell model that resembles an imperfection band are then conducted to examine how the non-quadratic matrix yield surface influences the failure strain as compared to the quadratic matrix yield surface. Further, strain localization predictions based on bifurcation analyses and imperfection band analyses are undertaken using the calibrated porous plasticity model. These simulations are then compared to the unit cell calculations in order to elucidate the differences between the various modelling strategies. The current study reveals that strain localization analyses using an imperfection band model and a spatially discretized unit cell are in reasonable agreement, while the bifurcation analyses predict higher strain levels at localization. Imperfection band analyses are finally used to calculate failure loci for the quadratic and the non-quadratic matrix yield surface under a wide range of loading conditions. The underlying matrix yield surface is demonstrated to have a pronounced influence on the onset of strain localization.

  11. Multidisciplinary Approach to Aerospike Nozzle Design

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against separate aerodynamic and structural optimized designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  12. Supporting BPMN choreography with system integration artefacts for enterprise process collaboration

    NASA Astrophysics Data System (ADS)

    Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2014-07-01

    Business Process Model and Notation (BPMN) choreography modelling depicts externally visible message exchanges between collaborating processes of enterprise information systems. Implementation of choreography relies on designing system integration solutions to realise message exchanges between independently developed systems. Enterprise integration patterns (EIPs) are widely accepted artefacts to design integration solutions. If the choreography model represents coordination requirements between processes with behaviour mismatches, the integration designer needs to analyse the routing requirements and address these requirements by manually designing EIP message routers. As collaboration scales and complexity increases, manual design becomes inefficient. Thus, the research problem of this paper is to explore a method to automatically identify routing requirements from BPMN choreography model and to accordingly design routing in the integration solution. To achieve this goal, recurring behaviour mismatch scenarios are analysed as patterns, and corresponding solutions are proposed as EIP routers. Using this method, a choreography model can be analysed by computer to identify occurrences of mismatch patterns, leading to corresponding router selection. A case study demonstrates that the proposed method enables computer-assisted integration design to implement choreography. A further experiment reveals that the method is effective to improve the design quality and reduce time cost.

  13. Wide-Field Imaging System and Rapid Direction of Optical Zoom (WOZ)

    DTIC Science & Technology

    2011-03-25

    COMSOL Multiphysics, and ZEMAX optical design. The multiphysics design tool is nearing completion. We have demonstrated the ability to create a model in...and mechanical modeling to calculate the deformation resulting from the applied voltages. Finally, the deformed surface can be exported to ZEMAX via...MatLab. From ZEMAX, various analyses can be conducted to determine important parameters such as focal point, aberrations, and wavefront distortion.

  14. Modeling time-series count data: the unique challenges facing political communication studies.

    PubMed

    Fogarty, Brian J; Monogan, James E

    2014-05-01

    This paper demonstrates the importance of proper model specification when analyzing time-series count data in political communication studies. It is common for scholars of media and politics to investigate counts of coverage of an issue as it evolves over time. Many scholars rightly consider the issues of time dependence and dynamic causality to be the most important when crafting a model. However, to ignore the count features of the outcome variable overlooks an important feature of the data. This is particularly the case when modeling data with a low number of counts. In this paper, we argue that the Poisson autoregressive model (Brandt and Williams, 2001) accurately meets the needs of many media studies. We replicate the analyses of Flemming et al. (1997), Peake and Eshbaugh-Soha (2008), and Ura (2009) and demonstrate that models missing some of the assumptions of the Poisson autoregressive model often yield invalid inferences. We also demonstrate that the effect of any of these models can be illustrated dynamically with estimates of uncertainty through a simulation procedure. The paper concludes with implications of these findings for the practical researcher. Copyright © 2013 Elsevier Inc. All rights reserved.
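
    The Poisson autoregressive model of Brandt and Williams requires specialized estimation code; as a simplified stand-in, the sketch below fits a Poisson GLM with a lagged count term, which captures the count nature of the outcome and a crude form of the time dependence the authors emphasize (data are invented):

        import pandas as pd
        import statsmodels.api as sm

        # Hypothetical monthly counts of media coverage of an issue.
        df = pd.DataFrame({"count": [3, 5, 2, 8, 6, 4, 7, 9, 5, 6, 10, 8]})
        df["lag1"] = df["count"].shift(1)  # previous month's count
        df = df.dropna()

        # Poisson GLM with a lagged count as a rough dynamic term; the full
        # PAR(p) model treats the dynamics more carefully than this sketch.
        X = sm.add_constant(df[["lag1"]])
        model = sm.GLM(df["count"], X, family=sm.families.Poisson()).fit()
        print(model.summary())

    Fitting an ordinary least-squares model to the same series would ignore the count nature of the outcome, which is the invalid-inference risk the paper documents for low-count data.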

  15. Integration of Engine, Plume, and CFD Analyses in Conceptual Design of Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Li, Wu; Campbell, Richard; Geiselhart, Karl; Shields, Elwood; Nayani, Sudheer; Shenoy, Rajiv

    2009-01-01

    This paper documents an integration of engine, plume, and computational fluid dynamics (CFD) analyses in the conceptual design of low-boom supersonic aircraft, using a variable fidelity approach. In particular, the Numerical Propulsion Simulation System (NPSS) is used for propulsion system cycle analysis and nacelle outer mold line definition, and a low-fidelity plume model is developed for plume shape prediction based on NPSS engine data and nacelle geometry. This model provides a capability for the conceptual design of low-boom supersonic aircraft that accounts for plume effects. Then a newly developed process for automated CFD analysis is presented for CFD-based plume and boom analyses of the conceptual geometry. Five test cases are used to demonstrate the integrated engine, plume, and CFD analysis process based on a variable fidelity approach, as well as the feasibility of the automated CFD plume and boom analysis capability.

  16. Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling

    PubMed Central

    Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.

    2010-01-01

    Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
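
    The article demonstrates the model in SAS and SPSS; an analogous sketch in Python with statsmodels, where a random family intercept induces the intrafamilial correlation (file and column names are hypothetical):

        import pandas as pd
        import statsmodels.formula.api as smf

        # Long-format dyad data: one row per parent, two rows per family.
        data = pd.read_csv("dyads.csv")  # columns: family_id, role, adaptation

        # A random intercept per family accounts for IFC; the IFC itself is
        # the random-intercept variance divided by total variance.
        model = smf.mixedlm("adaptation ~ role", data, groups=data["family_id"])
        result = model.fit()
        print(result.summary())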

  17. An approach to investigating linkage for bipolar disorder using large Costa Rican pedigrees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freimer, N.B.; Reus, V.I.; Vinogradov, S.

    1996-05-31

    Despite the evidence that major gene effects exist for bipolar disorder (BP), efforts to map BP loci have so far been unsuccessful. A strategy for mapping BP loci is described, focused on investigation of large pedigrees from a genetically homogeneous population, that of Costa Rica. This approach is based on the use of a conservative definition of the BP phenotype in preparation for whole genome screening with polymorphic markers. Linkage simulation analyses are utilized to indicate the probability of detecting evidence suggestive of linkage, using these pedigrees. These analyses are performed under a series of single locus models, ranging from recessive to nearly dominant, utilizing both lod score and affected pedigree member analyses. Additional calculations demonstrate that with any of the models employed, most of the information for linkage derives from affected rather than unaffected individuals. 26 refs., 2 figs., 5 tabs.

  18. Seeking heavy Higgs bosons through cascade decays

    NASA Astrophysics Data System (ADS)

    Coleppa, Baradhwaj; Fuks, Benjamin; Poulose, P.; Sahoo, Shibananda

    2018-04-01

    We investigate the LHC discovery prospects for a heavy Higgs boson decaying into the standard model Higgs boson and additional weak bosons. We consider a generic model-independent new physics configuration where this decay proceeds via a cascade involving other intermediate scalar bosons and focus on an LHC final-state signature comprising either four b-jets and two charged leptons or four charged leptons and two b-jets. We design two analyses of the corresponding signals, and demonstrate that a 5σ discovery at the 14 TeV LHC is possible for various combinations of the parent and daughter Higgs-boson masses. We moreover find that the standard model backgrounds can be sufficiently rejected to guarantee the reconstruction of the parent Higgs boson mass. We apply our analyses to the Type-II two-Higgs-doublet model and identify the regions of the parameter space to which the LHC is sensitive.

  19. Recent Progress on Labfit: a Multispectrum Analysis Program for Fitting Lineshapes Including the Htp Model and Temperature Dependence

    NASA Astrophysics Data System (ADS)

    Cich, Matthew J.; Guillaume, Alexandre; Drouin, Brian; Benner, D. Chris

    2017-06-01

    Multispectrum analysis can be a challenge for a variety of reasons. It can be computationally intensive to fit a proper line shape model, especially for high-resolution experimental data. Band-wide analyses, including many transitions along with interactions across many pressures and temperatures, are essential to accurately model, for example, atmospherically relevant systems. Labfit is a fast multispectrum analysis program originally developed by D. Chris Benner with a text-based interface. More recently, a graphical user interface was developed at JPL with the goal of increasing both the ease of use and the number of potential users. The HTP lineshape model has been added to Labfit, keeping it up to date with community standards. Recent analyses using Labfit will be shown to demonstrate its ability to competently handle large experimental datasets, including high-order lineshape effects, that are otherwise unmanageable.

  20. Representing distributed cognition in complex systems: how a submarine returns to periscope depth.

    PubMed

    Stanton, Neville A

    2014-01-01

    This paper presents the Event Analysis of Systemic Teamwork (EAST) method as a means of modelling distributed cognition in systems. The method comprises three network models (i.e. task, social and information) and their combination. This method was applied to the interactions between the sound room and control room in a submarine, following the activities of returning the submarine to periscope depth. This paper demonstrates three main developments in EAST. First, building the network models directly, without reference to the intervening methods. Second, the application of analysis metrics to all three networks. Third, the combination of the aforementioned networks in different ways to gain a broader understanding of the distributed cognition. Analyses have shown that EAST can be used to gain both qualitative and quantitative insights into distributed cognition. Future research should focus on the analyses of network resilience and modelling alternative versions of a system.
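
    The analysis metrics applied to the three EAST networks are standard graph-theoretic measures; a minimal sketch of this kind of computation in Python (node names are invented for illustration, not taken from the submarine study):

        import networkx as nx

        # Toy social network: who communicates with whom during the task.
        g = nx.Graph()
        g.add_edges_from([("sonar_op", "sonar_ctrl"),
                          ("sonar_ctrl", "ops_officer"),
                          ("ops_officer", "helm"),
                          ("ops_officer", "captain")])

        print(nx.density(g))                  # overall interconnectedness
        print(nx.degree_centrality(g))        # key agents in the network
        print(nx.betweenness_centrality(g))   # potential communication bottlenecks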

  1. The teamwork in assertive community treatment (TACT) scale: development and validation.

    PubMed

    Wholey, Douglas R; Zhu, Xi; Knoke, David; Shah, Pri; Zellmer-Bruhn, Mary; Witheridge, Thomas F

    2012-11-01

    Team design is meticulously specified for assertive community treatment (ACT) teams, yet performance can vary across ACT teams, even those with high fidelity. By developing and validating the Teamwork in Assertive Community Treatment (TACT) scale, investigators examined the role of team processes in ACT performance. The TACT scale measuring ACT teamwork was developed from a conceptual model grounded in organizational research and adapted for the ACT and mental health context. TACT subscales were constructed after exploratory and confirmatory factor analyses. The reliability, discriminant validity, predictive validity, temporal stability, internal consistency, and within-team agreement were established with surveys from approximately 300 members of 26 Minnesota ACT teams who completed the questionnaire three times, at six-month intervals. Nine TACT subscales emerged from the analyses: exploration, exploitation of new and existing knowledge, psychological safety, goal agreement, conflict, constructive controversy, information accessibility, encounter preparedness, and consumer-centered care. These nine subscales demonstrated fit and temporal stability (confirmatory factor analysis), high internal consistency (Cronbach's alpha), and within-team agreement and between-team differences (rwg and intraclass correlations). Correlational analyses of the subscales revealed that they measure related yet distinctive aspects of ACT team processes, and regression analyses demonstrated predictive validity (encounter preparedness is related to staff outcomes). The TACT scale demonstrated high reliability and validity and can be included in research and evaluation of teamwork in ACT and mental health teams.
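
    One of the reliability checks named above, Cronbach's alpha, is short enough to compute directly from item scores (the generic formula, not the authors' code):

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: 2-D array, rows = respondents, columns = subscale items."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        # Example: 5 respondents x 3 items of a hypothetical TACT subscale.
        scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
        print(cronbach_alpha(scores))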

  2. Ratio index variables or ANCOVA? Fisher's cats revisited.

    PubMed

    Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S

    2010-01-01

    Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.
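
    The pitfall is easy to reproduce: two independent variables become correlated once both are divided by a shared, noisy denominator. A minimal simulation (illustrative only):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000

        x = rng.normal(100, 10, n)   # independent numerators
        y = rng.normal(100, 10, n)
        z = rng.normal(50, 10, n)    # shared denominator with random variation

        print(np.corrcoef(x, y)[0, 1])          # ~0: x and y are independent
        print(np.corrcoef(x / z, y / z)[0, 1])  # clearly positive: spurious

    The second correlation is induced entirely by z appearing in both ratios, which is the mechanism behind the contradictory conclusions the article warns about.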

  3. Characterizing Discourse Deficits Following Penetrating Head Injury: A Preliminary Model

    ERIC Educational Resources Information Center

    Coelho, Carl; Le, Karen; Mozeiko, Jennifer; Hamilton, Mark; Tyler, Elizabeth; Krueger, Frank; Grafman, Jordan

    2013-01-01

    Purpose: Discourse analyses have demonstrated utility for delineating subtle communication deficits following closed head injuries (CHIs). The present investigation examined the discourse performance of a large group of individuals with penetrating head injury (PHI). Performance was also compared across 6 subgroups of PHI based on lesion locale. A…

  4. Systems definition study for shuttle demonstration flights of large space structures. Volume 3: Thermal analyses

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The development of large space structure technology is discussed. A detailed thermal analysis of a model space-fabricated 1-meter beam is presented. Alternative thermal coatings are evaluated, and deflections, stresses, and stiffness variations resulting from flight orientations and solar conditions are predicted.

  5. Attenuation Factors for B(E2) in the Microscopic Description of Multiphonon States ---A Simple Model Analysis---

    NASA Astrophysics Data System (ADS)

    Matsuyanagi, K.

    1982-05-01

    With an exactly solvable O(4) model of Piepenbring, Silvestre-Brac and Szymanski, we demonstrate that the attenuation factor for the B(E2) values, derived by the lowest-order approximation of the multiphonon method, takes excellent care of the kinematical anharmonicity effects, if multiphonon states are defined in the intrinsic subspace orthogonal to the pairing rotation. It is also shown that the other attenuation effect characterizing the interacting boson model is not a dominant effect in the model analysed here.

  6. A Microsoft-Excel-based tool for running and critically appraising network meta-analyses--an overview and application of NetMetaXL.

    PubMed

    Brown, Stephen; Hutton, Brian; Clifford, Tammy; Coyle, Doug; Grima, Daniel; Wells, George; Cameron, Chris

    2014-09-29

    The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software package to conduct network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that generation of graphical output from network meta-analyses often relies on different software packages than the analyses themselves. We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. This tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results being automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL's interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, thereby increasing the efficiency of knowledge transfer and manuscript preparation. We demonstrate the application of NetMetaXL using data from a network meta-analysis published previously which compares combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate results from the previous publication while demonstrating result summaries generated by the software. Use of the freely available NetMetaXL successfully demonstrated its ability to make running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows for more efficient and transparent critical appraisal of network meta-analyses, enhanced standardization of reporting, and integration with health economic evaluations which are frequently Excel-based.

  7. Analyzing Longitudinal Data with Multilevel Models: An Example with Individuals Living with Lower Extremity Intra-articular Fractures

    PubMed Central

    Kwok, Oi-Man; Underhill, Andrea T.; Berry, Jack W.; Luo, Wen; Elliott, Timothy R.; Yoon, Myeongsun

    2008-01-01

    The use and quality of longitudinal research designs has increased over the past two decades, and new approaches for analyzing longitudinal data, including multi-level modeling (MLM) and latent growth modeling (LGM), have been developed. The purpose of this paper is to demonstrate the use of MLM and its advantages in analyzing longitudinal data. Data from a sample of individuals with intra-articular fractures of the lower extremity from the University of Alabama at Birmingham’s Injury Control Research Center is analyzed using both SAS PROC MIXED and SPSS MIXED. We start our presentation with a discussion of data preparation for MLM analyses. We then provide example analyses of different growth models, including a simple linear growth model and a model with a time-invariant covariate, with interpretation for all the parameters in the models. More complicated growth models with different between- and within-individual covariance structures and nonlinear models are discussed. Finally, information related to MLM analysis such as online resources is provided at the end of the paper. PMID:19649151
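
    The data-preparation step the authors discuss, reshaping repeated measures from wide to long format before fitting a growth model, might look like this in Python (the article itself uses SAS PROC MIXED and SPSS MIXED; file and column names below are hypothetical):

        import pandas as pd
        import statsmodels.formula.api as smf

        # Wide format: one row per person, outcome measured at times 0, 1, 2.
        wide = pd.read_csv("fractures_wide.csv")  # columns: id, y0, y1, y2

        # Reshape to long format: one row per person per time point.
        long = pd.wide_to_long(wide, stubnames="y", i="id", j="time").reset_index()

        # Simple linear growth model: random intercept and random slope for time.
        growth = smf.mixedlm("y ~ time", long, groups=long["id"],
                             re_formula="~time").fit()
        print(growth.summary())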

  8. Analysis of SMA Hybrid Composite Structures in MSC.Nastran and ABAQUS

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2005-01-01

    A thermoelastic constitutive model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures was recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types; a beam clamped at each end and a cantilever beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilever beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  9. Analysis of SMA Hybrid Composite Structures using Commercial Codes

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2004-01-01

    A thermomechanical model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures has been recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types; a beam clamped at each end and a cantilevered beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilevered beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  10. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
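
    The sequential screening algorithm itself is the authors' own; a widely used alternative with the same goal (cheaply flagging noninformative parameters) is Morris elementary-effects screening, sketched here with the SALib package (the problem definition and toy model are illustrative):

        import numpy as np
        from SALib.sample.morris import sample
        from SALib.analyze import morris

        problem = {
            "num_vars": 3,
            "names": ["k1", "k2", "k3"],        # hypothetical model parameters
            "bounds": [[0, 1], [0, 1], [0, 1]],
        }

        X = sample(problem, N=100, num_levels=4)  # ~N*(num_vars+1) model runs

        def model(row):                  # stand-in for an expensive model
            k1, k2, k3 = row
            return k1 + 2 * k2 + 0.01 * k3   # k3 is nearly noninformative

        Y = np.apply_along_axis(model, 1, X)
        res = morris.analyze(problem, X, Y)
        print(res["mu_star"])  # small mu* flags candidate noninformative parameters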

  11. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  12. Climate change and the effects of dengue upon Australia: An analysis of health impacts and costs

    NASA Astrophysics Data System (ADS)

    Newth, D.; Gunasekera, D.

    2010-08-01

    Projected regional warming and climate change analyses and health impact studies suggest that Australia is potentially vulnerable to increased occurrence of vector-borne diseases such as dengue fever. Expansion of the dengue fever host, Aedes aegypti, could potentially pose a significant public health risk. To manage such health risks, there is a growing need to focus on adaptive risk management strategies. In this paper, we combine analyses from climate, biophysical and economic models with a high-resolution population model for disease spread, the EpiCast model, to analyse the health impacts and costs of the spread of dengue fever. We demonstrate the applicability of EpiCast as a decision support tool to evaluate mitigation strategies to manage the public health risks associated with shifts in the distribution of dengue fever in Australia.

  13. Evaluating child welfare policies with decision-analytic simulation models.

    PubMed

    Goldhaber-Fiebert, Jeremy D; Bailey, Stephanie L; Hurlburt, Michael S; Zhang, Jinjin; Snowden, Lonnie R; Wulczyn, Fred; Landsverk, John; Horwitz, Sarah M

    2012-11-01

    The objective was to demonstrate decision-analytic modeling in support of Child Welfare policymakers considering implementing evidence-based interventions. Outcomes included permanency (e.g., adoptions) and stability (e.g., foster placement changes). Analyses of a randomized trial of KEEP (a foster parenting intervention) and NSCAW-1 estimated placement change rates and KEEP's effects. A microsimulation model generalized these findings to other Child Welfare systems. The model projected that KEEP could increase permanency and stability, identifying strategies targeting higher-risk children and geographical regions that achieve benefits efficiently. Decision-analytic models enable planners to gauge the value of potential implementations.
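
    As an illustration of the modeling style (not the authors' actual model), a minimal discrete-time microsimulation of placement stability might look like this; every number below is invented:

        import numpy as np

        rng = np.random.default_rng(0)

        def mean_placement_changes(n_children=10_000, months=24,
                                   p_change=0.05, effect=1.0):
            """Simulate monthly placement changes per child; `effect`
            multiplies the monthly change probability under the
            intervention (all values are illustrative)."""
            changes = rng.random((n_children, months)) < p_change * effect
            return changes.sum(axis=1).mean()

        print("usual care:", mean_placement_changes(effect=1.0))
        print("KEEP-like :", mean_placement_changes(effect=0.7))

    Layering costs and subgroup-specific risks onto such a simulation is what lets planners compare targeting strategies before committing to an implementation.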

  14. Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.

    PubMed

    Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette

    2011-10-01

    To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1,969 clinical staff (estimated 22% response rate) from one acute hospital from each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. The confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler (S-B) scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in a Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.

  15. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions is examined.

  16. SEM Based CARMA Time Series Modeling for Arbitrary N.

    PubMed

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.

  17. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    PubMed Central

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  18. Error quantification of a high-resolution coupled hydrodynamic-ecosystem coastal-ocean model: Part 2. Chlorophyll-a, nutrients and SPM

    NASA Astrophysics Data System (ADS)

    Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger

    2007-12-01

    Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
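
    Two of the summary statistics discussed, model bias and model efficiency, have standard definitions that are short to state in code; this sketch uses the common Nash-Sutcliffe form of efficiency, which may differ in detail from the paper's exact formulation:

        import numpy as np

        def bias(model: np.ndarray, obs: np.ndarray) -> float:
            """Mean model-minus-observation error; 0 is unbiased."""
            return float(np.mean(model - obs))

        def model_efficiency(model: np.ndarray, obs: np.ndarray) -> float:
            """Nash-Sutcliffe style efficiency: 1 is perfect; values below 0
            mean the model is worse than predicting the observed mean."""
            return float(1 - np.sum((model - obs) ** 2)
                           / np.sum((obs - np.mean(obs)) ** 2))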

  19. Life assessment of structural components using inelastic finite element analyses

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.; Halford, Gary R.

    1993-01-01

    The need for enhanced and improved performance of structural components subject to severe cyclic thermal/mechanical loadings, such as in the aerospace industry, requires development of appropriate solution technologies involving time-dependent inelastic analyses. Such analyses are mandatory to predict local stress-strain response and to assess more accurately the cyclic lifetime of structural components. The NASA-Lewis Research Center is cognizant of this need. As a result of concerted efforts at Lewis during the last few years, several such finite element solution technologies (in conjunction with the finite element program MARC) were developed and successfully applied to numerous uniaxial and multiaxial problems. These solution technologies, although developed for use with the MARC program, are general in nature and can easily be extended for adaptation with other finite element programs such as ABAQUS, ANSYS, etc. The description and results obtained from two such inelastic finite element solution technologies are presented. The first employs a classical (non-unified) creep-plasticity model. An application of this technology is presented for a hypersonic inlet cowl-lip problem. The second of these technologies uses a unified creep-plasticity model put forth by Freed. The structural component for which this finite element solution technology is illustrated is a cylindrical rocket engine thrust chamber. The advantages of employing a viscoplastic model for nonlinear time-dependent structural analyses are demonstrated. The life analyses for the cowl lip and cylindrical thrust chamber are presented. These analyses are conducted by using the stress-strain response of these components obtained from the corresponding finite element analyses.

  20. Comparison between measured turbine stage performance and the predicted performance using quasi-3D flow and boundary layer analyses

    NASA Technical Reports Server (NTRS)

    Boyle, R. J.; Haas, J. E.; Katsanis, T.

    1984-01-01

    A method for calculating turbine stage performance is described. The usefulness of the method is demonstrated by comparing measured and predicted efficiencies for nine different stages. Comparisons are made over a range of turbine pressure ratios and rotor speeds. A quasi-3D flow analysis is used to account for complex passage geometries. Boundary layer analyses are done to account for losses due to friction. Empirical loss models are used to account for incidence, secondary flow, disc windage, and clearance losses.

  1. Inhomogeneous Forcing and Transient Climate Sensitivity

    NASA Technical Reports Server (NTRS)

    Shindell, Drew T.

    2014-01-01

    Understanding climate sensitivity is critical to projecting climate change in response to a given forcing scenario. Recent analyses have suggested that transient climate sensitivity is at the low end of the present model range taking into account the reduced warming rates during the past 10-15 years during which forcing has increased markedly. In contrast, comparisons of modelled feedback processes with observations indicate that the most realistic models have higher sensitivities. Here I analyse results from recent climate modelling intercomparison projects to demonstrate that transient climate sensitivity to historical aerosols and ozone is substantially greater than the transient climate sensitivity to CO2. This enhanced sensitivity is primarily caused by more of the forcing being located at Northern Hemisphere middle to high latitudes where it triggers more rapid land responses and stronger feedbacks. I find that accounting for this enhancement largely reconciles the two sets of results, and I conclude that the lowest end of the range of transient climate response to CO2 in present models and assessments (less than 1.3 °C) is very unlikely.

  2. Using Runtime Analysis to Guide Model Checking of Java Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition, of course, to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.
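
    The abstract does not spell out the race detection algorithm; runtime race detectors of this era commonly followed the Eraser lockset idea, which can be sketched as follows (illustrative only, not the paper's implementation):

        # Lockset-style race detection: for each shared variable, intersect
        # the set of locks held at every access; an empty intersection means
        # no single lock consistently guards the variable.
        candidate_locks: dict[str, set[str]] = {}

        def on_access(var: str, locks_held: set[str]) -> None:
            if var not in candidate_locks:
                candidate_locks[var] = set(locks_held)
            else:
                candidate_locks[var] &= locks_held
            if not candidate_locks[var]:
                print(f"warning: potential data race on {var}")

        on_access("counter", {"L1"})
        on_access("counter", {"L1", "L2"})  # still guarded by L1
        on_access("counter", {"L2"})        # intersection empty -> warning

    Warnings of this kind are cheap to produce from a single run, which is what makes them useful seeds for directing a model checker's more expensive search.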

  3. Partner aggression and problem drinking across the lifespan: how much do they decline?

    PubMed

    O'Leary, K Daniel; Woodin, Erica M

    2005-11-01

    Cross-sectional analyses from nationally representative samples demonstrate significant age-related trends in partner aggression and problem drinking. Both behaviors are most prevalent in the early to mid-twenties and increasingly less common thereafter. Aggregate associations based on the percentage of individuals displaying the behavior in each age range are dramatically stronger than those found when correlating individuals' ages and behavior. Multilevel modeling demonstrates that group-level effects do not mask associations found at the level of the individual for either problem drinking or partner aggression. An analysis of recent abstracts from psychology journals showed that issues of aggregate and individual data are rarely if ever discussed, and even well-known statistics books in psychology rarely discuss such issues. The interpretation of aggregate data will become increasingly important as psychologists, both on their own and in collaboration with epidemiologists and sociologists, gain access to large data sets that allow for data aggregation. Both aggregate and individual analyses are valid, although they provide answers to different questions. Individual analyses are necessary for predicting individual behavior; aggregate analyses are useful in policy planning for large-scale prevention and intervention. Strengths and limitations of cross-sectional community samples and aggregate data are also discussed.

  4. Social goals, social behavior, and social status in middle childhood.

    PubMed

    Rodkin, Philip C; Ryan, Allison M; Jamison, Rhonda; Wilson, Travis

    2013-06-01

    This study examines motivational precursors of social status and the applicability of a dual-component model of social competence to middle childhood. Concurrent and longitudinal relationships between self-reported social goals (social development, demonstration-approach, demonstration-avoid goal orientations), teacher-rated prosocial and aggressive behavior, and peer nominations of social status (preference, popularity) were examined over the course of an academic year among 980 3rd- to 5th-grade children. Findings support dual-component expectations. Confirmatory factor analyses verified the expected 3-factor structure of social goals and 2-factor structure of social status. Structural equation modeling (SEM) found that (a) social development goals were associated with prosocial behavior and increased preference, and (b) demonstration-approach goals were associated with aggressive behavior and increased popularity. Demonstration-avoid goals were associated with a popularity decrease. SEMs were invariant across grade, gender, and ethnicity. Discussion concerns the potential risks of high social status, extensions to the dual-component model, and the generality of an achievement goal approach to child social development. PsycINFO Database Record (c) 2013 APA, all rights reserved

  5. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
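
    SaSAT itself is Matlab-based; its two core steps, Latin hypercube sampling of parameter space followed by rank-correlation sensitivity measures, can be sketched in Python (parameter names and the toy model are invented):

        import numpy as np
        from scipy.stats import qmc, spearmanr

        # Latin hypercube sample of 3 parameters, scaled to their ranges.
        sampler = qmc.LatinHypercube(d=3, seed=1)
        unit = sampler.random(n=200)
        params = qmc.scale(unit, l_bounds=[0.1, 0.0, 1.0],
                                 u_bounds=[0.5, 2.0, 5.0])

        def model(p):               # stand-in for an epidemic model output
            beta, gamma, n0 = p
            return beta * n0 / (gamma + 0.1)

        y = np.apply_along_axis(model, 1, params)

        # Spearman rank correlation of each parameter with the output.
        for i, name in enumerate(["beta", "gamma", "n0"]):
            rho, _ = spearmanr(params[:, i], y)
            print(name, round(rho, 2))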

  6. Recognition errors suggest fast familiarity and slow recollection in rhesus monkeys

    PubMed Central

    Basile, Benjamin M.; Hampton, Robert R.

    2013-01-01

    One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses can demonstrate dual processes has been repeatedly challenged. Here, we present independent converging evidence for the dual-process model from analyses of recognition errors made by rhesus monkeys. Recognition choices were made in three different ways depending on processing duration. Short-latency errors were disproportionately false alarms to familiar lures, suggesting control by familiarity. Medium-latency responses were less likely to be false alarms and were more accurate, suggesting onset of a recollective process that could correctly reject familiar lures. Long-latency responses were guesses. A response deadline increased false alarms, suggesting that limiting processing time weakened the contribution of recollection and strengthened the contribution of familiarity. Together, these findings suggest fast familiarity and slow recollection in monkeys, that monkeys use a “recollect to reject” strategy to countermand false familiarity, and that primate recognition performance is well-characterized by a dual-process model consisting of recollection and familiarity. PMID:23864646

  7. Uncertainty quantification and experimental design based on unsupervised machine learning identification of contaminant sources and groundwater types using hydrogeochemical data

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.

    2017-12-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models, which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observation mixtures based on the Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios). The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
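
    NMFk couples NMF with custom clustering and is LANL software; the underlying factorization step, decomposing observed mixtures into nonnegative source signatures and mixing weights, can be illustrated with scikit-learn:

        import numpy as np
        from sklearn.decomposition import NMF

        # Rows = wells (observed mixtures), columns = geochemical constituents.
        X = np.random.default_rng(3).random((20, 6))

        # Factor X ~ W @ H: W holds per-well mixing ratios, H the inferred
        # source signatures. The number of sources (2 here) is assumed known;
        # NMFk's contribution is estimating that number and the robustness
        # of the solution, which this sketch does not attempt.
        nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
        W = nmf.fit_transform(X)
        H = nmf.components_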

  8. Partner's influences and other correlates of prenatal alcohol use.

    PubMed

    van der Wulp, Nickie Y; Hoving, Ciska; de Vries, Hein

    2015-04-01

    To investigate the influence of partners on alcohol consumption in pregnant women within the context of other factors. A Dutch nationwide online cross-sectional study among 158 pregnant women and their partners was conducted. To identify correlates of prenatal alcohol use, including perceived and reported partner norm (i.e. partner's belief regarding acceptability of prenatal alcohol use), partner modeling (i.e. partner's alcohol use during the woman's pregnancy) and partner support (i.e. partner's help in abstaining from alcohol during pregnancy), independent-samples t-tests and chi-square tests were conducted. Correlation analyses tested the relationship between perceived and reported partner influence. Multivariate hierarchical logistic regression analyses tested the independent impact of the partner's perceived and reported influence alongside other correlates from the I-Change Model. Pregnant women who consumed alcohol perceived a weaker partner norm (p < 0.001) and less partner modeling (p < 0.05), with the partner reporting a weaker norm (p < 0.001), more drinking days per week (p < 0.05) and weaker support (p < 0.05). Perceived and reported partner norm, modeling and support were positively related (respectively p < 0.01, p < 0.01 and p < 0.05). The multivariate analyses demonstrated that pregnant women with a higher education who perceived lower severity of harm due to prenatal alcohol use and a weaker partner norm were more likely to use alcohol (R(2) = 0.42). This study demonstrated that perceived partner norm was the most critical of the constructs of perceived and reported partner influences in explaining prenatal alcohol use.
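
The blockwise ("hierarchical") logistic regression reported above can be sketched as follows. The variable names, synthetic data, and effect sizes are invented for illustration and do not reproduce the study's dataset.

```python
# Blockwise logistic regression of prenatal alcohol use: fit background
# correlates first, then add partner-influence variables and compare fit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 158
df = pd.DataFrame({
    "education":     rng.integers(1, 4, n),    # 1=low .. 3=high (hypothetical coding)
    "severity":      rng.normal(0, 1, n),      # perceived harm severity
    "partner_norm":  rng.normal(0, 1, n),      # perceived partner norm
    "partner_model": rng.normal(0, 1, n),      # partner drinking days/week
})
logit = 0.5 * df.education - 0.8 * df.severity - 0.9 * df.partner_norm
df["alcohol_use"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Block 1: background correlates only.
m1 = smf.logit("alcohol_use ~ education + severity", data=df).fit(disp=0)
# Block 2: add partner-influence constructs and compare pseudo-R^2.
m2 = smf.logit("alcohol_use ~ education + severity + partner_norm + partner_model",
               data=df).fit(disp=0)
print(f"pseudo-R2: block1={m1.prsquared:.3f}  block2={m2.prsquared:.3f}")
print(m2.summary())
```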

  9. Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.

    PubMed

    Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V

    2017-06-01

    Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.

  10. Teaching Positioning and Handling Techniques to Public School Personnel through Inservice Training. Brief Report.

    ERIC Educational Resources Information Center

    Inge, Katherine J.; Snell, Martha E.

    1985-01-01

    Two teachers were taught positioning and handling techniques using written task analyses, demonstrations by an occupational therapist, verbal and modeling prompts, corrective feedback, and praise. Training took place in the natural school environment, during school hours, and with students that the teachers taught. A functional relationship…

  11. The Integration of Genetic Propensities into Social-Control Models of Delinquency and Violence among Male Youths

    ERIC Educational Resources Information Center

    Guo, Guang; Roettger, Michael E.; Cai, Tianji

    2008-01-01

    This study, drawing on approximately 1,100 males from the National Longitudinal Study of Adolescent Health, demonstrates the importance of genetics and genetic-environmental interactions for understanding adolescent delinquency and violence. Our analyses show that three genetic polymorphisms--specifically, the 30-bp promoter-region variable…

  12. Epiphany? A Case Study of Learner-Centredness in Educational Supervision

    ERIC Educational Resources Information Center

    Talbot, Martin

    2009-01-01

    Graduate medical trainees in the UK appreciate mentors who demonstrate learner-centredness as modelled by Rogers. This case study was undertaken to examine how, in one instance, supervision may be learner-centred within the tight confines of a formal, competency-based programme of training. Four formal interviews (over 18 months) were analysed to…

  13. Identification and Expression Analyses of Poly[I:C]-stimulated Genes in Channel Catfish (Ictalurus punctatus)

    USDA-ARS?s Scientific Manuscript database

    Channel catfish (Ictalurus punctatus) have proven to be an excellent model with which to study immune responses in lower vertebrates. Identification of antiviral antibodies and cytotoxic cells, as well as both type I and II interferon (IFN), demonstrate that catfish likely mount a vigorous anti-vir...

  14. Design and analysis of DNA strand displacement devices using probabilistic model checking

    PubMed Central

    Lakin, Matthew R.; Parker, David; Cardelli, Luca; Kwiatkowska, Marta; Phillips, Andrew

    2012-01-01

    Designing correct, robust DNA devices is difficult because of the many possibilities for unwanted interference between molecules in the system. DNA strand displacement has been proposed as a design paradigm for DNA devices, and the DNA strand displacement (DSD) programming language has been developed as a means of formally programming and analysing these devices to check for unwanted interference. We demonstrate, for the first time, the use of probabilistic verification techniques to analyse the correctness, reliability and performance of DNA devices during the design phase. We use the probabilistic model checker prism, in combination with the DSD language, to design and debug DNA strand displacement components and to investigate their kinetics. We show how our techniques can be used to identify design flaws and to evaluate the merits of contrasting design decisions, even on devices comprising relatively few inputs. We then demonstrate the use of these components to construct a DNA strand displacement device for approximate majority voting. Finally, we discuss some of the challenges and possible directions for applying these methods to more complex designs. PMID:22219398
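
Probabilistic model checkers such as prism compute properties of the continuous-time Markov chain (CTMC) underlying a strand displacement network exactly; the same kind of quantity can be approximated by stochastic simulation. Below is a hedged sketch of a Gillespie simulation for a single toehold-mediated displacement reaction, with invented rate constants; it illustrates the CTMC semantics, not the DSD language or prism itself.

```python
# Gillespie simulation of one toehold-mediated displacement reaction,
# estimating the probability that the output strand is released by time T
# -- the kind of quantity a probabilistic model checker computes exactly.
import random

def simulate(T=100.0, k_bind=1e-3, k_unbind=0.1, k_displace=1.0):
    """One CTMC trajectory: Input + Gate <-> Intermediate -> Output."""
    t, state = 0.0, "free"                    # free -> bound -> displaced
    while t < T:
        if state == "free":
            rates = {"bind": k_bind * 100}    # 100 gate copies, pseudo-first-order
        elif state == "bound":
            rates = {"unbind": k_unbind, "displace": k_displace}
        else:
            return True                       # output released before T
        total = sum(rates.values())
        t += random.expovariate(total)        # time to next event
        r, acc = random.uniform(0, total), 0.0
        for event, rate in rates.items():
            acc += rate
            if r <= acc:
                state = {"bind": "bound", "unbind": "free",
                         "displace": "displaced"}[event]
                break
    return state == "displaced"

random.seed(0)
runs = 10000
p = sum(simulate() for _ in range(runs)) / runs
print(f"P(output released by T=100) ~ {p:.3f}")
```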

  15. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    PubMed

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for human musculoskeletal physiology. This simulation technology unites expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models, including prosthetic implants and fracture fixation devices, and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties, and a library of skeletal joint system functional activities and loading conditions is also available and can easily be modified, updated, and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment demonstrate the utility of this unique database and simulation technology. This integrated system will impact medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  16. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons: the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), in the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models. PMID:21546994

  17. A Microsoft-Excel-based tool for running and critically appraising network meta-analyses—an overview and application of NetMetaXL

    PubMed Central

    2014-01-01

    Background The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software package to conduct network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that generation of graphical output from network meta-analyses often relies on different software packages than the analyses themselves. Methods We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. This tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results being automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL’s interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, thereby increasing the efficiency of knowledge transfer and manuscript preparation. Results We demonstrate the application of NetMetaXL using data from a previously published network meta-analysis comparing combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate results from the previous publication while demonstrating result summaries generated by the software. Conclusions Use of the freely available NetMetaXL successfully demonstrated its ability to make running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows for more efficient and transparent critical appraisal of network meta-analyses, enhanced standardization of reporting, and integration with health economic evaluations, which are frequently Excel-based. PMID:25267416

  18. The PEcAn Project: Model-Data Ecoinformatics for the Observatory Era

    NASA Astrophysics Data System (ADS)

    Dietze, M. C.; LeBauer, D. S.; Davidson, C. D.; Desai, A. R.; Kooper, R.; McHenry, K.; Mulrooney, P.

    2011-12-01

    The fundamental questions about how terrestrial ecosystems will respond to climate change are straightforward and well known, yet a small number of important gaps separate the information we have gathered from the understanding required to inform policy and management. A critical gap is that no one data source provides a complete picture of the terrestrial biosphere, and therefore multiple data sources must be integrated in a sensible manner. Process-based models represent an ideal framework for this synthesis, but to date model-data synthesis has only made use of a subset of the available data types, and remains inaccessible to much of the scientific community, largely due to the daunting ecoinformatics challenges. The Predictive Ecosystem Analyzer (PEcAn) is an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates formal data assimilation, and enables more effective feedbacks between models and field research. PEcAn makes complex analyses transparent, repeatable, and accessible to a diverse array of researchers. PEcAn is not model specific, but rather encapsulates any ecosystem model within a set of standardized input and output modules. Herein we demonstrate PEcAn's ability to automate many of the tasks involved in modeling by gathering and processing a diverse array of data sets, initiating ensembles of model runs, visualizing output, and comparing models to observations. PEcAn employs a fully Bayesian approach to model parameterization and the estimation of ecosystem pools and fluxes that allows a straightforward propagation of uncertainties into analyses and forecasts. This approach also makes it possible to synthesize a diverse array of data types operating at different spatial and temporal scales and to easily update predictions as new information becomes available. We also demonstrate PEcAn's ability to iteratively synthesize information from literature trait databases, ground observations, and eddy-covariance towers, and to quantify the reductions in overall uncertainty as each new dataset is added. PEcAn also automates a number of model analyses, such as sensitivity analyses, ensemble prediction, and variance decomposition, which collectively allow the system to partition and ascribe uncertainties to different model parameters and processes. PEcAn provides a direct feedback to field research by further automating the estimation of sample sizes and sampling distributions required to reduce model uncertainties, enabling further measurements to be targeted and optimized. Finally, we present the PEcAn development plan and timeline, including new features such as the synthesis of remotely sensed data, regional-scale data assimilation, and real-time forecasting. Ultimately, PEcAn aims to make ecosystem modeling and data assimilation routine tools for answering scientific questions and informing policy and management.
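
The ensemble-based uncertainty propagation that PEcAn automates can be sketched in a few lines: draw parameters from trait priors, run the model for each draw, and summarize the spread. The "ecosystem model" here is a toy NPP function with invented priors, standing in for a real terrestrial biosphere model.

```python
# Monte Carlo propagation of parameter uncertainty through a toy
# ecosystem model, with a crude rank-correlation variance decomposition.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

def toy_npp(sla, vcmax, leaf_frac):
    """Stand-in ecosystem model: net primary productivity (arbitrary units)."""
    return 0.01 * sla * vcmax * leaf_frac

n = 5000
params = {
    "sla":       rng.lognormal(mean=np.log(15), sigma=0.3, size=n),  # m2/kg
    "vcmax":     rng.normal(50, 10, size=n),                         # umol/m2/s
    "leaf_frac": rng.beta(5, 10, size=n),                            # unitless
}
npp = toy_npp(**params)
lo, med, hi = np.percentile(npp, [2.5, 50, 97.5])
print(f"NPP median {med:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

# Which parameter dominates predictive uncertainty?
for name, draws in params.items():
    rho = spearmanr(draws, npp)[0]
    print(f"{name:10s} Spearman rho^2 = {rho**2:.2f}")
```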

  19. Associations between Grawe's general mechanisms of change and Young's early maladaptive schemas in psychotherapy research: a comparative study of change processes.

    PubMed

    Mander, Johannes V; Jacob, Gitta A; Götz, Lea; Sammet, Isa; Zipfel, Stephan; Teufel, Martin

    2015-01-01

    The study aimed at analyzing associations between Grawe's general mechanisms of change and Young's early maladaptive schemas (EMS). Therefore, 98 patients completed the Scale for the Multiperspective Assessment of General Change Mechanisms in Psychotherapy (SACiP), the Young Schema Questionnaire-Short Form Revised (YSQ-S3R), and diverse outcome measures at the beginning and end of treatment. Our results are important for clinical applications, as we demonstrated strong predictive effects of change mechanisms on schema domains using regression analyses and cross-lagged panel models. Resource activation experiences seem to be especially crucial in fostering alterations in EMS, as this change mechanism demonstrated significant associations with several schema domains. Future research should investigate these aspects in more detail using observer-based micro-process analyses.

  20. On the Structure of Personality Disorder Traits: Conjoint Analyses of the CAT-PD, PID-5, and NEO-PI-3 Trait Models

    PubMed Central

    Wright, Aidan G.C.; Simms, Leonard J.

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012) and NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (N = 628; 64% Female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet and domain level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms. PMID:24588061

  1. On the structure of personality disorder traits: conjoint analyses of the CAT-PD, PID-5, and NEO-PI-3 trait models.

    PubMed

    Wright, Aidan G C; Simms, Leonard J

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the Diagnostic and Statistical Manual of Mental Disorders, fifth edition and NEO Personality Inventory-3 First Half, and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (n = 628; 64% Female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet and domain level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms.

  2. Integrated Modeling, Mapping, and Simulation (IMMS) framework for planning exercises.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest J.; Plantenga, Todd D.

    2010-06-01

    The Integrated Modeling, Mapping, and Simulation (IMMS) program is designing and prototyping a simulation and collaboration environment for linking together existing and future modeling and simulation tools to enable analysts, emergency planners, and incident managers to more effectively, economically, and rapidly prepare, analyze, train, and respond to real or potential incidents. When complete, the IMMS program will demonstrate an integrated modeling and simulation capability that supports emergency managers and responders with (1) conducting 'what-if' analyses and exercises to address preparedness, analysis, training, operations, and lessons learned, and (2) effectively, economically, and rapidly verifying response tactics, plans and procedures.

  3. The Impact of Spring Subsurface Soil Temperature Anomaly in the Western U.S. on North American Summer Precipitation: A Case Study Using Regional Climate Model Downscaling

    DTIC Science & Technology

    2012-06-02

    regional climate model downscaling, J. Geophys. Res., 117, D11103, doi:10.1029/2012JD017692. Modeling studies and data analyses based on ground and satellite data have demonstrated that land surface state variables, such as soil moisture, snow, vegetation, and soil temperature... downscaling rather than simply applying reanalysis data as lateral boundary conditions (LBC) for both Eta control and sensitivity experiments, as done in many RCM sensitivity studies

  4. Modelling the transient behaviour of pulsed current tungsten-inert-gas weldpools

    NASA Astrophysics Data System (ADS)

    Wu, C. S.; Zheng, W.; Wu, L.

    1999-01-01

    A three-dimensional model is established to simulate the pulsed current tungsten-inert-gas (TIG) welding process. The goal is to analyse the cyclic variation of fluid flow and heat transfer in weldpools under periodic arc heat input. To this end, an algorithm, which is capable of handling the transience, nonlinearity, multiphase and strong coupling encountered in this work, is developed. The numerical simulations demonstrate the transient behaviour of weldpools under pulsed current. Experimental data are compared with numerical results to show the effectiveness of the developed model.

  5. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  6. Fleet DNA Phase 1 Refinement & Phase 2 Implementation; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Kenneth; Duran, Adam

    2015-06-11

    Fleet DNA acts as a secure data warehouse for medium- and heavy-duty vehicle data. It demonstrates that vehicle drive cycle data can be collected and stored for large-scale analysis and modeling applications. The data serve as a real-world data source for model development and validation. Storage of the results of past/present/future data collection efforts improves analysis efficiency through pooling of shared data and provides the opportunity for 'big data' type analyses. Fleet DNA shows it is possible to develop a common database structure that can store/analyze/report on data sourced from multiple parties, each with unique data formats/types. Data filtration and normalization algorithms developed for the project allow for a wide range of data types and inputs, expanding the project’s potential. Fleet DNA demonstrates the power of integrating Big Data with existing and future tools and analyses: it provides an enhanced understanding and education of users, users can explore greenhouse gases and economic opportunities via AFLEET and ADOPT modeling, drive cycles can be characterized and visualized using DRIVE, high-level vehicle modeling can be performed using real-world drive cycles via FASTSim, and data reporting through Fleet DNA Phase 1 and 2 websites provides external users access to analysis results and gives the opportunity to explore on their own.

  7. Mathematical model of the metabolism of 123I-16-iodo-9-hexadecenoic acid in an isolated rat heart. Validation by comparison with experimental measurements.

    PubMed

    Dubois, F; Depresseux, J C; Bontemps, L; Demaison, L; Keriel, C; Mathieu, J P; Pernin, C; Marti-Batlle, D; Vidal, M; Cuchet, P

    1986-01-01

    The aim of the present study was to demonstrate that it is possible to estimate the intracellular metabolism of a fatty acid labelled with iodine using external radioactivity measurements. 123I-16-iodo-9-hexadecenoic acid (IHA) was injected close to the coronary arteries of isolated rat hearts perfused according to the Langendorff technique. The time course of the cardiac radioactivity was measured using a NaI crystal coupled to an analyser. The obtained curves were analysed using a four-compartment mathematical model, with the compartments corresponding to the vascular-IHA (0), intramyocardial free-IHA (1), esterified-IHA (2) and iodide (3) pools. Curve analysis using this model demonstrated that, as compared to substrate-free perfusion, the presence of glucose (11 mM) increased IHA storage and decreased its oxidation. These changes were enhanced by the presence of insulin. A comparison of these results with measurements of the radioactivity levels within the various cellular fractions validated our proposed mathematical model. Thus, using only a mathematical analysis of a cardiac time-activity curve, it is possible to obtain quantitative information about IHA distribution in the different intracellular metabolic pathways. This technique is potentially useful for the study of metabolic effects of ischaemia or anoxia, as well as for the study of the influence of various substrates or drugs on IHA metabolism in isolated rat hearts.

  8. A Model-Based Cluster Analysis of Maternal Emotion Regulation and Relations to Parenting Behavior.

    PubMed

    Shaffer, Anne; Whitehead, Monica; Davis, Molly; Morelen, Diana; Suveg, Cynthia

    2017-10-15

    In a diverse community sample of mothers (N = 108) and their preschool-aged children (M age = 3.50 years), this study conducted person-oriented analyses of maternal emotion regulation (ER) based on a multimethod assessment incorporating physiological, observational, and self-report indicators. A model-based cluster analysis was applied to five indicators of maternal ER: maternal self-report, observed negative affect in a parent-child interaction, baseline respiratory sinus arrhythmia (RSA), and RSA suppression across two laboratory tasks. Model-based cluster analyses revealed four maternal ER profiles, including a group of mothers with average ER functioning, characterized by socioeconomic advantage and more positive parenting behavior. A dysregulated cluster demonstrated the greatest challenges with parenting and dyadic interactions. Two clusters of intermediate dysregulation were also identified. Implications for assessment and applications to parenting interventions are discussed. © 2017 Family Process Institute.
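
Model-based cluster analysis of this kind (in the mclust spirit) can be sketched with Gaussian mixtures selected by BIC. The five indicators and the two synthetic profiles below are placeholders, not the study's data.

```python
# Model-based clustering: fit Gaussian mixtures over a range of cluster
# counts to five ER indicators and keep the solution with the lowest BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic stand-in: 108 mothers x 5 ER indicators (z-scored).
X = np.vstack([
    rng.normal(0.0, 1.0, size=(60, 5)),    # "average ER" profile
    rng.normal(1.5, 1.0, size=(48, 5)),    # "dysregulated" profile
])

best = min(
    (GaussianMixture(n_components=k, covariance_type="full",
                     random_state=0).fit(X) for k in range(1, 7)),
    key=lambda m: m.bic(X),
)
print(f"Best model: {best.n_components} clusters, BIC={best.bic(X):.1f}")
profiles = best.means_          # cluster centroids on the five indicators
labels = best.predict(X)        # profile membership for each mother
```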

  9. Analysing Buyers' and Sellers' Strategic Interactions in Marketplaces: An Evolutionary Game Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Vytelingum, Perukrishnen; Cliff, Dave; Jennings, Nicholas R.

    We develop a new model to analyse the strategic behaviour of buyers and sellers in market mechanisms. In particular, we wish to understand how the different strategies they adopt affect their economic efficiency in the market and to understand the impact of these choices on the overall efficiency of the marketplace. To this end, we adopt a two-population evolutionary game theoretic approach, where we consider how the behaviours of both buyers and sellers evolve in marketplaces. In so doing, we address the shortcomings of the previous state-of-the-art analytical model that assumes that buyers and sellers have to adopt the same mixed strategy in the market. Finally, we apply our model in one of the most common market mechanisms, the Continuous Double Auction, and demonstrate how it allows us to provide new insights into the strategic interactions of such trading agents.

  10. Thermal Analysis of Small Re-Entry Probe

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Prabhu, Dinesh K.; Chen, Y. K.

    2012-01-01

    The Small Probe Reentry Investigation for TPS Engineering (SPRITE) concept was developed at NASA Ames Research Center to facilitate arc-jet testing of a fully instrumented prototype probe at flight scale. Besides demonstrating the feasibility of testing a flight-scale model and the capability of an on-board data acquisition system, the project also aimed to investigate the capability of simulation tools to predict thermal environments of the probe/test article and its interior. This paper focuses on finite-element thermal analyses of the SPRITE probe during the arc-jet tests. Several iterations were performed during the early design phase to provide critical design parameters and guidelines for testing. The thermal effects of ablation and pyrolysis were incorporated into the final higher-fidelity modeling approach by coupling the finite-element analyses with a two-dimensional thermal protection materials response code. Model predictions show good agreement with thermocouple data obtained during the arc-jet test.

  11. Establishment of Patient-Derived Tumor Xenograft Models of Epithelial Ovarian Cancer for Preclinical Evaluation of Novel Therapeutics.

    PubMed

    Liu, Joyce F; Palakurthi, Sangeetha; Zeng, Qing; Zhou, Shan; Ivanova, Elena; Huang, Wei; Zervantonakis, Ioannis K; Selfors, Laura M; Shen, Yiping; Pritchard, Colin C; Zheng, Mei; Adleff, Vilmos; Papp, Eniko; Piao, Huiying; Novak, Marian; Fotheringham, Susan; Wulf, Gerburg M; English, Jessie; Kirschmeier, Paul T; Velculescu, Victor E; Paweletz, Cloud; Mills, Gordon B; Livingston, David M; Brugge, Joan S; Matulonis, Ursula A; Drapkin, Ronny

    2017-03-01

    Purpose: Ovarian cancer is the leading cause of death from gynecologic malignancy in the United States, with high rates of recurrence and eventual resistance to cytotoxic chemotherapy. Model systems that allow for accurate and reproducible target discovery and validation are needed to support further drug development in this disease. Experimental Design: Clinically annotated patient-derived xenograft (PDX) models were generated from tumor cells isolated from the ascites or pleural fluid of patients undergoing clinical procedures. Models were characterized by IHC and by molecular analyses. Each PDX was luciferized to allow for reproducible in vivo assessment of intraperitoneal tumor burden by bioluminescence imaging (BLI). Plasma assays for CA125 and human LINE-1 were developed as secondary tests of in vivo disease burden. Results: Fourteen clinically annotated and molecularly characterized luciferized ovarian PDX models were generated. Luciferized PDX models retain fidelity to both the nonluciferized PDX and the original patient tumor, as demonstrated by IHC, array CGH, and targeted and whole-exome sequencing analyses. Models demonstrated diversity in specific genetic alterations and activation of PI3K signaling pathway members. Response of luciferized PDX models to standard-of-care therapy could be reproducibly monitored by BLI or plasma markers. Conclusions: We describe the establishment of a collection of 14 clinically annotated and molecularly characterized luciferized ovarian PDX models in which orthotopic tumor burden in the intraperitoneal space can be followed by standard and reproducible methods. This collection is well suited as a platform for proof-of-concept efficacy and biomarker studies and for validation of novel therapeutic strategies in ovarian cancer. Clin Cancer Res; 23(5); 1263-73. ©2016 American Association for Cancer Research (AACR).

  12. Combined analysis of field and model data: A case study of the phosphate dynamics in the German Bight in summer 1994

    NASA Astrophysics Data System (ADS)

    Pohlmann, Th.; Raabe, Th.; Doerffer, R.; Beddig, S.; Brockmann, U.; Dick, S.; Engel, M.; Hesse, K.-J.; König, P.; Mayer, B.; Moll, A.; Murphy, D.; Puls, W.; Rick, H.-J.; Schmidt-Nia, R.; Schönfeld, W.; Sündermann, J.

    1999-09-01

    The intention of this paper is to analyse a specific phenomenon observed during the KUSTOS campaigns in order to demonstrate the general capability of the KUSTOS and TRANSWATT approach, i.e. the combination of field and modelling activities in an interdisciplinary framework. The selected phenomenon is the increase in phosphate concentrations off the peninsula of Eiderstedt on the North Frisian coast sampled during four subsequent station grids of the KUSTOS summer campaign in 1994. First of all, a characterisation of the observed summer situation is given. The phosphate increase is described in detail in relation to the dynamics of other nutrients. In a second step, a first-order estimate of the dispersion of phosphate is discussed. The estimate is based on the box model approach and will focus on the effects of the river Elbe and Wadden Sea inputs on phosphate dynamics. Thirdly, a fully three-dimensional model system is presented, which was implemented in order to analyse the phosphate development. The model system is discussed briefly, with emphasis on phosphorus-related processes. The reliability of one of the model components, i.e. the hydrodynamical model, is demonstrated by means of a comparison of model results with observed current data. Thereafter, results of the German Bight seston model are employed to interpret the observed phosphate increase. From this combined analysis, it was possible to conclude that the phosphate increase during the first three surveys was due to internal transformation processes within the phosphorus cycle. On the other hand, the higher phosphate concentrations measured in the last station grid survey were caused by a horizontal transport of phosphate being remobilised in the Wadden Sea.

  13. LOD score exclusion analyses for candidate genes using random population samples.

    PubMed

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for the importance of candidate genes with random population samples, no formal analyses have been conducted to test against it. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed, and if a LOD score is < or = -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of vitamin D receptor and estrogen receptor genes underlying the differential risk to osteoporotic fractures.
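
The exclusion logic can be illustrated directly: compute the log10 likelihood ratio of a specified candidate-gene effect against the no-effect model and exclude when LOD <= -2. The sketch below assumes a biallelic additive locus and a normally distributed trait, a simplification of the authors' approach.

```python
# Exclusion-LOD sketch for a quantitative trait in a random population
# sample: LOD = log10[ L(specified effect) / L(no effect) ].
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
genotype = rng.binomial(2, 0.3, size=n)      # 0/1/2 copies of the allele
trait = rng.normal(0.0, 1.0, size=n)         # trait with NO genetic effect

def loglik(effect):
    """Log-likelihood assuming trait ~ N(effect * genotype, 1)."""
    return norm.logpdf(trait, loc=effect * genotype, scale=1.0).sum()

specified_effect = 0.25                      # effect size we test against
lod = (loglik(specified_effect) - loglik(0.0)) / np.log(10)
print(f"LOD = {lod:.2f}")
if lod <= -2.0:
    print("Exclude: an effect of this size is incompatible with the data.")
```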

  14. Application of PBPK Modeling and Virtual Clinical Study Approaches to Predict the Outcomes of CYP2D6 Genotype-Guided Dosing of Tamoxifen.

    PubMed

    Nakamura, Toshimichi; Toshimoto, Kota; Lee, Wooin; Imamura, Chiyo K; Tanigawara, Yusuke; Sugiyama, Yuichi

    2018-06-19

    The Tamoxifen Response by CYP2D6 Genotype-based Treatment-1 (TARGET-1) study (n = 180) was conducted from 2012 to 2017 in Japan to determine the efficacy of tamoxifen dosing guided by cytochrome P450 2D6 (CYP2D6) genotypes. To predict its outcomes prior to completion, we constructed comprehensive physiologically based pharmacokinetic (PBPK) models of tamoxifen and its metabolites and performed virtual TARGET-1 studies. Our analyses indicated that the expected probability of achieving the end point (demonstrating the superior efficacy of the escalated tamoxifen dose over the standard dose in patients carrying CYP2D6 variants) was 0.469 on average. As the population size of this virtual clinical study (VCS) increased, the expected probability increased substantially (0.674 for n = 260). Our analyses also indicated that the probability of achieving the end point in the TARGET-1 study was negatively impacted by a large variability in endoxifen levels. Our current efforts demonstrate the promising utility of the PBPK modeling and VCS approaches in prospectively designing effective clinical trials. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
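
The virtual-clinical-study logic, stripped of the PBPK layer, reduces to a Monte Carlo power calculation: simulate many trials of a given size from assumed response distributions and count how often superiority is demonstrated. The effect sizes below are invented, not the published PBPK-derived values; note how the estimated probability rises with the larger sample, as in the study above.

```python
# Monte Carlo estimate of the probability of meeting a superiority
# endpoint as a function of trial size.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(11)

def virtual_trial(n_per_arm, effect=0.25, sd=1.0):
    """One simulated trial: response under standard vs escalated dose."""
    standard  = rng.normal(0.0,    sd, n_per_arm)
    escalated = rng.normal(effect, sd, n_per_arm)
    t, p = ttest_ind(escalated, standard)
    return p < 0.05 and t > 0            # superiority demonstrated

for n_per_arm in (90, 130):              # total n = 180 vs n = 260
    hits = sum(virtual_trial(n_per_arm) for _ in range(2000))
    print(f"n/arm={n_per_arm}: P(endpoint met) ~ {hits / 2000:.3f}")
```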

  15. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    PubMed Central

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
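
The central quantity, SNRenv, can be sketched as follows: extract the Hilbert envelope, band-pass it in a modulation band, and compare envelope power for speech-plus-noise against noise alone. The signals and band edges here are illustrative, not the sEPSM's full modulation filter bank.

```python
# SNRenv sketch: SNRenv = (P_env(S+N) - P_env(N)) / P_env(N), with envelope
# power taken in one modulation band (2-8 Hz).
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Toy "speech": a 1 kHz carrier amplitude-modulated at 4 Hz.
speech = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
noise = np.random.default_rng(0).normal(0, 0.5, t.size)

def env_power(x, lo=2.0, hi=8.0):
    """AC envelope power in one modulation band, normalized by DC power."""
    env = np.abs(hilbert(x))
    sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, env - env.mean())
    return np.mean(band ** 2) / env.mean() ** 2

p_sn, p_n = env_power(speech + noise), env_power(noise)
snr_env = max(p_sn - p_n, 1e-6) / p_n
print(f"SNRenv ~ {10 * np.log10(snr_env):.1f} dB")
```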

  16. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers, along with some preliminary results, the mixture modeling approach has so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
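
The partition-then-fit idea can be sketched as below: split the spectrum at low-intensity valleys, fit a small Gaussian mixture to each fragment (by resampling m/z values in proportion to intensity), and read peak estimates off the component means. The toy spectrum and the crude component-count rule are illustrative assumptions, not the published algorithm.

```python
# Partition a toy spectrum into fragments, then fit a Gaussian mixture
# per fragment and report component means as peak-location estimates.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
mz = np.linspace(1000, 1100, 2000)
peaks = [(1020, 2.0), (1030, 2.0), (1070, 1.0)]        # (center, width)
intensity = sum(np.exp(-0.5 * ((mz - c) / w) ** 2) for c, w in peaks)
intensity += rng.uniform(0, 0.02, mz.size)             # baseline noise

# Partition at near-zero valleys, then model each fragment separately.
active = intensity > 0.05
edges = np.flatnonzero(np.diff(active.astype(int)))    # rise/fall indices
fragments = list(zip(edges[::2], edges[1::2]))

for s, e in fragments:
    # Draw samples whose density follows the fragment's intensity profile.
    p = intensity[s:e] / intensity[s:e].sum()
    samples = rng.choice(mz[s:e], size=4000, p=p).reshape(-1, 1)
    k = 2 if (e - s) > 300 else 1                      # crude component count
    gmm = GaussianMixture(n_components=k, random_state=0).fit(samples)
    for m in sorted(gmm.means_.ravel()):
        print(f"estimated peak at m/z ~ {m:.1f}")
```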

  17. Reliability, Validity, and Factor Structure of the Current Assessment Practice Evaluation-Revised (CAPER) in a National Sample.

    PubMed

    Lyon, Aaron R; Pullmann, Michael D; Dorsey, Shannon; Martin, Prerna; Grigore, Alexandra A; Becker, Emily M; Jensen-Doss, Amanda

    2018-05-11

    Measurement-based care (MBC) is an increasingly popular, evidence-based practice, but there are no tools with established psychometrics to evaluate clinician use of MBC practices in mental health service delivery. The current study evaluated the reliability, validity, and factor structure of scores generated from a brief, standardized tool to measure MBC practices, the Current Assessment Practice Evaluation-Revised (CAPER). Survey data from a national sample of 479 mental health clinicians were used to conduct exploratory and confirmatory factor analyses, as well as reliability and validity analyses (e.g., relationships between CAPER subscales and clinician MBC attitudes). Analyses revealed competing two- and three-factor models. Regardless of the model used, scores from CAPER subscales demonstrated good reliability and convergent and divergent validity with MBC attitudes in the expected directions. The CAPER appears to be a psychometrically sound tool for assessing clinician MBC practices. Future directions for development and application of the tool are discussed.

  18. A Single-Cell Roadmap of Lineage Bifurcation in Human ESC Models of Embryonic Brain Development.

    PubMed

    Yao, Zizhen; Mich, John K; Ku, Sherman; Menon, Vilas; Krostag, Anne-Rachel; Martinez, Refugio A; Furchtgott, Leon; Mulholland, Heather; Bort, Susan; Fuqua, Margaret A; Gregor, Ben W; Hodge, Rebecca D; Jayabalu, Anu; May, Ryan C; Melton, Samuel; Nelson, Angelique M; Ngo, N Kiet; Shapovalova, Nadiya V; Shehata, Soraya I; Smith, Michael W; Tait, Leah J; Thompson, Carol L; Thomsen, Elliot R; Ye, Chaoyang; Glass, Ian A; Kaykas, Ajamete; Yao, Shuyuan; Phillips, John W; Grimley, Joshua S; Levi, Boaz P; Wang, Yanling; Ramanathan, Sharad

    2017-01-05

    During human brain development, multiple signaling pathways generate diverse cell types with varied regional identities. Here, we integrate single-cell RNA sequencing and clonal analyses to reveal lineage trees and molecular signals underlying early forebrain and mid/hindbrain cell differentiation from human embryonic stem cells (hESCs). Clustering single-cell transcriptomic data identified 41 distinct populations of progenitor, neuronal, and non-neural cells across our differentiation time course. Comparisons with primary mouse and human gene expression data demonstrated rostral and caudal progenitor and neuronal identities from early brain development. Bayesian analyses inferred a unified cell-type lineage tree that bifurcates between cortical and mid/hindbrain cell types. Two methods of clonal analyses confirmed these findings and further revealed the importance of Wnt/β-catenin signaling in controlling this lineage decision. Together, these findings provide a rich transcriptome-based lineage map for studying human brain development and modeling developmental disorders. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    PubMed

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed to desalinate wastewater for reuse at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na + and Cl - ) and several trace ions (Ca 2+ , Mg 2+ , K + and SO 4 2- ). A universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
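
A minimal sketch of the SDFM rejection calculation, assuming illustrative B and K values rather than the fitted ones: combine the solution-diffusion solute flux J_s = B(c_m - c_p) with the film-theory polarization relation c_m = c_p + (c_b - c_p)exp(J_v/K) and solve for the permeate concentration.

```python
# Observed rejection under the solution-diffusion-film model: with
# c_p = J_s / J_v, the two relations give c_p = B*beta*c_b / (J_v + B*beta),
# where beta = exp(J_v / K) is the concentration-polarization factor.
import numpy as np

def observed_rejection(J_v, B, K, c_b=1.0):
    """Observed rejection R = 1 - c_p / c_b under the SDFM."""
    beta = np.exp(J_v / K)                    # polarization factor
    c_p = B * beta * c_b / (J_v + B * beta)   # permeate concentration
    return 1.0 - c_p / c_b

J_v = 2.0e-5                                  # water flux, m/s (illustrative)
for ion, B, K in [("Na+/Cl-", 2.0e-7, 4.0e-5), ("SO4^2-", 2.0e-8, 3.0e-5)]:
    print(f"{ion:8s} R_obs = {observed_rejection(J_v, B, K):.4f}")
```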

  20. A Progressive Translational Mouse Model of Human VCP Disease: The VCP R155H/+ Mouse

    PubMed Central

    Nalbandian, Angèle; Llewellyn, Katrina J.; Badadani, Mallikarjun; Yin, Hong Z.; Nguyen, Christopher; Katheria, Veeral; Watts, Giles; Mukherjee, Jogeshwar; Vesa, Jouni; Caiozzo, Vincent; Mozaffar, Tahseen; Weiss, John H.; Kimonis, Virginia E.

    2012-01-01

    Introduction Mutations in the valosin containing protein (VCP) gene cause hereditary Inclusion Body Myopathy (hIBM) associated with Paget disease of bone (PDB), and frontotemporal dementia (FTD). More recently they have been linked to 2% of familial ALS cases. A knock-in mouse model offers the opportunity to study VCP-associated pathogenesis. Methods The VCPR155H/+ knock-in mouse model was assessed for muscle strength, immunohistochemical, Western, apoptosis, autophagy and MicroPET/CT imaging analyses. Results VCPR155H/+ mice developed significant progressive muscle weakness, and the quadriceps and brain developed progressive cytoplasmic accumulation of TDP-43, ubiquitin-positive inclusion bodies and increased LC3-II staining. MicroCT analyses revealed Paget-like lesions at the ends of long bones. Spinal cord demonstrated neurodegenerative changes, ubiquitin, and TDP-43 pathology of motor neurons. Discussion VCPR155H/+ knock-in mice represent an excellent pre-clinical model for understanding VCP-associated disease mechanisms and future treatments. PMID:23169451

  1. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearized degradation paths. However, those methods are not applicable for situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem of accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
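
A hedged sketch of the model class: simulate Wiener-process degradation X(t) = mu*L(t) + sigma*B(L(t)) with a nonlinear time scale L(t) = t^q, then recover the parameters by maximum likelihood from the independent Gaussian increments. All parameter values are invented.

```python
# Simulate and fit a nonlinear-time-scale Wiener degradation model:
# increments over [t_i, t_{i+1}] are N(mu * dL, sigma^2 * dL), dL = diff(t**q).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
mu, sigma, q = 0.5, 0.2, 1.3                 # true drift, diffusion, shape
t = np.linspace(0.0, 10.0, 51)

def simulate_path():
    dL = np.diff(t ** q)
    steps = mu * dL + sigma * rng.normal(0, np.sqrt(dL))
    return np.concatenate([[0.0], np.cumsum(steps)])

paths = np.array([simulate_path() for _ in range(20)])   # 20 test units

def neg_loglik(theta):
    m, s, qq = theta
    if s <= 0 or qq <= 0:
        return np.inf                         # keep the search in-bounds
    dL = np.diff(t ** qq)
    dX = np.diff(paths, axis=1)
    var = s ** 2 * dL
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (dX - m * dL) ** 2 / var)

fit = minimize(neg_loglik, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print("estimated mu, sigma, q:", np.round(fit.x, 3))
```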

  2. A necessarily complex model to explain the biogeography of the amphibians and reptiles of Madagascar.

    PubMed

    Brown, Jason L; Cameron, Alison; Yoder, Anne D; Vences, Miguel

    2014-10-09

    Pattern and process are inextricably linked in biogeographic analyses: though we can observe pattern, we must infer process. Inferences of process are often based on ad hoc comparisons using a single spatial predictor. Here, we present an alternative approach that uses mixed-spatial models to measure the predictive potential of combinations of hypotheses. Biodiversity patterns are estimated from 8,362 occurrence records from 745 species of Malagasy amphibians and reptiles. By incorporating 18 spatially explicit predictions of 12 major biogeographic hypotheses, we show that mixed models greatly improve our ability to explain the observed biodiversity patterns. We conclude that patterns are influenced by a combination of diversification processes rather than by a single predominant mechanism. A 'one-size-fits-all' model does not exist. By developing a novel method for examining and synthesizing spatial parameters such as species richness, endemism and community similarity, we demonstrate the potential of these analyses for understanding the diversification history of Madagascar's biota.

  3. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearized degradation paths. However, those methods are not applicable for situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem of accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107

  4. Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials

    PubMed Central

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2011-01-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061
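
The "parallel analyses" strategy can be sketched as fitting the same treatment-by-moderator model in each trial and pooling the interaction coefficients by inverse-variance weighting. The data-generating process below is invented; the pooled estimate illustrates the power gain over any single trial.

```python
# Fixed-effect pooling of treatment x moderator interaction estimates
# from several parallel trials.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)

def one_trial(n=300, interaction=0.3):
    tx = rng.integers(0, 2, n)                       # randomized treatment
    mod = rng.normal(0, 1, n)                        # baseline moderator
    y = 0.2 * tx + interaction * tx * mod + rng.normal(0, 1, n)
    df = pd.DataFrame({"y": y, "tx": tx, "mod": mod})
    fit = smf.ols("y ~ tx * mod", data=df).fit()
    return fit.params["tx:mod"], fit.bse["tx:mod"]

estimates = [one_trial() for _ in range(5)]          # five parallel trials
b = np.array([est for est, _ in estimates])
w = 1.0 / np.array([se for _, se in estimates]) ** 2
pooled, pooled_se = (w * b).sum() / w.sum(), np.sqrt(1.0 / w.sum())
print(f"pooled interaction = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```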

  5. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    PubMed

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.

  6. Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation

    PubMed Central

    Biggs, Matthew B.; Papin, Jason A.

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool. PMID:24147108
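
A highly simplified sketch of the hybrid scheme MatNet enables: an agent-based grid whose per-cell growth probability is supplied by a metabolic-model stub, standing in for a constraint-based FBA call. The grid, rates, and the fba_growth_rate function are all invented for illustration.

```python
# Toy hybrid ABM: agents on a grid grow at rates returned by a stub for a
# constraint-based metabolic model, consuming a crude oxygen field.
import numpy as np

rng = np.random.default_rng(4)
grid = np.zeros((40, 40), dtype=bool)
grid[20, 20] = True                            # founding cell
oxygen = np.ones_like(grid, dtype=float)       # crude resource field

def fba_growth_rate(o2):
    """Stub for the constraint-based model: growth limited by oxygen."""
    return 0.5 * o2 / (0.1 + o2)               # Monod-like response

for step in range(200):
    ys, xs = np.nonzero(grid)
    for y, x in zip(ys, xs):
        if rng.random() < fba_growth_rate(oxygen[y, x]):
            dy, dx = rng.integers(-1, 2, size=2)
            ny, nx = (y + dy) % 40, (x + dx) % 40
            grid[ny, nx] = True                # divide into a neighbor site
            oxygen[y, x] *= 0.8                # local oxygen consumption
print(f"biofilm size after 200 steps: {grid.sum()} cells")
```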

  7. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    PubMed

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  8. ArcFuels User Guide and Tutorial: for use with ArcGIS 9

    Treesearch

    Nicole M. Vaillant; Alan A. Ager; John Anderson; Lauren. Miller

    2013-01-01

    Fuel management planning can be a complex problem that is assisted by fire behavior modeling and geospatial analyses. Fuel management often is a particularly complicated process in which the benefits and potential impacts of fuel treatments need to be demonstrated in the context of land management goals and public expectations. Fire intensity, likelihood, and effects...

  9. Orderly Change in a Stable World: The Antisocial Trait as a Chimera.

    ERIC Educational Resources Information Center

    Patterson, Gerald R.

    1993-01-01

    Used longitudinal data from Oregon Youth Study to examine quantitative and qualitative change. Used latent growth models to demonstrate changes in form and systematic changes in mean level for subgroup of boys. Factor analyses carried out at three ages showed that, over time, changes in form and addition of new problems were quantifiable and thus…

  10. Quantum behaviour of open pumped and damped Bose-Hubbard trimers

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2018-01-01

    We propose and analyse analogs of optical cavities for atoms using three-well inline Bose-Hubbard models with pumping and losses. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a qualitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, although most do enter a steady-state regime. We find quadrature squeezing, bipartite and tripartite inseparability and entanglement, and states exhibiting the EPR paradox, depending on the parameter regimes. We also discover situations where the mean-field solutions of our models are noticeably different from the quantum solutions for the mean fields. Due to recent experimental advances, it should be possible to demonstrate the effects we predict and investigate in this article.

  11. Iced Aircraft Flight Data for Flight Simulator Validation

    NASA Technical Reports Server (NTRS)

    Ratvasky, Thomas P.; Blankenship, Kurt; Rieke, William; Brinker, David J.

    2003-01-01

    NASA is developing and validating technology to incorporate aircraft icing effects into a flight training device concept demonstrator. Flight simulation models of a DHC-6 Twin Otter were developed from wind tunnel data using a subscale, complete aircraft model with and without simulated ice, and from previously acquired flight data. The validation of the simulation models required additional aircraft response time histories of the airplane configured with simulated ice similar to the subscale model testing. Therefore, a flight test was conducted using the NASA Twin Otter Icing Research Aircraft. Over 500 maneuvers of various types were conducted in this flight test. The validation data consisted of aircraft state parameters, pilot inputs, propulsion, weight, center of gravity, and moments of inertia with the airplane configured with different amounts of simulated ice. Emphasis was placed on acquiring data at wing stall and tailplane stall since these events are of primary interest to model accurately in the flight training device. Analyses of several datasets are described regarding wing and tailplane stall. Key findings from these analyses are that the simulated wing ice shapes significantly reduced the maximum lift coefficient (C_L,max), while the simulated tail ice caused elevator control force anomalies and tailplane stall when flaps were deflected 30 deg or greater. This effectively reduced the safe operating margins between iced wing and iced tail stall as flap deflection and thrust were increased. This flight test demonstrated that the critical aspects to be modeled in the icing effects flight training device include: iced wing and tail stall speeds, flap and thrust effects, control forces, and control effectiveness.

  12. Spatial analysis of toxic emissions in LCA: a sub-continental nested USEtox model with freshwater archetypes.

    PubMed

    Kounina, Anna; Margni, Manuele; Shaked, Shanna; Bulle, Cécile; Jolliet, Olivier

    2014-08-01

    This paper develops continent-specific factors for the USEtox model and analyses the accuracy of different model architectures, spatial scales and archetypes in evaluating toxic impacts, with a focus on freshwater pathways. Inter-continental variation is analysed by comparing chemical fate and intake fractions between sub-continental zones of two life cycle impact assessment models: (1) the nested USEtox model parameterized with sub-continental zones and (2) the spatially differentiated IMPACTWorld model with 17 interconnected sub-continental regions. Substance residence time in water varies by up to two orders of magnitude among the 17 zones assessed with IMPACTWorld and USEtox, and intake fraction varies by up to three orders of magnitude. Despite this variation, the nested USEtox model succeeds in mimicking the results of the spatially differentiated model, with the exception of very persistent volatile pollutants that can be transported to polar regions. Intra-continental variation is analysed by comparing fate and intake fractions modelled with the a-spatial (one box) IMPACT Europe continental model vs. the spatially differentiated version of the same model. Results show that the one box model might overestimate chemical fate and characterisation factors for freshwater eco-toxicity of persistent pollutants by up to three orders of magnitude for point source emissions. Subdividing Europe into three archetypes, based on freshwater residence time (how long it takes water to reach the sea), improves the prediction of fate and intake fractions for point source emissions, bringing them within a factor five compared to the spatial model. We demonstrated that a sub-continental nested model such as USEtox, with continent-specific parameterization complemented with freshwater archetypes, can thus represent inter- and intra-continental spatial variations, whilst minimizing model complexity. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Effect of Vandetanib on Andes virus survival in the hamster model of Hantavirus pulmonary syndrome.

    PubMed

    Bird, Brian H; Shrivastava-Ranjan, Punya; Dodd, Kimberly A; Erickson, Bobbie R; Spiropoulou, Christina F

    2016-08-01

    Hantavirus pulmonary syndrome (HPS) is a severe disease caused by hantavirus infection of pulmonary microvascular endothelial cells leading to microvascular leakage, pulmonary edema, pleural effusion and high case fatality. Previously, we demonstrated that Andes virus (ANDV) infection caused up-regulation of vascular endothelial growth factor (VEGF) and concomitant downregulation of the cellular adhesion molecule VE-cadherin leading to increased permeability. Analyses of human HPS-patient sera have further demonstrated increased circulating levels of VEGF. Here we investigate the in vitro impact of a small-molecule antagonist of VEGF receptor 2 (VEGFR-2) activation, and its overall impact on survival in the Syrian hamster model of HPS. Copyright © 2016. Published by Elsevier B.V.

  14. Promotion Models and Achievements of New-energy Automobiles in Shenzhen

    NASA Astrophysics Data System (ADS)

    Cai, Yu; Xiong, Siqin; Bai, Bo; Ma, Xiaoming

    2017-08-01

    As one of the pilot cities in China for demonstration and promotion of new-energy automobiles, Shenzhen, driven by the “two engines” of the government and the market, has made swift progress in promotion of its new-energy automobiles. This paper analyses Shenzhen’s governmental promotion policy concerning new-energy automobiles, summarizes Shenzhen’s commercial models for promoting new-energy automobiles, and is expected to provide a reference for other provinces and cities promoting new-energy automobiles.

  15. Assessing cross-cultural differences through use of multiple-group invariance analyses.

    PubMed

    Stein, Judith A; Lee, Jerry W; Jones, Patricia S

    2006-12-01

    The use of structural equation modeling in cross-cultural personality research has become a popular method for testing measurement invariance. In this report, we present an example of testing measurement invariance using the Sense of Coherence Scale of Antonovsky (1993) in 3 ethnic groups: Chinese, Japanese, and Whites. In a series of increasingly restrictive constraints on the measurement models of the 3 groups, we demonstrate how to assess differences among the groups. We also provide an example of construct validation.
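
    The "series of increasingly restrictive constraints" is conventionally evaluated with nested-model chi-square difference tests. A minimal sketch of that comparison step, with hypothetical fit statistics standing in for output from SEM software:

    ```python
    from scipy import stats

    # Hypothetical fit statistics for two nested multi-group models:
    chi2_configural, df_configural = 412.3, 249   # no equality constraints
    chi2_metric, df_metric = 437.9, 267           # factor loadings constrained equal

    delta_chi2 = chi2_metric - chi2_configural
    delta_df = df_metric - df_configural
    p = stats.chi2.sf(delta_chi2, delta_df)

    # A non-significant p is taken as support for the added constraints,
    # i.e., measurement invariance at this level.
    print(f"chi-square difference = {delta_chi2:.1f}, df = {delta_df}, p = {p:.3f}")
    ```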

  16. Unsteady Aerodynamic Models for Turbomachinery Aeroelastic and Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Barnett, Mark; Ayer, Timothy C.

    1995-01-01

    Theoretical analyses and computer codes are being developed for predicting compressible unsteady inviscid and viscous flows through blade rows of axial-flow turbomachines. Such analyses are needed to determine the impact of unsteady flow phenomena on the structural durability and noise generation characteristics of the blading. The emphasis has been placed on developing analyses based on asymptotic representations of unsteady flow phenomena. Thus, high Reynolds number flows driven by small amplitude unsteady excitations have been considered. The resulting analyses should apply in many practical situations and lead to a better understanding of the relevant flow physics. In addition, they will be efficient computationally, and therefore, appropriate for use in aeroelastic and aeroacoustic design studies. Under the present effort, inviscid interaction and linearized inviscid unsteady flow models have been formulated, and inviscid and viscid prediction capabilities for subsonic steady and unsteady cascade flows have been developed. In this report, we describe the linearized inviscid unsteady analysis, LINFLO, the steady inviscid/viscid interaction analysis, SFLOW-IVI, and the unsteady viscous layer analysis, UNSVIS. These analyses are demonstrated via application to unsteady flows through compressor and turbine cascades that are excited by prescribed vortical and acoustic excitations and by prescribed blade vibrations. Recommendations are also given for the future research needed for extending and improving the foregoing asymptotic analyses, and to meet the goal of providing efficient inviscid/viscid interaction capabilities for subsonic and transonic unsteady cascade flows.

  17. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
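
    The post-processing named in the abstract (power spectrum, spectrogram, and envelope analysis) can be sketched as below on a synthetic stand-in for the FE-modelled signal: a noisy train of decaying impacts at a hypothetical outer-race defect frequency. The sampling rate, resonance frequency, and defect frequency are all assumed values, not those of the paper.

    ```python
    import numpy as np
    from scipy import signal

    fs = 20_000                                   # sampling rate, Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)
    bpfo = 90.0                                   # hypothetical defect frequency, Hz
    x = np.random.default_rng(0).normal(0, 0.05, t.size)   # background noise
    for t0 in np.arange(0, 1.0, 1 / bpfo):        # one decaying impact per ball pass
        m = t >= t0
        x[m] += np.exp(-800 * (t[m] - t0)) * np.sin(2 * np.pi * 3000 * (t[m] - t0))

    f, Pxx = signal.welch(x, fs=fs, nperseg=4096)               # power spectrum
    f_s, t_s, Sxx = signal.spectrogram(x, fs=fs, nperseg=1024)  # time-frequency map

    # Envelope spectrum: the defect repetition rate appears as a peak near bpfo
    env = np.abs(signal.hilbert(x))
    f_e, P_e = signal.welch(env - env.mean(), fs=fs, nperseg=8192)
    print(f"envelope-spectrum peak near {f_e[np.argmax(P_e)]:.0f} Hz (expect ~{bpfo:.0f})")
    ```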

  18. Accounting for genotype uncertainty in the estimation of allele frequencies in autopolyploids.

    PubMed

    Blischak, Paul D; Kubatko, Laura S; Wolfe, Andrea D

    2016-05-01

    Despite the increasing opportunity to collect large-scale data sets for population genomic analyses, the use of high-throughput sequencing to study populations of polyploids has seen little application. This is due in large part to problems associated with determining allele copy number in the genotypes of polyploid individuals (allelic dosage uncertainty, ADU), which complicates the calculation of important quantities such as allele frequencies. Here, we describe a statistical model to estimate biallelic SNP frequencies in a population of autopolyploids using high-throughput sequencing data in the form of read counts. We bridge the gap from data collection (using restriction enzyme based techniques [e.g. GBS, RADseq]) to allele frequency estimation in a unified inferential framework using a hierarchical Bayesian model to sum over genotype uncertainty. Simulated data sets were generated under various conditions for tetraploid, hexaploid and octoploid populations to evaluate the model's performance and to help guide the collection of empirical data. We also provide an implementation of our model in the R package polyfreqs and demonstrate its use with two example analyses that investigate (i) levels of expected and observed heterozygosity and (ii) model adequacy. Our simulations show that the number of individuals sampled from a population has a greater impact on estimation error than sequencing coverage. The example analyses also show that our model and software can be used to make inferences beyond the estimation of allele frequencies for autopolyploids by providing assessments of model adequacy and estimates of heterozygosity. © 2015 John Wiley & Sons Ltd.
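
    A toy version of the central idea, assuming binomial read counts given allele dosage and a binomial (HWE-like) genotype prior: the unknown genotype of each individual is summed out when computing the likelihood of the population allele frequency. The actual analyses use the R package polyfreqs and a richer hierarchical Bayesian model (including sequencing error); this grid-posterior sketch in Python runs on simulated data only.

    ```python
    import numpy as np
    from scipy.stats import binom

    ploidy = 4
    rng = np.random.default_rng(1)
    true_p = 0.3
    n_ind, coverage = 30, 10
    g_true = rng.binomial(ploidy, true_p, n_ind)          # genotypes under HWE
    reads_alt = rng.binomial(coverage, g_true / ploidy)   # alt-allele read counts

    p_grid = np.linspace(0.001, 0.999, 999)
    log_post = np.zeros_like(p_grid)
    for i, p in enumerate(p_grid):
        pg = binom.pmf(np.arange(ploidy + 1), ploidy, p)       # P(genotype | p)
        for r in reads_alt:
            pr_g = binom.pmf(r, coverage, np.arange(ploidy + 1) / ploidy)
            log_post[i] += np.log(np.sum(pr_g * pg))           # sum over dosage

    p_hat = p_grid[np.argmax(log_post)]                        # mode (flat prior)
    print(f"true p = {true_p}, posterior mode = {p_hat:.3f}")
    ```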

  19. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    USGS Publications Warehouse

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
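
    The conversion model itself is an ordinary linear regression between paired ΣPCB sums. A minimal sketch with synthetic paired samples standing in for the 119- and 209-congener measurements (the generating coefficients and noise are invented, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sum119 = rng.lognormal(mean=3.0, sigma=0.8, size=40)     # low-res method
    sum209 = 1.07 * sum119 + rng.normal(0, 2.0, size=40)     # high-res method

    slope, intercept = np.polyfit(sum119, sum209, 1)         # conversion model
    pred = slope * sum119 + intercept
    rpd = 100 * np.abs(pred - sum209) / ((pred + sum209) / 2)  # relative % diff
    print(f"Σ209PCB ≈ {slope:.2f}·Σ119PCB + {intercept:.2f}; mean RPD = {rpd.mean():.1f}%")
    ```

    On real data the regression would be fit on samples of a similar matrix analyzed by both methods, as the abstract specifies.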

  20. Analysing urban resilience through alternative stormwater management options: application of the conceptual Spatial Decision Support System model at the neighbourhood scale.

    PubMed

    Balsells, M; Barroca, B; Amdal, J R; Diab, Y; Becue, V; Serre, D

    2013-01-01

    Recent changes in cities and their environments, caused by rapid urbanisation and climate change, have increased both flood probability and the severity of flooding. Consequently, there is a need for all cities to adapt to climate and socio-economic changes by developing new strategies for flood risk management. Following a risk paradigm shift from traditional to more integrated approaches, and considering the uncertainties of future urban development, one of the main emerging tasks for city managers becomes the development of resilient cities. However, the meaning of the resilience concept and its operability is still not clear. The goal of this research is to study how urban engineering and design disciplines can improve resilience to floods in urban neighbourhoods. This paper presents the conceptual Spatial Decision Support System (DS3) model which we consider a relevant tool to analyse and then implement resilience into neighbourhood design. Using this model, we analyse and discuss alternative stormwater management options at the neighbourhood scale in two specific areas: Rotterdam and New Orleans. The results obtained demonstrate that the DS3 model confirmed in its framework analysis that stormwater management systems can positively contribute to the improved flood resilience of a neighbourhood.

  1. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGES

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...

    2016-02-26

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  2. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  3. Excellent amino acid racemization results from Holocene sand dollars

    NASA Astrophysics Data System (ADS)

    Kosnik, M.; Kaufman, D. S.; Kowalewski, M.; Whitacre, K.

    2015-12-01

    Amino acid racemization (AAR) is widely used as a cost-effective method to date molluscs in time-averaging and taphonomic studies, but it has not been attempted for echinoderms despite their paleobiological importance. Here we demonstrate the feasibility of AAR geochronology in Holocene-aged Peronella peronii (Echinodermata: Echinoidea) collected from Sydney Harbour (Australia). Using standard HPLC methods we determined the extent of AAR in 74 Peronella tests and performed replicate analyses on 18 tests. We sampled multiple areas of two individuals and identified the outer edge as a good sampling location. Multiple replicate analyses from the outer edge of 18 tests spanning the observed range of D/Ls yielded median coefficients of variation < 4% for Asp, Phe, Ala, and Glu D/L values, which overlaps with the analytical precision. Correlations between D/L values across 155 HPLC injections sampled from 74 individuals are also very high (Pearson r2 > 0.95) for these four amino acids. The ages of 11 individuals spanning the observed range of D/L values were determined using 14C analyses, and Bayesian model averaging was used to determine the best AAR age model. The averaged age model was mainly composed of time-dependent reaction kinetics models (TDK, 71%) based on phenylalanine (Phe, 94%). Modelled ages ranged from 14 to 5539 yrs, and the median 95% confidence interval for the 74 analysed individuals is ±28% of the modelled age. In comparison, the median 95% confidence interval for the 11 calibrated 14C ages was ±9% of the median age estimate. Overall, Peronella yields exceptionally high-quality AAR D/L values and appears to be an excellent substrate for AAR geochronology. This work opens the way for time-averaging and taphonomic studies of echinoderms similar to those in molluscs.
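
    A sketch of the calibration step under a simplifying assumption: a single power-law kinetics form D/L = a·t^b is fit to paired D/L values and calibrated 14C ages, then inverted to date uncalibrated tests. The paper's actual model is a Bayesian average over several kinetics models, and all data points below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    age_14c = np.array([50, 300, 900, 1800, 3200, 5500])      # yr, hypothetical
    dl_phe  = np.array([0.05, 0.10, 0.17, 0.24, 0.33, 0.44])  # Phe D/L, hypothetical

    model = lambda t, a, b: a * t**b                          # power-law kinetics
    (a, b), _ = curve_fit(model, age_14c, dl_phe, p0=(0.01, 0.5))

    def aar_age(dl):
        """Invert the fitted kinetics to convert a measured D/L into an age."""
        return (dl / a) ** (1.0 / b)

    print(f"fitted a = {a:.4f}, b = {b:.3f}; D/L = 0.20 -> {aar_age(0.20):.0f} yr")
    ```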

  4. Linkage and related analyses of Barrett's esophagus and its associated adenocarcinomas.

    PubMed

    Sun, Xiangqing; Elston, Robert; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia I; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford; Barnholtz-Sloan, Jill S; Chandar, Apoorva; Brock, Wendy; Chak, Amitabh

    2016-07-01

    Familial aggregation and segregation analysis studies have provided evidence of a genetic basis for esophageal adenocarcinoma (EAC) and its premalignant precursor, Barrett's esophagus (BE). We aim to demonstrate the utility of linkage analysis to identify the genomic regions that might contain the genetic variants that predispose individuals to this complex trait (BE and EAC). We genotyped 144 individuals in 42 multiplex pedigrees chosen from 1000 singly ascertained BE/EAC pedigrees, and performed both model-based and model-free linkage analyses, using S.A.G.E. and other software. Segregation models were fitted, from the data on both the 42 pedigrees and the 1000 pedigrees, to determine parameters for performing model-based linkage analysis. Model-based and model-free linkage analyses were conducted in two sets of pedigrees: the 42 pedigrees and a subset of 18 pedigrees with female affected members that are expected to be more genetically homogeneous. Genome-wide associations were also tested in these families. Linkage analyses on the 42 pedigrees identified several regions consistently suggestive of linkage by different linkage analysis methods on chromosomes 2q31, 12q23, and 4p14. A linkage on 15q26 is the only consistent linkage region identified in the 18 female-affected pedigrees, in which the linkage signal is higher than in the 42 pedigrees. Other tentative linkage signals are also reported. Our linkage study of BE/EAC pedigrees identified linkage regions on chromosomes 2, 4, 12, and 15, with some reported associations located within our linkage peaks. Our linkage results can help prioritize association tests to delineate the genetic determinants underlying susceptibility to BE and EAC.

  5. Spirituality as a Scientific Construct: Testing Its Universality across Cultures and Languages

    PubMed Central

    MacDonald, Douglas A.; Friedman, Harris L.; Brewczynski, Jacek; Holland, Daniel; Salagame, Kiran Kumar K.; Mohan, K. Krishna; Gubrij, Zuzana Ondriasova; Cheong, Hye Wook

    2015-01-01

    Using data obtained from 4004 participants across eight countries (Canada, India, Japan, Korea, Poland, Slovakia, Uganda, and the U.S.), the factorial reliability, validity and structural/measurement invariance of a 30-item version of Expressions of Spirituality Inventory (ESI-R) was evaluated. The ESI-R measures a five factor model of spirituality developed through the conjoint factor analysis of several extant measures of spiritual constructs. Exploratory factor analyses of pooled data provided evidence that the five ESI-R factors are reliable. Confirmatory analyses comparing four and five factor models revealed that the five dimensional model demonstrates superior goodness-of-fit with all cultural samples and suggest that the ESI-R may be viewed as structurally invariant. Measurement invariance, however, was not supported as manifested in significant differences in item and dimension scores and in significantly poorer fit when factor loadings were constrained to equality across all samples. Exploratory analyses with a second adjective measure of spirituality using American, Indian, and Ugandan samples identified three replicable factors which correlated with ESI-R dimensions in a manner supportive of convergent validity. The paper concludes with a discussion of the meaning of the findings and directions needed for future research. PMID:25734921

  6. Structural vascular disease in Africans: Performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: The SABPA study.

    PubMed

    Botha, J; de Ridder, J H; Potgieter, J C; Steyn, H S; Malan, L

    2013-10-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fasting bloods (glucose, high density lipoprotein (HDL) and triglycerides) were obtained in a well-controlled setting. The RPWC male model (LR ROC AUC: 0.71, NN ROC AUC: 0.71) was practically equal to the JSC model (LR ROC AUC: 0.71, NN ROC AUC: 0.69) to predict structural vascular disease. Similarly, the female RPWC model (LR ROC AUC: 0.84, NN ROC AUC: 0.82) and JSC model (LR ROC AUC: 0.82, NN ROC AUC: 0.81) equally predicted CIMT as surrogate marker for structural vascular disease. Odds ratios supported validity where prediction of CIMT revealed clinical significance, well over 1, for both the JSC and RPWC models in African males and females (OR 3.75-13.98). In conclusion, the proposed RPWC model was substantially validated utilizing linear and non-linear analyses. We therefore propose ethnic-specific WC cut points (African males, ≥90 cm; females, ≥98 cm) to predict a surrogate marker for structural vascular disease. © J. A. Barth Verlag in Georg Thieme Verlag KG Stuttgart · New York.
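
    A minimal sketch of the LR-versus-NN comparison on ROC AUC, using synthetic data standing in for waist circumference and a dichotomized CIMT outcome (the generating relationship is invented for illustration):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    wc = rng.normal(95, 12, 500)                       # waist circumference, cm
    risk = 1 / (1 + np.exp(-(wc - 95) / 8))            # synthetic risk curve
    y = rng.binomial(1, risk)                          # elevated CIMT (synthetic)
    X = wc.reshape(-1, 1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    lr = LogisticRegression().fit(X_tr, y_tr)
    nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)

    for name, m in [("LR", lr), ("NN", nn)]:
        auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
        print(f"{name} ROC AUC = {auc:.2f}")
    ```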

  7. Virtual Interactive Musculoskeletal System (VIMS) in orthopaedic research, education and clinical patient care.

    PubMed

    Chao, Edmund Y S; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki

    2007-03-08

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of these unique database and simulation technology. This integrated system, model library and database will impact on orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation.

  8. Virtual interactive musculoskeletal system (VIMS) in orthopaedic research, education and clinical patient care

    PubMed Central

    Chao, Edmund YS; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki

    2007-01-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of these unique database and simulation technology. This integrated system, model library and database will impact on orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation. PMID:17343764

  9. Conceptualization and Assessment of Disengagement in Romantic Relationships

    PubMed Central

    Barry, Robin A.; Lawrence, Erika; Langer, Amie

    2008-01-01

    Research examining relationship distress and dissolution highlights the importance of romantic disengagement. However, prior conceptualizations and measures of romantic disengagement have tended to combine disengagement with related but distinct constructs, hindering the study of romantic disengagement. In the present study we conducted exploratory factor analyses to demonstrate that disengagement is a relatively distinct construct and to clarify the conceptualization of romantic disengagement. More importantly, we developed a novel measure, the Romantic Disengagement Scale (RDS). The RDS demonstrated adequate fit across samples of dating individuals, married couples, and women in physically aggressive relationships. The RDS also demonstrated strong divergent and incremental validity. Implications for enhancing conceptual models, research methodology, and clinical interventions are discussed. PMID:19727315

  10. A simple and exploratory way to determine the mean-variance relationship in generalized linear models.

    PubMed

    Tsou, Tsung-Shan

    2007-03-30

    This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
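
    A crude exploratory stand-in for the idea, not the robust-likelihood method itself: assuming Var(Y) ≈ φ·μ^p, the power p can be estimated by regressing log group variances on log group means (Taylor's power law). Poisson data, where p should be 1, are simulated as a sanity check.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    groups = [rng.poisson(mu, 200) for mu in (2, 5, 10, 20, 40)]  # Poisson: p = 1

    log_mean = np.log([g.mean() for g in groups])
    log_var = np.log([g.var(ddof=1) for g in groups])
    p_hat, log_phi = np.polyfit(log_mean, log_var, 1)   # slope estimates the power
    print(f"estimated power p = {p_hat:.2f} (Poisson data should give ~1)")
    ```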

  11. Crop status evaluations and yield predictions

    NASA Technical Reports Server (NTRS)

    Haun, J. R.

    1976-01-01

    One phase of the large area crop inventory project is presented. Wheat yield models based on the input of environmental variables potentially obtainable through the use of space remote sensing were developed and demonstrated. By the use of a unique method for visually quantifying daily plant development and subsequent multifactor computer analyses, it was possible to develop practical models for predicting crop development and yield. Development of wheat yield prediction models was based on the discovery that morphological changes in plants can be detected and quantified on a daily basis, and that this change during a portion of the season was proportional to yield.

  12. Students' Ability to Apply Their Knowledge in a Gaming Exercise: An Exploratory Study

    ERIC Educational Resources Information Center

    Fuglseth, Anna Mette; Grønhaug, Kjell; Jörnsten, Kurt

    2018-01-01

    This paper reports on a study exploring master students' ability to apply their knowledge when solving an internal pricing problem in a supply chain. Analyses of 33 negotiation progress reports and 8 recordings of discussions demonstrate that most of the students were not able to apply relevant concepts and models to guide their handling of the…

  13. The Interpersonal Shame Inventory for Asian Americans: Scale Development and Psychometric Properties

    PubMed Central

    Wong, Y. Joel; Kim, Bryan S. K.; Nguyen, Chi P.; Cheng, Janice Ka Yan; Saw, Anne

    2016-01-01

    This article reports the development and psychometric properties of the Interpersonal Shame Inventory (ISI), a culturally salient and clinically relevant measure of interpersonal shame for Asian Americans. Across 4 studies involving Asian American college students, the authors provided evidence for this new measure’s validity and reliability. Exploratory factor analyses and confirmatory factor analyses provided support for a model with 2 correlated factors: external shame (arising from concerns about others’ negative evaluations) and family shame (arising from perceptions that one has brought shame to one’s family), corresponding to 2 subscales: ISI-E and ISI-F, respectively. Evidence for criterion-related, concurrent, discriminant, and incremental validity was demonstrated by testing the associations between external shame and family shame and immigration/international status, generic state shame, face concerns, thwarted belongingness, perceived burdensomeness, self-esteem, depressive symptoms, and suicide ideation. External shame and family shame also exhibited differential relations with other variables. Mediation findings were consistent with a model in which family shame mediated the effects of thwarted belongingness on suicide ideation. Further, the ISI subscales demonstrated high alpha coefficients and test–retest reliability. These findings are discussed in light of the conceptual, methodological, and clinical contributions of the ISI. PMID:24188650

  14. The macroevolutionary consequences of phenotypic integration: from development to deep time.

    PubMed

    Goswami, A; Smaers, J B; Soligo, C; Polly, P D

    2014-08-19

    Phenotypic integration is a pervasive characteristic of organisms. Numerous analyses have demonstrated that patterns of phenotypic integration are conserved across large clades, but that significant variation also exists. For example, heterochronic shifts related to different mammalian reproductive strategies are reflected in postcranial skeletal integration and in coordination of bone ossification. Phenotypic integration and modularity have been hypothesized to shape morphological evolution, and we extended simulations to confirm that trait integration can influence both the trajectory and magnitude of response to selection. We further demonstrate that phenotypic integration can produce both more and less disparate organisms than would be expected under random walk models by repartitioning variance in preferred directions. This effect can also be expected to favour homoplasy and convergent evolution. New empirical analyses of the carnivoran cranium show that rates of evolution, in contrast, are not strongly influenced by phenotypic integration and show little relationship to morphological disparity, suggesting that phenotypic integration may shape the direction of evolutionary change, but not necessarily the speed of it. Nonetheless, phenotypic integration is problematic for morphological clocks and should be incorporated more widely into models that seek to accurately reconstruct both trait and organismal evolution.

  15. The macroevolutionary consequences of phenotypic integration: from development to deep time

    PubMed Central

    Goswami, A.; Smaers, J. B.; Soligo, C.; Polly, P. D.

    2014-01-01

    Phenotypic integration is a pervasive characteristic of organisms. Numerous analyses have demonstrated that patterns of phenotypic integration are conserved across large clades, but that significant variation also exists. For example, heterochronic shifts related to different mammalian reproductive strategies are reflected in postcranial skeletal integration and in coordination of bone ossification. Phenotypic integration and modularity have been hypothesized to shape morphological evolution, and we extended simulations to confirm that trait integration can influence both the trajectory and magnitude of response to selection. We further demonstrate that phenotypic integration can produce both more and less disparate organisms than would be expected under random walk models by repartitioning variance in preferred directions. This effect can also be expected to favour homoplasy and convergent evolution. New empirical analyses of the carnivoran cranium show that rates of evolution, in contrast, are not strongly influenced by phenotypic integration and show little relationship to morphological disparity, suggesting that phenotypic integration may shape the direction of evolutionary change, but not necessarily the speed of it. Nonetheless, phenotypic integration is problematic for morphological clocks and should be incorporated more widely into models that seek to accurately reconstruct both trait and organismal evolution. PMID:25002699
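
    The variance-repartitioning claim can be illustrated with a two-trait random walk, a simplified analogue of the simulations described above: correlated steps leave total variance unchanged but concentrate disparity along one principal axis, producing more disparity in the preferred direction and less in the orthogonal one.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    steps, lineages = 200, 500
    cov_indep = np.eye(2)                            # unintegrated traits
    cov_integ = np.array([[1.0, 0.9], [0.9, 1.0]])   # strongly integrated traits

    for label, cov in [("independent", cov_indep), ("integrated", cov_integ)]:
        # End points of many lineages after `steps` correlated random-walk steps
        end = rng.multivariate_normal([0, 0], cov, (lineages, steps)).sum(axis=1)
        var_traits = end.var(axis=0)                 # disparity along each trait
        var_pcs = np.linalg.eigvalsh(np.cov(end.T))  # disparity along principal axes
        print(f"{label}: trait variances {var_traits.round(0)}, "
              f"PC variances {var_pcs.round(0)}")
    ```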

  16. Brain Injury Alters Volatile Metabolome

    PubMed Central

    Cohen, Akiva S.; Gordon, Amy R.; Opiekun, Maryanne; Martin, Talia; Elkind, Jaclynn; Lundström, Johan N.; Beauchamp, Gary K.

    2016-01-01

    Chemical signals arising from body secretions and excretions communicate information about health status as have been reported in a range of animal models of disease. A potential common pathway for diseases to alter chemical signals is via activation of immune function—which is known to be intimately involved in modulation of chemical signals in several species. Based on our prior findings that both immunization and inflammation alter volatile body odors, we hypothesized that injury accompanied by inflammation might correspondingly modify the volatile metabolome to create a signature endophenotype. In particular, we investigated alteration of the volatile metabolome as a result of traumatic brain injury. Here, we demonstrate that mice could be trained in a behavioral assay to discriminate mouse models subjected to lateral fluid percussion injury from appropriate surgical sham controls on the basis of volatile urinary metabolites. Chemical analyses of the urine samples similarly demonstrated that brain injury altered urine volatile profiles. Behavioral and chemical analyses further indicated that alteration of the volatile metabolome induced by brain injury and alteration resulting from lipopolysaccharide-associated inflammation were not synonymous. Monitoring of alterations in the volatile metabolome may be a useful tool for rapid brain trauma diagnosis and for monitoring recovery. PMID:26926034

  17. Alcohol consumption and all-cause mortality.

    PubMed

    Duffy, J C

    1995-02-01

    Prospective studies of alcohol and mortality in middle-aged men almost universally find a U-shaped relationship between alcohol consumption and risk of mortality. This review demonstrates the extent to which different studies lead to different risk estimates, analyses the putative influence of abstention as a risk factor and uses available data to produce point and interval estimates of the consumption level apparently associated with minimum risk from two studies in the UK. Data from a number of studies are analysed by means of logistic-linear modelling, taking account of the possible influence of abstention as a special risk factor. Separate analysis of British data is performed. Logistic-linear modelling demonstrates large and highly significant differences between the studies considered in the relationship between alcohol consumption and all-cause mortality. The results support the identification of abstention as a special risk factor for mortality, but do not indicate that this alone explains the apparent U-shaped relationship. Separate analysis of two British studies indicates minimum risk of mortality in this population at a consumption level of about 26 units (8.5 g of alcohol each) per week. The analysis supports the view that abstention may be a specific risk factor for all-cause mortality, but is not an adequate explanation of the apparent protective effect of alcohol consumption against all-cause mortality. Future analyses might better be performed on a case-by-case basis, using a change-point model to estimate the parameters of the relationship. The current misinterpretation of the sensible drinking level of 21 units per week for men in the UK as a limit is not justified, and the data suggest that alcohol consumption is a net preventive factor against premature death in this population.
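
    One simple way to realize "modelling with abstention as a special risk factor" is a logistic model with linear and quadratic consumption terms plus an abstainer indicator; the minimum-risk consumption level then falls out of the fitted coefficients. The sketch below fits such a model to synthetic data (all generating parameters invented), not to the studies reviewed.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    units = rng.uniform(0, 60, 2000)                      # weekly alcohol units
    abstain = rng.uniform(size=2000) < 0.15
    units[abstain] = 0.0
    # U-shaped synthetic risk with an extra abstainer effect
    logit = -3.0 + 0.04 * (units - 26)**2 / 100 + 0.5 * abstain
    death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([units, units**2, abstain.astype(float)]))
    fit = sm.Logit(death, X).fit(disp=False)
    b = fit.params
    # Vertex of the fitted quadratic in consumption = minimum-risk level
    print(f"risk-minimizing consumption ~ {-b[1] / (2 * b[2]):.1f} units/week")
    ```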

  18. Mouse model of pulmonary cavitary tuberculosis and expression of matrix metalloproteinase-9.

    PubMed

    Ordonez, Alvaro A; Tasneen, Rokeya; Pokkali, Supriya; Xu, Ziyue; Converse, Paul J; Klunk, Mariah H; Mollura, Daniel J; Nuermberger, Eric L; Jain, Sanjay K

    2016-07-01

    Cavitation is a key pathological feature of human tuberculosis (TB), and is a well-recognized risk factor for transmission of infection, relapse after treatment and the emergence of drug resistance. Despite intense interest in the mechanisms underlying cavitation and its negative impact on treatment outcomes, there has been limited study of this phenomenon, owing in large part to the limitations of existing animal models. Although cavitation does not occur in conventional mouse strains after infection with Mycobacterium tuberculosis, cavitary lung lesions have occasionally been observed in C3HeB/FeJ mice. However, to date, there has been no demonstration that cavitation can be produced consistently enough to support C3HeB/FeJ mice as a new and useful model of cavitary TB. We utilized serial computed tomography (CT) imaging to detect pulmonary cavitation in C3HeB/FeJ mice after aerosol infection with M. tuberculosis. Post-mortem analyses were performed to characterize lung lesions and to localize matrix metalloproteinases (MMPs) previously implicated in cavitary TB in situ. A total of 47-61% of infected mice developed cavities during primary disease or relapse after non-curative treatments. Key pathological features of human TB, including simultaneous presence of multiple pathologies, were noted in lung tissues. Optical imaging demonstrated increased MMP activity in TB lesions and MMP-9 was significantly expressed in cavitary lesions. Tissue MMP-9 activity could be abrogated by specific inhibitors. In situ, three-dimensional analyses of cavitary lesions demonstrated that 22.06% of CD11b+ signal colocalized with MMP-9. C3HeB/FeJ mice represent a reliable, economical and tractable model of cavitary TB, with key similarities to human TB. This model should provide an excellent tool to better understand the pathogenesis of cavitation and its effects on TB treatments. © 2016. Published by The Company of Biologists Ltd.

  19. Mouse model of pulmonary cavitary tuberculosis and expression of matrix metalloproteinase-9

    PubMed Central

    Ordonez, Alvaro A.; Tasneen, Rokeya; Pokkali, Supriya; Xu, Ziyue; Converse, Paul J.; Klunk, Mariah H.; Mollura, Daniel J.; Nuermberger, Eric L.

    2016-01-01

    Cavitation is a key pathological feature of human tuberculosis (TB), and is a well-recognized risk factor for transmission of infection, relapse after treatment and the emergence of drug resistance. Despite intense interest in the mechanisms underlying cavitation and its negative impact on treatment outcomes, there has been limited study of this phenomenon, owing in large part to the limitations of existing animal models. Although cavitation does not occur in conventional mouse strains after infection with Mycobacterium tuberculosis, cavitary lung lesions have occasionally been observed in C3HeB/FeJ mice. However, to date, there has been no demonstration that cavitation can be produced consistently enough to support C3HeB/FeJ mice as a new and useful model of cavitary TB. We utilized serial computed tomography (CT) imaging to detect pulmonary cavitation in C3HeB/FeJ mice after aerosol infection with M. tuberculosis. Post-mortem analyses were performed to characterize lung lesions and to localize matrix metalloproteinases (MMPs) previously implicated in cavitary TB in situ. A total of 47-61% of infected mice developed cavities during primary disease or relapse after non-curative treatments. Key pathological features of human TB, including simultaneous presence of multiple pathologies, were noted in lung tissues. Optical imaging demonstrated increased MMP activity in TB lesions and MMP-9 was significantly expressed in cavitary lesions. Tissue MMP-9 activity could be abrogated by specific inhibitors. In situ, three-dimensional analyses of cavitary lesions demonstrated that 22.06% of CD11b+ signal colocalized with MMP-9. C3HeB/FeJ mice represent a reliable, economical and tractable model of cavitary TB, with key similarities to human TB. This model should provide an excellent tool to better understand the pathogenesis of cavitation and its effects on TB treatments. PMID:27482816

  20. Model invariance across genders of the Broad Autism Phenotype Questionnaire.

    PubMed

    Broderick, Neill; Wade, Jordan L; Meyer, J Patrick; Hull, Michael; Reeve, Ronald E

    2015-10-01

    ASD is one of the most heritable neuropsychiatric disorders, though comprehensive genetic liability remains elusive. To facilitate genetic research, researchers employ the concept of the broad autism phenotype (BAP), a milder presentation of traits in undiagnosed relatives. Research suggests that the BAP Questionnaire (BAPQ) demonstrates psychometric properties superior to other self-report measures. To examine evidence regarding validity of the BAPQ, the current study used confirmatory factor analysis to test the assumption of model invariance across genders. Results of the current study upheld model invariance at each level of parameter constraint; however, model fit indices suggested limited goodness-of-fit between the proposed model and the sample. Exploratory analyses investigated alternate factor structure models but ultimately supported the proposed three-factor structure model.

  1. 2000-hour cyclic endurance test of a laboratory model multipropellant resistojet

    NASA Technical Reports Server (NTRS)

    Morren, W. Earl; Sovey, James S.

    1987-01-01

    The technological readiness of a long-life multipropellant resistojet for space station auxiliary propulsion is demonstrated. A laboratory model resistojet made from grain-stabilized platinum served as a test bed to evaluate the design characteristics, fabrication methods, and operating strategies for an engineering model multipropellant resistojet developed under contract by the Rocketdyne Division of Rockwell International and Technion Incorporated. The laboratory model thruster was subjected to a 2000-hr, 2400-thermal-cycle endurance test using carbon dioxide propellant. Maximum thruster temperatures were approximately 1400 C. The post-test analyses of the laboratory model thruster included an investigation of component microstructures. Significant observations from the laboratory model thruster are discussed as they relate to the design of the engineering model thruster.

  2. A 2000-hour cyclic endurance test of a laboratory model multipropellant resistojet

    NASA Technical Reports Server (NTRS)

    Morren, W. Earl; Sovey, James S.

    1987-01-01

    The technological readiness of a long-life multipropellant resistojet for space station auxiliary propulsion is demonstrated. A laboratory model resistojet made from grain-stabilized platinum served as a test bed to evaluate the design characteristics, fabrication methods, and operating strategies for an engineering model multipropellant resistojet developed under contract by the Rocketdyne Division of Rockwell International and Technion Incorporated. The laboratory model thruster was subjected to a 2000-hr, 2400-thermal-cycle endurance test using carbon dioxide propellant. Maximum thruster temperatures were approximately 1400 C. The post-test analyses of the laboratory model thruster included an investigation of component microstructures. Significant observations from the laboratory model thruster are discussed as they relate to the design of the engineering model thruster.

  3. Quantitative, steady-state properties of Catania's computational model of the operant reserve.

    PubMed

    Berg, John P; McDowell, J J

    2011-05-01

    Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady-state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Rough set classification based on quantum logic

    NASA Astrophysics Data System (ADS)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as a quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new model for quantum rough sets has a new type of decision rule with less redundancy, which can be used to give accurate classifications using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in terms of logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in finding optimal classifications.

  5. Multicriteria framework for selecting a process modelling language

    NASA Astrophysics Data System (ADS)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and the lack of guidelines for evaluating and comparing languages to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  6. Daily occupational stressors and marital behavior.

    PubMed

    Story, Lisa B; Repetti, Rena

    2006-12-01

    This study examined daily fluctuations in marital behavior (anger and withdrawal) as a function of same-day job stressors, using hierarchical linear modeling (HLM). Forty-three couples provided daily diary reports of their workload and negative social interactions at work on 5 consecutive days. Within-subject analyses demonstrate that husbands and wives reported greater marital anger and withdrawal following negative social interactions at work, and wives reported greater marital anger and withdrawal following days of heavy workload. Mediation analyses provide support for the negative mood spillover hypothesis (e.g., workload no longer predicted wives' marital anger when controlling for negative mood). Between-subjects analyses suggest that spouses in high-conflict families may be especially vulnerable to the effects of job stressors on marital interaction. (c) 2006 APA, all rights reserved.
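
    A minimal sketch of the two-level structure used in the study, with daily reports nested within spouses. Variable names, effect sizes, and the random-intercept-only specification are assumptions for illustration; the simulated data stand in for the diary reports.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n_spouse, n_days = 43, 5
    df = pd.DataFrame({
        "spouse": np.repeat(np.arange(n_spouse), n_days),
        "workload": rng.normal(0, 1, n_spouse * n_days),
        "neg_work": rng.normal(0, 1, n_spouse * n_days),  # negative work interactions
    })
    # Spouse-level random intercepts plus within-subject same-day stressor effects
    u = rng.normal(0, 0.5, n_spouse)[df["spouse"]]
    df["anger"] = u + 0.2 * df["workload"] + 0.3 * df["neg_work"] \
        + rng.normal(0, 1, len(df))

    model = smf.mixedlm("anger ~ workload + neg_work", df, groups=df["spouse"])
    print(model.fit().summary())
    ```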

  7. Manned systems utilization analysis (study 2.1). Volume 3: LOVES computer simulations, results, and analyses

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1975-01-01

    The LOVES computer program was employed to analyze the geosynchronous portion of NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to trade off the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. It is indicated that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.

  8. Advanced Behavioral Analyses Show that the Presence of Food Causes Subtle Changes in C. elegans Movement.

    PubMed

    Angstman, Nicholas B; Frank, Hans-Georg; Schmitz, Christoph

    2016-01-01

    As a widely used and studied model organism, Caenorhabditis elegans offers the ability to investigate the implications of behavioral change. Although investigation of C. elegans behavioral traits has been demonstrated, analysis is often narrowed to measurements based on a single point, and thus cannot pick up subtle behavioral and morphological changes. In the present study, videos were captured of four different C. elegans strains grown in liquid culture and transferred to NGM-agar plates with an E. coli lawn or with no lawn. Using advanced software, WormLab, the full skeleton and outline of worms were tracked to determine whether the presence of food affects behavioral traits. In all seven investigated parameters, statistically significant differences were found in worm behavior between NGM-agar plates with an E. coli lawn and those with no lawn. Furthermore, multiple test groups showed differences in interactions between variables, as the parameters that correlated significantly with speed of locomotion varied. In the present study, we demonstrate the validity of a model for analyzing C. elegans behavior beyond simple speed of locomotion. The need to account for a nested design when performing statistical analyses in similar studies is also demonstrated. With extended analyses, C. elegans behavioral change can be investigated with greater sensitivity, which could have wide utility in fields such as, but not limited to, toxicology, drug discovery, and RNAi screening.

  9. Economic lot sizing in a production system with random demand

    NASA Astrophysics Data System (ADS)

    Lee, Shine-Der; Yang, Chin-Ming; Lan, Shu-Chuan

    2016-04-01

    An extended economic production quantity (EPQ) model that copes with random demand is developed in this paper. A unique feature of the proposed study is the consideration of transient shortage during the production stage, which has not been explicitly analysed in the existing literature. The costs considered include the set-up cost for batch production, the inventory carrying cost during the production and depletion stages of one replenishment cycle, and the shortage cost when demand cannot be satisfied from the shop floor immediately. Based on the renewal reward process, a per-unit-time expected cost model is developed and analysed. Under a mild condition, it can be shown that the approximate cost function is convex. Computational experiments demonstrate that the average reduction in total cost is significant when the proposed lot sizing policy is compared with policies based on deterministic demand.
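
    For orientation, the deterministic EPQ baseline against which such a stochastic policy is compared has a simple convex per-unit-time cost: amortized set-up cost plus average holding cost. A minimal sketch, with illustrative parameter values rather than the paper's data:

    ```python
    # Baseline deterministic EPQ cost per unit time, for comparison with the
    # stochastic extension described above. All parameter values are illustrative.
    from scipy.optimize import minimize_scalar

    K = 200.0    # set-up cost per batch
    D = 1000.0   # demand rate (units/year)
    P = 4000.0   # production rate (units/year), P > D
    h = 2.0      # holding cost per unit per year

    def cost_per_unit_time(Q):
        # set-up cost amortized over the cycle + average inventory holding cost
        return K * D / Q + 0.5 * h * Q * (1.0 - D / P)

    res = minimize_scalar(cost_per_unit_time, bounds=(1.0, 10000.0), method="bounded")
    print(f"optimal lot size ~ {res.x:.1f}, cost ~ {res.fun:.2f}")
    ```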

  10. 40 CFR 270.63 - Permits for land treatment demonstrations using field test or laboratory analyses.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... demonstrations using field test or laboratory analyses. 270.63 Section 270.63 Protection of Environment... using field test or laboratory analyses. (a) For the purpose of allowing an owner or operator to meet... the field test or laboratory analyses, or as a two-phase facility permit covering the field tests, or...

  11. 40 CFR 270.63 - Permits for land treatment demonstrations using field test or laboratory analyses.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... demonstrations using field test or laboratory analyses. 270.63 Section 270.63 Protection of Environment... using field test or laboratory analyses. (a) For the purpose of allowing an owner or operator to meet... the field test or laboratory analyses, or as a two-phase facility permit covering the field tests, or...

  12. 40 CFR 270.63 - Permits for land treatment demonstrations using field test or laboratory analyses.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... demonstrations using field test or laboratory analyses. 270.63 Section 270.63 Protection of Environment... using field test or laboratory analyses. (a) For the purpose of allowing an owner or operator to meet... the field test or laboratory analyses, or as a two-phase facility permit covering the field tests, or...

  13. 40 CFR 270.63 - Permits for land treatment demonstrations using field test or laboratory analyses.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... demonstrations using field test or laboratory analyses. 270.63 Section 270.63 Protection of Environment... using field test or laboratory analyses. (a) For the purpose of allowing an owner or operator to meet... the field test or laboratory analyses, or as a two-phase facility permit covering the field tests, or...

  14. 40 CFR 270.63 - Permits for land treatment demonstrations using field test or laboratory analyses.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... demonstrations using field test or laboratory analyses. 270.63 Section 270.63 Protection of Environment... using field test or laboratory analyses. (a) For the purpose of allowing an owner or operator to meet... the field test or laboratory analyses, or as a two-phase facility permit covering the field tests, or...

  15. Adaptive cyclic physiologic noise modeling and correction in functional MRI.

    PubMed

    Beall, Erik B

    2010-03-30

    Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity in fMRI and functional connectivity analyses. We show a dramatic improvement on current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of the concurrently measured breathing and cardiac cycles. Correction using this model results in removal of variance matching the periodicity of the physiologic cycles. This framework allows easy modeling of noise. However, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). Our hypothesis is that a small variety of fits describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors and thereby account for the noise sources with a smaller reduction in the variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume compared with higher-order traditional noise correction) and functional connectivity analyses. Copyright (c) 2010 Elsevier B.V. All rights reserved.
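
    The traditional model described above expands physiologic noise as a Fourier series in the measured cardiac (and respiratory) phase and regresses it out. A minimal sketch of that baseline correction, with synthetic phase and voxel data standing in for real recordings:

    ```python
    # Sketch of the traditional Fourier-series physiologic noise model: build
    # sine/cosine regressors of cardiac phase and remove the fitted noise by
    # ordinary least squares. Inputs (cardiac_phase, voxel_ts) are synthetic.
    import numpy as np

    def fourier_regressors(phase, order=2):
        """phase: (n_timepoints,) cardiac phase in radians at each fMRI volume."""
        cols = []
        for k in range(1, order + 1):
            cols.append(np.sin(k * phase))
            cols.append(np.cos(k * phase))
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    n = 200
    cardiac_phase = np.cumsum(rng.uniform(0.2, 0.4, n)) % (2 * np.pi)
    voxel_ts = rng.normal(size=n) + 0.5 * np.sin(cardiac_phase)  # toy data

    X = np.column_stack([np.ones(n), fourier_regressors(cardiac_phase, order=2)])
    beta, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    cleaned = voxel_ts - X[:, 1:] @ beta[1:]   # keep the mean, remove noise fit
    ```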

  16. Investigation of energy transport in DIII-D high-βP EAST-demonstration discharges with the TGLF turbulent and NEO neoclassical transport models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Chengkang; Staebler, Gary M.; Lao, Lang L.

    Here, energy transport analyses of DIII-D high-βP EAST-demonstration discharges have been performed using the TGYRO transport package with the TGLF turbulent and NEO neoclassical transport models under the OMFIT integrated modeling framework. Ion energy transport is shown to be dominated by neoclassical transport, and ion temperature profiles predicted by TGYRO agree closely with the experimentally measured profiles for these high-βP discharges. Ion energy transport is largely insensitive to reductions in the E × B flow shear stabilization. The Shafranov shift is shown to play a role in the suppression of the ion turbulent energy transport below the neoclassical level. Electron turbulent energy transport is under-predicted by TGLF, and a significant shortfall in the electron energy transport over the whole core plasma is found with TGLF predictions for these high-βP discharges. TGYRO can successfully predict the experimental ion and electron temperature profiles by artificially increasing the saturated turbulence level for ETG-driven modes used in TGLF.

  17. Investigation of energy transport in DIII-D high-βP EAST-demonstration discharges with the TGLF turbulent and NEO neoclassical transport models

    DOE PAGES

    Pan, Chengkang; Staebler, Gary M.; Lao, Lang L.; ...

    2017-01-11

    Here, energy transport analyses of DIII-D high-βP EAST-demonstration discharges have been performed using the TGYRO transport package with the TGLF turbulent and NEO neoclassical transport models under the OMFIT integrated modeling framework. Ion energy transport is shown to be dominated by neoclassical transport, and ion temperature profiles predicted by TGYRO agree closely with the experimentally measured profiles for these high-βP discharges. Ion energy transport is largely insensitive to reductions in the E × B flow shear stabilization. The Shafranov shift is shown to play a role in the suppression of the ion turbulent energy transport below the neoclassical level. Electron turbulent energy transport is under-predicted by TGLF, and a significant shortfall in the electron energy transport over the whole core plasma is found with TGLF predictions for these high-βP discharges. TGYRO can successfully predict the experimental ion and electron temperature profiles by artificially increasing the saturated turbulence level for ETG-driven modes used in TGLF.

  18. Cost Analyses in the US and Japan: A Cross-Country Comparative Analysis Applied to the PRONOUNCE Trial in Non-Squamous Non-Small Cell Lung Cancer.

    PubMed

    Hess, Lisa M; Rajan, Narayan; Winfree, Katherine; Davey, Peter; Ball, Mark; Knox, Hediyyih; Graham, Christopher

    2015-12-01

    Health technology assessment is not required for regulatory submission or approval in either the United States (US) or Japan. This study was designed as a cross-country evaluation of cost analyses conducted in the US and Japan based on the PRONOUNCE phase III lung cancer trial, which compared pemetrexed plus carboplatin followed by pemetrexed (PemC) versus paclitaxel plus carboplatin plus bevacizumab followed by bevacizumab (PCB). Two cost analyses were conducted in accordance with International Society for Pharmacoeconomics and Outcomes Research good research practice standards. Costs were obtained from local pricing structures; outcomes were considered equivalent based on the PRONOUNCE trial results. Other inputs were taken from the trial data (e.g., toxicity rates) or from local practice sources (e.g., toxicity management). The models were compared across key input and transferability factors. Despite differences in local input data, both models pointed in the same direction, with the cost of PemC being consistently lower than the cost of PCB. The variation in individual input parameters did affect some specific categories, such as toxicity, and impacted the sensitivity analyses, with the cost differential between comparators being greater in Japan than in the US. When economic models are based on clinical trial data, many inputs and outcomes are held consistent. The alterable inputs were not in and of themselves large enough to significantly alter the results between countries, which were directionally consistent, with greater variation seen in the sensitivity analyses. The factors that vary across jurisdictions, even when minor, can have an impact on trial-based economic analyses. Funding: Eli Lilly and Company.

  19. Assessing the ability of potential evapotranspiration models in capturing dynamics of evaporative demand across various biomes and climatic regimes with ChinaFLUX measurements

    NASA Astrophysics Data System (ADS)

    Zheng, Han; Yu, Guirui; Wang, Qiufeng; Zhu, Xianjin; Yan, Junhua; Wang, Huimin; Shi, Peili; Zhao, Fenghua; Li, Yingnian; Zhao, Liang; Zhang, Junhui; Wang, Yanfen

    2017-08-01

    Estimates of atmospheric evaporative demand are widely required for a variety of hydrological analyses, with potential evapotranspiration (PET) being an important measure representing the evaporative demand of actual vegetated surfaces under given meteorological conditions. In this study, we assessed the ability of various PET models to capture long-term (typically 2003-2011) dynamics of evaporative demand at eight ecosystems across various biomes and climatic regimes in China. Prior to assessing PET dynamics, we first examined how reasonably fourteen PET models represent the magnitude of evaporative demand, using eddy-covariance actual evapotranspiration (AET) as an indicator. Results showed that the robustness of the fourteen PET models differed somewhat across the sites, and only three PET models could produce reasonable magnitudes of evaporative demand (i.e., PET ≥ AET on average) for all eight sites: the (i) Penman, (ii) Priestley-Taylor, and (iii) Linacre models. We then assessed the ability of these three PET models to capture dynamics of evaporative demand by comparing the annual and seasonal trends in PET against the equivalent trends in AET and precipitation (P) for particular sites. Results indicated that all three PET models could faithfully reproduce the dynamics of evaporative demand for energy-limited conditions at both annual and seasonal scales, while only the Penman and Linacre models could represent dynamics of evaporative demand for water-limited conditions. However, the Linacre model was unable to reproduce the seasonal switches between water- and energy-limited states at some sites. Our findings demonstrate that the choice of PET model is essential for evaporative demand analyses and other related hydrological analyses at different temporal and spatial scales.
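
    Of the three models retained above, the Priestley-Taylor model is compact enough to sketch in full. The version below uses standard textbook constants (alpha = 1.26, psychrometric constant near sea level) and illustrative daily inputs; it is not necessarily the exact parameterization used in the study:

    ```python
    # Sketch of the Priestley-Taylor PET model, one of the three models retained
    # above. Standard textbook constants; inputs are illustrative daily values.
    import math

    def priestley_taylor_pet(t_air_c, rn, g=0.0, alpha=1.26):
        """PET in mm/day. t_air_c: air temperature (deg C); rn, g: net radiation
        and soil heat flux (MJ m-2 day-1)."""
        # slope of the saturation vapour pressure curve (kPa/degC)
        es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
        delta = 4098.0 * es / (t_air_c + 237.3) ** 2
        gamma = 0.066   # psychrometric constant (kPa/degC), near sea level
        lam = 2.45      # latent heat of vaporization (MJ/kg)
        return alpha * (delta / (delta + gamma)) * (rn - g) / lam

    print(priestley_taylor_pet(t_air_c=25.0, rn=15.0))  # roughly 5-6 mm/day
    ```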

  20. The calibration of a model for simulating the thermal and electrical performance of a 2.8 kW AC solid-oxide fuel cell micro-cogeneration device

    NASA Astrophysics Data System (ADS)

    Beausoleil-Morrison, Ian; Lombardi, Kathleen

    The concurrent production of heat and electricity within residential buildings using solid-oxide fuel cell (SOFC) micro-cogeneration devices has the potential to reduce primary energy consumption, greenhouse gas emissions, and air pollutants. A realistic assessment of this emerging technology requires the accurate simulation of the thermal and electrical production of SOFC micro-cogeneration devices concurrent with the simulation of the building, its occupants, and coupled plant components. The calibration of such a model using empirical data gathered from experiments conducted with a 2.8 kW AC SOFC micro-cogeneration device is demonstrated. The experimental configuration, types of instrumentation employed, and the operating scenarios examined are treated. The propagation of measurement uncertainty into the derived quantities that are necessary for model calibration is demonstrated by focusing upon the SOFC micro-cogeneration system's gas-to-water heat exchanger. The calibration coefficients necessary to accurately simulate the thermal and electrical performance of this prototype device are presented, and the types of analyses this enables for studying the potential of the technology are demonstrated.

  1. Preliminary Results Obtained in Integrated Safety Analysis of NASA Aviation Safety Program Technologies

    NASA Technical Reports Server (NTRS)

    Reveley, Mary S.

    2003-01-01

    The goal of the NASA Aviation Safety Program (AvSP) is to develop and demonstrate technologies that contribute to a reduction in the aviation fatal accident rate by a factor of 5 by the year 2007 and by a factor of 10 by the year 2022. Integrated safety analysis of day-to-day operations and risks within those operations will provide an understanding of the Aviation Safety Program portfolio. Safety benefits analyses are currently being conducted. Preliminary results for the Synthetic Vision Systems (SVS) and Weather Accident Prevention (WxAP) projects of the AvSP have been completed by the Logistics Management Institute under a contract with the NASA Glenn Research Center. These analyses include both a reliability analysis and a computer simulation model. The integrated safety analysis method comprises two principal components: a reliability model and a simulation model. In the reliability model, the results indicate how different technologies and systems will perform in normal, degraded, and failed modes of operation. In the simulation, an operational scenario is modeled. The primary purpose of the SVS project is to improve safety by providing visual-flightlike situation awareness during instrument conditions. The current analyses are an estimate of the benefits of SVS in avoiding controlled flight into terrain. The scenario modeled has an aircraft flying directly toward a terrain feature. When the flight crew determines that the aircraft is headed toward an obstruction, the aircraft executes a level turn at speed. The simulation is ended when the aircraft completes the turn.

  2. The CAFE model: A net production model for global ocean phytoplankton

    NASA Astrophysics Data System (ADS)

    Silsbe, Greg M.; Behrenfeld, Michael J.; Halsey, Kimberly H.; Milligan, Allen J.; Westberry, Toby K.

    2016-12-01

    The Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) net primary production model is an adaptable framework for advancing global ocean productivity assessments by exploiting state-of-the-art satellite ocean color analyses and addressing key physiological and ecological attributes of phytoplankton. Here we present the first implementation of the CAFE model, which incorporates inherent optical properties derived from ocean color measurements into a mechanistic and accurate model of phytoplankton growth rates (μ) and net phytoplankton production (NPP). The CAFE model calculates NPP as the product of energy absorption (QPAR) and the efficiency (ϕμ) by which absorbed energy is converted into carbon biomass (CPhyto), while μ is calculated as NPP normalized to CPhyto. The CAFE model performance is evaluated alongside 21 other NPP models against a spatially robust and globally representative set of direct NPP measurements. This analysis demonstrates that the CAFE model explains the greatest amount of variance and has the lowest model bias relative to the other NPP models analyzed with this data set. Global oceanic NPP from the CAFE model (52 Pg C yr-1) and mean division rates (0.34 day-1) are derived from climatological satellite data (2002-2014). This manuscript discusses and validates individual CAFE model parameters (e.g., QPAR and ϕμ), provides detailed sensitivity analyses, and compares the CAFE model results and parameterization to other widely cited models.
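
    The core CAFE relations quoted above reduce to two lines of arithmetic: NPP as absorbed energy times conversion efficiency, and division rate as NPP normalized to phytoplankton carbon. A toy sketch with placeholder values rather than satellite retrievals:

    ```python
    # The core CAFE relations as stated above: NPP is absorbed energy times the
    # energy-to-carbon conversion efficiency, and growth rate is NPP normalized
    # to phytoplankton carbon. Numbers are placeholders, not retrievals.
    def cafe_npp(q_par, phi_mu):
        """q_par: absorbed energy; phi_mu: conversion efficiency to biomass."""
        return q_par * phi_mu

    def division_rate(npp, c_phyto):
        """Growth rate mu = NPP / CPhyto."""
        return npp / c_phyto

    npp = cafe_npp(q_par=10.0, phi_mu=0.05)   # arbitrary units
    mu = division_rate(npp, c_phyto=1.5)
    print(npp, mu)
    ```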

  3. Computational and empirical simulations of selective memory impairments: Converging evidence for a single-system account of memory dissociations.

    PubMed

    Curtis, Evan T; Jamieson, Randall K

    2018-04-01

    Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.

  4. Seasonal forecast of St. Louis encephalitis virus transmission, Florida.

    PubMed

    Shaman, Jeffrey; Day, Jonathan F; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-05-01

    Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empiric relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill-verification analyses may be applied to test the predictability of an empiric disease forecast model.

  5. Seasonal Forecast of St. Louis Encephalitis Virus Transmission, Florida

    PubMed Central

    Day, Jonathan F.; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-01-01

    Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill verification analyses may be applied to test the predictability of an empirical disease forecast model. PMID:15200812

  6. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation and input errors make the prediction of modelled responses more uncertain. Using a recently developed attribution metric, this study develops a method for analysing variability in model inputs together with model structure variability, in order to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
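
    The abstract does not give the QFD formula, so the sketch below is only one plausible reading of the metric's name: an aggregate deviation between each ensemble member's flow quantiles and a reference simulation's quantiles, yielding one attribution score per member. Treat it as an illustration, not the authors' definition:

    ```python
    # Illustrative (assumed) reading of a quantile-based flow deviation score:
    # compare each ensemble member's flow quantiles against a reference run.
    import numpy as np

    def quantile_flow_deviation(ensemble, reference,
                                quantiles=np.linspace(0.05, 0.95, 19)):
        """ensemble: (n_members, n_days) simulated flows; reference: (n_days,)."""
        ref_q = np.quantile(reference, quantiles)
        member_q = np.quantile(ensemble, quantiles, axis=1)  # (n_q, n_members)
        return np.mean(np.abs(member_q - ref_q[:, None]), axis=0)  # per member

    rng = np.random.default_rng(1)
    reference = rng.gamma(2.0, 5.0, 365)
    ensemble = rng.gamma(2.0, 5.0, (10, 365)) * rng.uniform(0.8, 1.2, (10, 1))
    print(quantile_flow_deviation(ensemble, reference))
    ```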

  7. Cure frailty models for survival data: application to recurrences for breast cancer and to hospital readmissions for colorectal cancer.

    PubMed

    Rondeau, Virginie; Schaffner, Emmanuel; Corbière, Fabien; Gonzalez, Juan R; Mathoulin-Pélissier, Simone

    2013-06-01

    Owing to the natural evolution of a disease, several events often arise after a first treatment for the same subject. For example, patients with a primary invasive breast cancer treated with breast-conserving surgery may experience breast cancer recurrences, metastases or death. A certain proportion of subjects in the population who are not expected to experience the events of interest are considered to be 'cured' or non-susceptible. To model correlated failure time data incorporating a surviving fraction, we compare several forms of cure rate frailty models. In the first model, already proposed in the literature, non-susceptible patients are those who are not expected to experience the event of interest over a sufficiently long period of time. The other proposed models account for the possibility of cure after each event. We illustrate the cure frailty models with two data sets: first, to analyse time-dependent prognostic factors associated with breast cancer recurrences, metastases, new primary malignancy and death; second, to analyse successive rehospitalizations of patients diagnosed with colorectal cancer. Estimates were obtained by maximization of likelihood using SAS proc NLMIXED for a piecewise constant hazards model. As opposed to the simple frailty model, the proposed methods demonstrate great potential in modelling multivariate survival data with long-term survivors ('cured' individuals).

  8. IMPACT: a generic tool for modelling and simulating public health policy.

    PubMed

    Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E

    2011-01-01

    Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners, who often lack the requisite technical knowledge or skills. Our objective was to design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.

  9. New substitution models for rooting phylogenetic trees.

    PubMed

    Williams, Tom A; Heaps, Sarah E; Cherlin, Svetlana; Nye, Tom M W; Boys, Richard J; Embley, T Martin

    2015-09-26

    The root of a phylogenetic tree is fundamental to its biological interpretation, but standard substitution models do not provide any information on its position. Here, we describe two recently developed models that relax the usual assumptions of stationarity and reversibility, thereby facilitating root inference without the need for an outgroup. We compare the performance of these models on a classic test case for phylogenetic methods, before considering two highly topical questions in evolutionary biology: the deep structure of the tree of life and the root of the archaeal radiation. We show that all three alignments contain meaningful rooting information that can be harnessed by these new models, thus complementing and extending previous work based on outgroup rooting. In particular, our analyses exclude the root of the tree of life from the eukaryotes or Archaea, placing it on the bacterial stem or within the Bacteria. They also exclude the root of the archaeal radiation from several major clades, consistent with analyses using other rooting methods. Overall, our results demonstrate the utility of non-reversible and non-stationary models for rooting phylogenetic trees, and identify areas where further progress can be made. © 2015 The Authors.

  10. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    PubMed

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model component stoichiometry and formation enthalpies, the proposed modelling methodology successfully integrates the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in biochemical reactors. The methodology has been implemented in a plant-wide modelling framework in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, two case studies are described as illustrative examples of the capability of the methodology. In the first, a predenitrification-nitrification dynamic process is analysed, with the aim of demonstrating the easy integration of the methodology into any system. In the second case study, the simulation of a thermal model for an ATAD shows the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.
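
    The Hess's-law bookkeeping at the heart of the methodology can be sketched directly: the enthalpy change of a transformation is the stoichiometrically weighted sum of formation enthalpies. The species and values below are a textbook illustration, not the paper's component set:

    ```python
    # Sketch of Hess's-law bookkeeping: enthalpy change of a transformation from
    # formation enthalpies and stoichiometry. Species and values are textbook
    # illustrations, not the paper's parameters.
    def reaction_enthalpy(stoichiometry, formation_enthalpy):
        """stoichiometry: {species: coefficient}, negative for reactants,
        positive for products. formation_enthalpy: {species: kJ/mol}."""
        return sum(nu * formation_enthalpy[sp] for sp, nu in stoichiometry.items())

    # Toy example: aerobic oxidation of glucose (per mole of glucose).
    h_f = {"glucose": -1273.3, "O2": 0.0, "CO2": -393.5, "H2O": -285.8}
    rxn = {"glucose": -1, "O2": -6, "CO2": 6, "H2O": 6}
    print(reaction_enthalpy(rxn, h_f))  # ~ -2802 kJ/mol, i.e. exothermic
    ```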

  11. Constraints on operator ordering from third quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkuwa, Yoshiaki; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Ezawa, Yasuo

    2016-02-15

    In this paper, we analyse the Wheeler–DeWitt equation in the third quantized formalism. We will demonstrate that for certain operator ordering, the early stages of the universe are dominated by quantum fluctuations, and the universe becomes classical at later stages during the cosmic expansion. This is physically expected, if the universe is formed from quantum fluctuations in the third quantized formalism. So, we will argue that this physical requirement can be used to constrain the form of the operator ordering chosen. We will explicitly demonstrate this to be the case for two different cosmological models.

  12. An atomic finite element model for biodegradable polymers. Part 2. A model for change in Young's modulus due to polymer chain scission.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Kruft, Marc-Anton

    2015-11-01

    Atomic simulations were undertaken to analyse the effect of polymer chain scission on amorphous poly(lactide) during degradation. Many experimental studies have analysed the degradation of mechanical properties, but relatively few computational studies have been conducted. Such studies are valuable for supporting the design of bioresorbable medical devices. Hence, in this paper, an Effective Cavity Theory for the degradation of Young's modulus was developed. Atomic simulations indicated that a volume of reduced-stiffness polymer may exist around chain scissions. In the Effective Cavity Theory, each chain scission is considered to instantiate an effective cavity. Finite Element Analysis simulations were conducted to model the effect of the cavities on Young's modulus. Since polymer crystallinity affects mechanical properties, the effect of increases in crystallinity during degradation on Young's modulus is also considered. To demonstrate the ability of the Effective Cavity Theory, it was fitted to several sets of experimental data for Young's modulus in the literature. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. A Bayesian network model for predicting type 2 diabetes risk based on electronic health records

    NASA Astrophysics Data System (ADS)

    Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen

    2017-07-01

    An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). Accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and the inconsistency of physical examination items means that risk factors are likely to be lost, which motivates the study of novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, building on the quantitative analyses of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.
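
    The basic building block of such a Bayesian network is a conditional probability table estimated from records. A minimal, hand-rolled sketch with hypothetical EHR columns (bmi_band, t2d), far simpler than the authors' full network:

    ```python
    # Minimal sketch: the conditional probability table a BN node would store,
    # estimated from records. Columns (bmi_band, t2d) are hypothetical.
    import pandas as pd

    records = pd.DataFrame({
        "bmi_band": ["normal", "overweight", "obese", "obese", "normal",
                     "overweight", "obese", "normal"],
        "t2d":      [0, 0, 1, 1, 0, 1, 0, 0],
    })

    # P(T2D = 1 | bmi_band): the quantified link between a risk factor and T2D.
    cpt = records.groupby("bmi_band")["t2d"].mean()
    print(cpt)
    ```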

  14. The Impact of Satellite-Derived Land Surface Temperatures on Numerical Weather Prediction Analyses and Forecasts

    NASA Astrophysics Data System (ADS)

    Candy, B.; Saunders, R. W.; Ghent, D.; Bulgin, C. E.

    2017-09-01

    Land surface temperature (LST) observations from a variety of satellite instruments operating in the infrared have been compared to estimates of surface temperature from the Met Office operational numerical weather prediction (NWP) model. The comparisons show that during the day the NWP model can underpredict the surface temperature by up to 10 K in certain regions such as the Sahel and southern Africa. By contrast at night the differences are generally smaller. Matchups have also been performed between satellite LSTs and observations from an in situ radiometer located in Southern England within a region of mixed land use. These matchups demonstrate good agreement at night and suggest that the satellite uncertainties in LST are less than 2 K. The Met Office surface analysis scheme has been adapted to utilize nighttime LST observations. Experiments using these analyses in an NWP model have shown a benefit to the resulting forecasts of near-surface air temperature, particularly over Africa.

  15. Modelling malaria control by introduction of larvivorous fish.

    PubMed

    Lou, Yijun; Zhao, Xiao-Qiang

    2011-10-01

    Malaria creates serious health and economic problems which call for integrated management strategies to disrupt interactions among mosquitoes, the parasite and humans. In order to reduce the intensity of malaria transmission, malaria vector control may be implemented to protect individuals against infective mosquito bites. As a sustainable larval control method, the use of larvivorous fish is promoted in some circumstances. To evaluate the potential impacts of this biological control measure on malaria transmission, we propose and investigate a mathematical model describing the linked dynamics between the host-vector interaction and the predator-prey interaction. The model, which consists of five ordinary differential equations, is rigorously analysed via theories and methods of dynamical systems. We derive four biologically plausible and insightful quantities (reproduction numbers) that completely determine the community composition. Our results suggest that the introduction of larvivorous fish can, in principle, have important consequences for malaria dynamics, but also indicate that this would require strong predators on larval mosquitoes. Integrated strategies of malaria control are analysed to demonstrate the biological application of our developed theory.
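
    The paper's five-equation model is not reproduced in the abstract, so the sketch below reduces it to an illustrative three-compartment predator-prey core (larvae, adult mosquitoes, larvivorous fish) integrated with scipy; all parameters are invented for the illustration:

    ```python
    # Reduced illustrative sketch of the predator-prey part of such a model:
    # larvae L, adult mosquitoes M, larvivorous fish F. Parameters are invented.
    from scipy.integrate import solve_ivp

    def rhs(t, y, b=10.0, dL=0.3, m=0.1, dM=0.12, a=0.02, dF=0.05, c=0.01):
        L, M, F = y
        dLdt = b * M - dL * L - m * L - a * L * F  # birth, death, maturation, predation
        dMdt = m * L - dM * M                      # emergence and adult death
        dFdt = c * a * L * F - dF * F              # fish growth from predation
        return [dLdt, dMdt, dFdt]

    sol = solve_ivp(rhs, t_span=(0, 365), y0=[100.0, 20.0, 5.0])
    print(sol.y[:, -1])  # compartment sizes after one year
    ```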

  16. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
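
    The specific Particle Swarm/Levenberg-Marquardt coupling is internal to MADS, but the general multistart pattern the abstract describes can be sketched with standard scipy building blocks: Latin hypercube samples seeding local Levenberg-Marquardt calibrations. A minimal sketch on a toy exponential-decay calibration:

    ```python
    # Generic sketch of the multistart pattern described above: Latin hypercube
    # samples seed local Levenberg-Marquardt calibrations. This is not the MADS
    # implementation, just the same idea with scipy building blocks.
    import numpy as np
    from scipy.stats import qmc
    from scipy.optimize import least_squares

    t_obs = np.linspace(0, 10, 30)
    y_obs = 2.0 * np.exp(-0.4 * t_obs) + np.random.default_rng(2).normal(0, 0.02, 30)

    def residuals(p):
        amp, rate = p
        return amp * np.exp(-rate * t_obs) - y_obs

    lo, hi = np.array([0.1, 0.01]), np.array([5.0, 2.0])
    starts = qmc.scale(qmc.LatinHypercube(d=2, seed=3).random(8), lo, hi)

    fits = [least_squares(residuals, x0, method="lm") for x0 in starts]
    best = min(fits, key=lambda f: f.cost)
    print(best.x)  # should recover roughly (2.0, 0.4)
    ```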

  17. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    PubMed

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
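
    The volume-weighted point-counting step mentioned above rests on a classical stereological identity: the fraction of systematic random grid points hitting a structure estimates its volume (area) fraction. A minimal sketch on a synthetic binary mask:

    ```python
    # Sketch of the point-counting principle behind the sampling procedure above:
    # the volume (area) fraction of a structure is estimated by the fraction of
    # systematic random grid points that hit it. The binary mask is synthetic.
    import numpy as np

    rng = np.random.default_rng(4)
    mask = np.zeros((500, 500), dtype=bool)
    mask[100:300, 150:400] = True             # "organ" occupies a known block

    spacing = 25
    ox, oy = rng.integers(0, spacing, 2)      # random grid offset
    grid_pts = mask[ox::spacing, oy::spacing] # systematic random point grid

    est = grid_pts.mean()                     # point fraction P_P
    true = mask.mean()                        # true area fraction V_V
    print(f"estimated {est:.3f} vs true {true:.3f}")
    ```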

  18. Meta-analyses on intra-aortic balloon pump in cardiogenic shock complicating acute myocardial infarction may provide biased results.

    PubMed

    Acconcia, M C; Caretta, Q; Romeo, F; Borzi, M; Perrone, M A; Sergi, D; Chiarotti, F; Calabrese, C M; Sili Scavalli, A; Gaudio, C

    2018-04-01

    Intra-aortic balloon pump (IABP) is the device most commonly investigated in patients with cardiogenic shock (CS) complicating acute myocardial infarction (AMI). Recent meta-analyses on this topic showed opposite results: some complied with the current guideline recommendations, while others did not, owing to the presence of bias. We investigated the reasons for the discrepancy among meta-analyses and the strategies employed to avoid potential sources of bias. Scientific databases were searched for meta-analyses of IABP support in AMI complicated by CS. The presence of clinical diversity, methodological diversity and statistical heterogeneity was analyzed. When we found clinical or methodological diversity, we reanalyzed the data by comparing patients selected into homogeneous groups. When the fixed effect model had been employed despite the presence of statistical heterogeneity, the meta-analysis was repeated adopting the random effect model, with the same estimator used in the original meta-analysis. Twelve meta-analyses were selected. Six meta-analyses of randomized controlled trials (RCTs) were inconclusive because they were underpowered to detect the IABP effect. Five included RCTs and observational studies (Obs), and one included only Obs. Some meta-analyses of RCTs and Obs had biased results due to the presence of clinical and/or methodological diversity. The reanalysis of data reallocated into homogeneous groups was no longer in conflict with guideline recommendations. Meta-analyses performed without controlling for clinical and/or methodological diversity send a confounding message that works against good clinical practice. The reanalysis of the data demonstrates the validity of the current guideline recommendations in addressing clinical decision making when providing IABP support in AMI complicated by CS.

  19. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  20. Data Mashups: Linking Human Health and Wellbeing with Weather, Climate and the Environment

    NASA Astrophysics Data System (ADS)

    Fleming, L. E.; Sarran, C.; Golding, B.; Haines, A.; Kessel, A.; Djennad, M.; Hajat, S.; Nichols, G.; Gordon Brown, H.; Depledge, M.

    2016-12-01

    A large part of the global disease burden can be linked to environmental factors, underpinned by unhealthy behaviours. Research into these linkages suffers from a lack of common tools and databases for investigations across the many different scientific disciplines needed to explore these complex associations. The MEDMI (Medical and Environmental Data Mash-up Infrastructure) Partnership brings together leading organisations and researchers in climate, weather, environment, and human health. We have created a proof-of-concept central data and analysis system with UK Met Office and Public Health England data as the internet-based MEDMI Platform (www.data-mashup.org.uk) to serve as a common resource for researchers to link and analyse complex meteorological, environmental and epidemiological data in the UK. The Platform is hosted on its own dedicated server, with secure internet and in-person access and appropriate safeguards for ethical, copyright, security, preservation, and data sharing issues. Via the Platform, there is a demonstration Browser Application with access to user-selected subsets of the data for: a) analyses using time series (e.g. mortality/environmental variables), and b) data visualizations (e.g. infectious diseases/environmental variables). One demonstration project links climate change, harmful algal blooms and oceanographic modelling, building on coupled hydrodynamic-biogeochemical models; in situ and satellite observations as well as UK HAB data and hospital episode statistics are being used for model verification and future forecasting. The MEDMI Project demonstrates the potential, barriers and challenges of these "data mashups" of environment and health data. Although there remain many challenges to creating and sustaining such a shared resource, these activities and resources are essential to truly explore the complex interactions between climate and other environmental change and health at local and global scales.

  1. Does asymmetric correlation affect portfolio optimization?

    NASA Astrophysics Data System (ADS)

    Fryd, Lukas

    2017-07-01

    The classical portfolio optimization problem does not assume asymmetric behavior in the relationships among asset returns. The existence of an asymmetric response of correlation to bad news could be important information in portfolio optimization. The paper applies the Dynamic Conditional Correlation (DCC) model and its asymmetric version (ADCC) to capture asymmetric behavior of conditional correlation. We analyse asymmetric correlation among the S&P index, a bond index, and the spot gold price before the mortgage crisis in 2008. We evaluate the forecast ability of the models during and after the mortgage crisis and demonstrate the impact of asymmetric correlation on the reduction of portfolio variance.

  2. Energy Conversion Advanced Heat Transport Loop and Power Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, C. H.

    2006-08-01

    The Department of Energy and the Idaho National Laboratory are developing a Next Generation Nuclear Plant (NGNP) to serve as a demonstration of state-of-the-art nuclear technology. The purpose of the demonstration is twofold: 1) efficient low-cost energy generation and 2) hydrogen production. Although a next generation plant could be developed as a single-purpose facility, early designs are expected to be dual-purpose. While hydrogen production and advanced energy cycles are still in their early stages of development, research towards coupling a high temperature reactor, electrical generation and hydrogen production is under way. Many aspects of the NGNP must be researched and developed in order to make recommendations on the final design of the plant. Parameters such as working conditions, cycle components, working fluids, and power conversion unit configurations must be understood. Three configurations of the power conversion unit were demonstrated in this study: a three-shaft design with 3 turbines and 4 compressors, a combined cycle with a Brayton top cycle and a Rankine bottoming cycle, and a reheated cycle with 3 stages of reheat. An intermediate heat transport loop for transporting process heat to a High Temperature Steam Electrolysis (HTSE) hydrogen production plant was used. Helium, CO2, and an 80% nitrogen, 20% helium mixture (by weight) were studied to determine the best working fluid in terms of cycle efficiency and development cost. In each of these configurations the relative component sizes were estimated for the different working fluids. The relative size of the turbomachinery was measured by comparing the power input/output of the component. For heat exchangers the volume was computed and compared. Parametric studies away from the baseline values of the three-shaft and combined cycles were performed to determine the effect of varying conditions in the cycle. This gives some insight into the sensitivity of these cycles to various operating conditions, as well as trade-offs between efficiency and capital cost. Parametric studies were carried out on reactor outlet temperature, mass flow, pressure, and turbine cooling. Recommendations on the optimal working fluid for each configuration were made. A steady-state model comparison was made with a Closed Brayton Cycle (CBC) power conversion system developed at Sandia National Laboratory (SNL). A preliminary model of the CBC was developed in HYSYS for comparison. Temperature and pressure ratio curves for the Capstone turbine and compressor developed at SNL were implemented into the HYSYS model. A comparison between the HYSYS model and the SNL loop demonstrated that the power output predicted by HYSYS was much larger than that in the experiment. This was due to the lack of a model for the electrical alternator which was used to measure the power from the SNL loop. Further comparisons of the HYSYS model and the CBC data are recommended. Engineering analyses were performed for several configurations of the intermediate heat transport loop that transfers heat from the nuclear reactor to the hydrogen production plant. The analyses evaluated parallel and concentric piping arrangements and two different working fluids, including helium and a liquid salt. The thermal-hydraulic analyses determined the size and insulation requirements for the hot and cold leg pipes in the different configurations. Economic analyses were performed to estimate the cost of the va…
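
    As a first-order illustration of how the working fluid enters such cycle comparisons, the ideal closed Brayton cycle efficiency depends only on the pressure ratio and the fluid's heat capacity ratio. The sketch below uses textbook gamma values, not NGNP design data:

    ```python
    # Ideal-gas Brayton cycle thermal efficiency, a first-order way to compare
    # candidate working fluids: efficiency depends on the pressure ratio and the
    # fluid's heat capacity ratio (gamma). Textbook values, not design data.
    def brayton_efficiency(pressure_ratio, gamma):
        """Ideal (isentropic, lossless) closed Brayton cycle efficiency."""
        return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

    for fluid, gamma in [("helium", 1.66), ("CO2", 1.29), ("N2/He mix", 1.40)]:
        print(fluid, round(brayton_efficiency(pressure_ratio=3.0, gamma=gamma), 3))
    ```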

  3. Prisons and Primary Schools: Using CHAT to Analyse the Relationship between Developing Identity, Developing Musicianship and Transformative Processes

    ERIC Educational Resources Information Center

    Henley, Jennie

    2015-01-01

    This paper draws on three different research projects to demonstrate the use of an expanded model of Cultural Historical Activity Theory (CHAT), developed as part of a doctoral research study. The first project is an evaluation of the impacts of a Music Partnership Project within Primary and Secondary schools. The second project is an evaluation…

  4. Monitoring Retroviral RNA Dimerization In Vivo via Hammerhead Ribozyme Cleavage

    PubMed Central

    Pal, Bijay K.; Scherer, Lisa; Zelby, Laurie; Bertrand, Edouard; Rossi, John J.

    1998-01-01

    We have used a strategy for colocalization of Psi (Ψ)-tethered ribozymes and targets to demonstrate that Ψ sequences are capable of specific interaction in the cytoplasm of both packaging and nonpackaging cells. These results indicate that current in vitro dimerization models may have in vivo counterparts. The methodology used may be applied to further genetic analyses on Ψ domain interactions in vivo. PMID:9733882

  5. Debris-flow mobilization from landslides

    USGS Publications Warehouse

    Iverson, R.M.; Reid, M.E.; LaHusen, R.G.

    1997-01-01

    Field observations, laboratory experiments, and theoretical analyses indicate that landslides mobilize to form debris flows by three processes: (a) widespread Coulomb failure within a sloping soil, rock, or sediment mass, (b) partial or complete liquefaction of the mass by high pore-fluid pressures, and (c) conversion of landslide translational energy to internal vibrational energy (i.e. granular temperature). These processes can operate independently, but in many circumstances they appear to operate simultaneously and synergistically. Early work on debris-flow mobilization described a similar interplay of processes but relied on mechanical models in which debris behavior was assumed to be fixed and governed by a Bingham or Bagnold rheology. In contrast, this review emphasizes models in which debris behavior evolves in response to changing pore pressures and granular temperatures. One-dimensional infinite-slope models provide insight by quantifying how pore pressures and granular temperatures can influence the transition from Coulomb failure to liquefaction. Analyses of multidimensional experiments reveal complications ignored in one-dimensional models and demonstrate that debris-flow mobilization may occur by at least two distinct modes in the field.
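
    The one-dimensional infinite-slope idealization referred to above reduces to a factor-of-safety ratio of Coulomb resisting stress to driving shear stress, with pore pressure eroding the effective normal stress. A standard textbook sketch (parameter values illustrative):

    ```python
    # Standard infinite-slope factor of safety, the 1-D idealization referred to
    # above: Coulomb resisting stress over driving shear stress on a plane at
    # depth z, with pore pressure u reducing the effective normal stress.
    import math

    def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
        """c: cohesion (Pa); phi_deg: friction angle; gamma: unit weight (N/m^3);
        z: slip depth (m); beta_deg: slope angle; u: pore pressure (Pa)."""
        beta = math.radians(beta_deg)
        phi = math.radians(phi_deg)
        sigma_n = gamma * z * math.cos(beta) ** 2           # normal stress
        tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
        return (c + (sigma_n - u) * math.tan(phi)) / tau

    # Rising pore pressure pushes FS below 1, i.e. toward mobilization.
    for u in (0.0, 10e3, 30e3):
        print(u, round(factor_of_safety(c=5e3, phi_deg=35, gamma=18e3, z=2.0,
                                        beta_deg=30, u=u), 2))
    ```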

  6. An integrated approach utilising chemometrics and GC/MS for classification of chamomile flowers, essential oils and commercial products.

    PubMed

    Wang, Mei; Avula, Bharathi; Wang, Yan-Hong; Zhao, Jianping; Avonto, Cristina; Parcher, Jon F; Raman, Vijayasankar; Zweigenbaum, Jerry A; Wylie, Philip L; Khan, Ikhlas A

    2014-01-01

    As part of an ongoing research program on authentication, safety and biological evaluation of phytochemicals and dietary supplements, an in-depth chemical investigation of different types of chamomile was performed. A collection of chamomile samples including authenticated plants, commercial products and essential oils was analysed by GC/MS. Twenty-seven authenticated plant samples representing three types of chamomile, viz. German chamomile, Roman chamomile and Juhua, were analysed. This set of data was employed to construct a sample class prediction (SCP) model based on stepwise reduction of data dimensionality followed by principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA). The model was cross-validated with samples including authenticated plants and commercial products. The model demonstrated 100.0% accuracy for both recognition and prediction abilities. In addition, 35 commercial products and 11 essential oils purported to contain chamomile were subsequently classified by the validated PLS-DA model. Furthermore, tentative identification of the marker compounds correlated with different types of chamomile was explored. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Geographical origin discrimination of lentils (Lens culinaris Medik.) using 1H NMR fingerprinting and multivariate statistical analyses.

    PubMed

    Longobardi, Francesco; Innamorato, Valentina; Di Gioia, Annalisa; Ventrella, Andrea; Lippolis, Vincenzo; Logrieco, Antonio F; Catucci, Lucia; Agostiano, Angela

    2017-12-15

    Lentil samples coming from two different countries, i.e. Italy and Canada, were analysed using untargeted 1H NMR fingerprinting in combination with chemometrics in order to build models able to classify them according to their geographical origin. To this end, Soft Independent Modelling of Class Analogy (SIMCA), k-Nearest Neighbor (k-NN), Principal Component Analysis followed by Linear Discriminant Analysis (PCA-LDA) and Partial Least Squares-Discriminant Analysis (PLS-DA) were applied to the NMR data and the results were compared. The best combination of average recognition (100%) and cross-validation prediction abilities (96.7%) was obtained for the PCA-LDA. All the statistical models were validated both by using a test set and by carrying out a Monte Carlo Cross Validation: the obtained performances were found to be satisfying for all the models, with prediction abilities higher than 95% demonstrating the suitability of the developed methods. Finally, the metabolites that mostly contributed to the lentil discrimination were indicated. Copyright © 2017 Elsevier Ltd. All rights reserved.
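
    A minimal sketch of a PCA-LDA pipeline with a Monte Carlo cross-validation of the kind described (sample sizes, component count, and split settings are placeholders, not the paper's values):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    X = np.random.rand(60, 300)                     # binned 1H NMR fingerprints
    y = np.array(["italy"] * 30 + ["canada"] * 30)  # geographical origin labels

    pca_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    mccv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)  # Monte Carlo CV
    scores = cross_val_score(pca_lda, X, y, cv=mccv)
    print(f"mean prediction ability: {scores.mean():.3f}")
    ```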

  8. Systems biology approach to late-onset Alzheimer's disease genome-wide association study identifies novel candidate genes validated using brain expression data and Caenorhabditis elegans experiments.

    PubMed

    Mukherjee, Shubhabrata; Russell, Joshua C; Carr, Daniel T; Burgess, Jeremy D; Allen, Mariet; Serie, Daniel J; Boehme, Kevin L; Kauwe, John S K; Naj, Adam C; Fardo, David W; Dickson, Dennis W; Montine, Thomas J; Ertekin-Taner, Nilufer; Kaeberlein, Matt R; Crane, Paul K

    2017-10-01

    We sought to determine whether a systems biology approach may identify novel late-onset Alzheimer's disease (LOAD) loci. We performed gene-wide association analyses and integrated results with human protein-protein interaction data using network analyses. We performed functional validation on novel genes using a transgenic Caenorhabditis elegans Aβ proteotoxicity model and evaluated novel genes using brain expression data from people with LOAD and other neurodegenerative conditions. We identified 13 novel candidate LOAD genes outside chromosome 19. Of those, RNA interference knockdowns of the C. elegans orthologs of UBC, NDUFS3, EGR1, and ATP5H were associated with Aβ toxicity, and NDUFS3, SLC25A11, ATP5H, and APP were differentially expressed in the temporal cortex. Network analyses identified novel LOAD candidate genes. We demonstrated a functional role for four of these in a C. elegans model and found enrichment of differentially expressed genes in the temporal cortex. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  9. Challenges in predicting climate change impacts on pome fruit phenology

    NASA Astrophysics Data System (ADS)

    Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.

    2014-08-01

    Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed for the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate-perturbed conditions, four simulations were created to represent a wider range of species' physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period were common to most results. The relative dominance of the chill or growth component tended to determine whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions from this method indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations about using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate-perturbed conditions, and that further model development is needed.
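
    The contrast between the two model classes can be made concrete with a toy sketch: a fixed thermal time model accumulates heat units only, whereas the sequential model must first satisfy a chill requirement. The thresholds and requirements below are illustrative, not calibrated values for any variety:

    ```python
    def full_bloom_hour(hourly_temps, chill_req=800.0, growth_req=5000.0,
                        chill_max=7.2, growth_base=4.5):
        """Toy sequential chill-growth model returning the hour of full bloom."""
        chill, growth = 0.0, 0.0
        for hour, temp in enumerate(hourly_temps):
            if chill < chill_req:
                if 0.0 < temp <= chill_max:             # one chill unit per qualifying hour
                    chill += 1.0
            else:
                growth += max(temp - growth_base, 0.0)  # growing degree hours
                if growth >= growth_req:
                    return hour                         # predicted full bloom
        return None                                     # requirements never met
    ```

    Under warming, fewer hours qualify for chill accumulation (lengthening the chill period) while growth units accumulate faster afterwards (contracting the growth period), which is exactly the tension the abstract describes.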

  10. Cosmological reconstruction and Om diagnostic analysis of Einstein-Aether theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqua, Antonio; Chattopadhyay, Surajit; Momeni, Davood

    In this paper, we analyze the cosmological models in Einstein-Aether gravity, which is a modified theory of gravity in which a time-like vector field breaks the Lorentz symmetry. We use this formalism to analyze different cosmological models with different behavior of the scale factor. In this analysis, we use a certain functional dependence of the Dark Energy (DE) on the Hubble parameter H. It will be demonstrated that the Aether vector field has a non-trivial effect on these cosmological models. We also perform the Om diagnostic in Einstein-Aether gravity and we fit the parameters of the cosmological models using recent observational data.
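
    For reference, the Om diagnostic used in such analyses is built from the expansion history alone; with h(z) = H(z)/H_0,

    ```latex
    Om(z) = \frac{h^2(z) - 1}{(1+z)^3 - 1}
    ```

    For a flat \Lambda CDM background, Om(z) is constant and equal to \Omega_{m0}, so any departure from constancy diagnoses a departure from a cosmological constant.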

  11. A multilevel modelling approach to analysis of patient costs under managed care.

    PubMed

    Carey, K

    2000-07-01

    The growth of the managed care model of health care delivery in the USA has led to broadened interest in the performance of health care providers. This paper uses multilevel modelling to analyse the effects of managed care penetration on patient-level costs for a sample of 24 medical centres operated by the Veterans Health Administration (VHA). The appropriateness of a two-level approach to this problem over ordinary least squares (OLS) is demonstrated. Results indicate a modicum of difference in institutions' performance after controlling for patient effects. Facilities more heavily penetrated by the managed care model may be more effective at controlling costs of their sicker patients. Copyright 2000 John Wiley & Sons, Ltd.
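
    A minimal two-level random-intercept sketch in the spirit of the paper's approach; the variable names and synthetic data below are hypothetical, not the VHA records:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "cost": rng.lognormal(8, 1, 2400),          # patient-level treatment cost
        "severity": rng.normal(0, 1, 2400),         # patient case-mix measure
        "managed_care": rng.uniform(0, 1, 2400),    # centre-level penetration
        "centre": rng.integers(0, 24, 2400),        # 24 medical centres
    })
    # The random intercept for each centre captures institution-level variation
    # that a single-level OLS fit would confound with patient-level effects.
    fit = smf.mixedlm("np.log(cost) ~ severity + managed_care",
                      data=df, groups=df["centre"]).fit()
    print(fit.summary())
    ```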

  12. 76 FR 24831 - Site-Specific Analyses for Demonstrating Compliance With Subpart C Performance Objectives

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-03

    ... available under ADAMS accession number ML111040419, and the ``Technical Analysis Supporting Definition of... NUCLEAR REGULATORY COMMISSION 10 CFR Part 61 RIN 3150-AI92 [NRC-2011-0012] Site-Specific Analyses...-level radioactive waste disposal facilities to conduct site-specific analyses to demonstrate compliance...

  13. "Should I or shouldn't I?" Imitation of undesired versus allowed actions from peer and adult models by 18- and 24-month-old toddlers.

    PubMed

    Seehagen, Sabine; Schneider, Silvia; Miebach, Kristin; Frigge, Katharina; Zmyj, Norbert

    2017-11-01

    Imitation is a common way of acquiring novel behaviors in toddlers. However, little is known about toddlers' imitation of undesired actions. Here we investigated 18- and 24-month-olds' (N=110) imitation of undesired and allowed actions from televised peer and adult models. Permissiveness of the demonstrated actions was indicated by the experimenter's response to their execution (angry or neutral). Analyses revealed that toddlers' imitation scores were higher after demonstrations of allowed versus undesired actions, regardless of the age of the model. In agreement with prior research, these results suggest that third-party reactions to a model's actions can be a powerful cue for toddlers to engage in or refrain from imitation. In the context of the present study, third-party reactions were more influential on imitation than the model's age. Considering the relative influence of different social cues for imitation can help to gain a fuller understanding of early observational learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Phenotypic outcomes in Mouse and Human Foxc1 dependent Dandy-Walker cerebellar malformation suggest shared mechanisms.

    PubMed

    Haldipur, Parthiv; Dang, Derek; Aldinger, Kimberly A; Janson, Olivia K; Guimiot, Fabien; Adle-Biasette, Homa; Dobyns, William B; Siebert, Joseph R; Russo, Rosa; Millen, Kathleen J

    2017-01-16

    FOXC1 loss contributes to Dandy-Walker malformation (DWM), a common human cerebellar malformation. Previously, we found that complete Foxc1 loss leads to aberrations in proliferation, neuronal differentiation and migration in the embryonic mouse cerebellum (Haldipur et al., 2014). We now demonstrate that hypomorphic Foxc1 mutant mice have granule and Purkinje cell abnormalities causing subsequent disruptions in postnatal cerebellar foliation and lamination. Particularly striking is the presence of a partially formed posterior lobule which echoes the posterior vermis DW 'tail sign' observed in human imaging studies. Lineage tracing experiments in Foxc1 mutant mouse cerebella indicate that aberrant migration of granule cell progenitors destined to form the posterior-most lobule causes this unique phenotype. Analyses of rare human del chr 6p25 fetal cerebella demonstrate extensive phenotypic overlap with our Foxc1 mutant mouse models, validating our DWM models and demonstrating that many key mechanisms controlling cerebellar development are likely conserved between mouse and human.

  15. Receding horizon online optimization for torque control of gasoline engines.

    PubMed

    Kang, Mingxin; Shen, Tielong

    2016-11-01

    This paper proposes a model-based nonlinear receding horizon optimal control scheme for the engine torque tracking problem. The controller design directly employs a nonlinear model built on mean-value engine modeling principles, without any linearizing reformulation, and the online optimization is achieved by applying the Continuation/GMRES (generalized minimum residual) approach. Several receding horizon control schemes are designed to investigate the effects of the integral action and integral gain selection. Simulation analyses and experimental validations demonstrate the real-time optimization performance and control effects of the proposed torque tracking controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
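
    A generic receding-horizon loop conveys the control structure; here scipy's general-purpose optimizer stands in for the Continuation/GMRES solver, and the one-state torque model is purely illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def step(x, u, dt=0.02):
        return x + dt * (-0.5 * x + u)              # toy first-order torque dynamics

    def horizon_cost(u_seq, x0, ref, dt=0.02):
        x, cost = x0, 0.0
        for u in u_seq:
            x = step(x, u, dt)
            cost += (x - ref) ** 2 + 1e-3 * u ** 2  # tracking error + control effort
        return cost

    x, ref, N = 0.0, 50.0, 10                       # state, torque target, horizon
    u_warm = np.zeros(N)
    for _ in range(100):                            # closed-loop simulation
        res = minimize(horizon_cost, u_warm, args=(x, ref))
        x = step(x, res.x[0])                       # apply only the first input
        u_warm = np.roll(res.x, -1)                 # warm-start the next solve
    print(f"tracked torque after 2 s: {x:.2f}")
    ```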

  16. Micromechanics based simulation of ductile fracture in structural steels

    NASA Astrophysics Data System (ADS)

    Yellavajjala, Ravi Kiran

    The broader aim of this research is to develop a fundamental understanding of the ductile fracture process in structural steels, propose robust computational models to quantify the associated damage, and provide numerical tools to simplify the implementation of these computational models into a general finite element framework. Mechanical testing on different geometries of test specimens made of ASTM A992 steels is conducted to experimentally characterize ductile fracture at different stress states under monotonic and ultra-low cycle fatigue (ULCF) loading. Scanning electron microscopy studies of the fractured surfaces are conducted to decipher the underlying microscopic damage mechanisms that cause fracture in ASTM A992 steels. Detailed micromechanical analyses for monotonic and cyclic loading are conducted to understand the influence of stress triaxiality and Lode parameter on the void growth phase of ductile fracture. Based on the monotonic analyses, an uncoupled micromechanical void growth model is proposed to predict ductile fracture. This model is then incorporated into a finite element program as a weakly coupled model to simulate the loss of load carrying capacity in the post-microvoid-coalescence regime at high triaxialities. Based on the cyclic analyses, an uncoupled micromechanics-based cyclic void growth model is developed to predict the ULCF life of ASTM A992 steels subjected to high stress triaxialities. Furthermore, a computational fracture locus for ASTM A992 steels is developed and incorporated into a finite element program as an uncoupled ductile fracture model. This model can be used to predict ductile fracture initiation under monotonic loading in a wide range of triaxialities and Lode parameters. Finally, a coupled microvoid elongation and dilation based continuum damage model is proposed, implemented, calibrated and validated. This model is capable of simulating the local softening caused by the various phases of the ductile fracture process under monotonic loading for a wide range of stress states. Novel differentiation procedures based on complex analyses, along with existing finite difference methods and automatic differentiation, are extended using perturbation techniques to evaluate tensor derivatives. These tensor differentiation techniques are then used to automate nonlinear constitutive models into an implicit finite element framework. Finally, the efficiency of these automation procedures is demonstrated using benchmark problems.
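
    The "differentiation procedures based on complex analyses" referred to above are commonly implemented as the complex-step derivative, sketched here on an arbitrary scalar test function (the tensor case follows the same idea component-wise):

    ```python
    import numpy as np

    def complex_step_derivative(f, x, h=1e-20):
        # No subtractive cancellation occurs, so h can be tiny and the
        # derivative is accurate to roughly machine precision.
        return np.imag(f(x + 1j * h)) / h

    f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
    print(complex_step_derivative(f, 1.5))
    ```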

  17. Identifying drivers of divergent methane fluxes from restored wetlands

    NASA Astrophysics Data System (ADS)

    Chamberlain, S.; Silver, W. L.; Anthony, T.; Hemes, K. S.; Oikawa, P.; Sturtevant, C.; Eichelmann, E.; Matthes, J. H.; Verfaillie, J. G.; Baldocchi, D. D.

    2017-12-01

    Restored wetlands in the Sacramento-San Joaquin Delta region of California are created, and actively managed, to reduce land subsidence and greenhouse gas (GHG) emissions from drained peatland agriculture. While these wetlands tend to be carbon sinks on a year-to-year basis, variation in methane (CH4) emissions determines whether sites are GHG sources or sinks. Two probable sources of CH4 flux variation across restored wetlands are soil carbon and iron content. These soil properties vary across the region and are a legacy of pre-drainage wetland geomorphology, where alluvium wetlands were mineral rich and carbon poor compared to adjacent peat-accumulating sites. We explored drivers of CH4 flux variation from three restored wetlands using eddy covariance, data-driven analyses, and biogeochemical modeling to evaluate alternative hypotheses for observed flux differences. We observed significantly reduced annual CH4 fluxes from wetlands restored on alluvium soils compared to peat sites, and these differences were largest immediately following restoration and gradually diminished over the following three years. Model-based hypothesis testing demonstrates that long-term inhibition of methanogenesis by the presence of iron is the best explanation for these observations. Soil sampling conducted after four years of continuous inundation demonstrates significantly higher iron concentrations in the alluvium soils, of which 25-30% was in an oxidized form capable of inhibiting CH4 production. Using information theory and wavelet analyses, we also demonstrate that CH4 fluxes from the alluvium wetland were decoupled from plant photosynthesis and transport at diel to multiday timescales, as expected when iron reduction inhibits rhizosphere methanogenesis. These findings demonstrate that iron can attenuate ecosystem-scale wetland CH4 fluxes, and they provide a basis for choosing future wetland restoration sites to minimize CH4 emissions.

  18. Modeling the Gulf Stream System: How Far from Reality?

    NASA Technical Reports Server (NTRS)

    Choa, Yi; Gangopadhyay, Avijit; Bryan, Frank O.; Holland, William R.

    1996-01-01

    Analyses of a primitive equation ocean model simulation of the Atlantic Ocean circulation at 1/6 deg horizontal resolution are presented with a focus on the Gulf Stream region. Among the many successful features of this simulation, this letter describes the Gulf Stream separation from the coast of North America near Cape Hatteras, meandering of the Gulf Stream between Cape Hatteras and the Grand Banks, and the vertical structure of temperature and velocity associated with the Gulf Stream. These results demonstrate significant improvement in modeling the Gulf Stream system using basin- to global-scale ocean general circulation models. Possible reasons for the realistic Gulf Stream simulation are discussed, contrasting the major differences between the present model configuration and those of previous eddy-resolving studies.

  19. Regression Model for Light Weight and Crashworthiness Enhancement Design of Automotive Parts in Frontal Car Crash

    NASA Astrophysics Data System (ADS)

    Bae, Gihyun; Huh, Hoon; Park, Sungho

    This paper deals with a regression model for light weight and crashworthiness enhancement design of automotive parts in a frontal car crash. The ULSAB-AVC model is employed for the crash analysis, and effective parts are selected based on the amount of energy absorbed during the crash. Finite element analyses are carried out for the designated design cases in order to investigate the crashworthiness and weight according to the material and thickness of the main energy absorption parts. Based on the simulation results, a regression analysis is performed to construct a regression model for light weight and crashworthiness enhancement design of automotive parts. An example of weight reduction of the main energy absorption parts demonstrates the validity of the constructed regression model.

  20. The comparative hydrodynamics of rapid rotation by predatory appendages.

    PubMed

    McHenry, M J; Anderson, P S L; Van Wassenbergh, S; Matthews, D G; Summers, A P; Patek, S N

    2016-11-01

    Countless aquatic animals rotate appendages through the water, yet fluid forces are typically modeled with translational motion. To elucidate the hydrodynamics of rotation, we analyzed the raptorial appendages of mantis shrimp (Stomatopoda) using a combination of flume experiments, mathematical modeling and phylogenetic comparative analyses. We found that computationally efficient blade-element models offered an accurate first-order approximation of drag, when compared with a more elaborate computational fluid-dynamic model. Taking advantage of this efficiency, we compared the hydrodynamics of the raptorial appendage in different species, including a newly measured spearing species, Coronis scolopendra. The ultrafast appendages of a smasher species (Odontodactylus scyllarus) were an order of magnitude smaller, yet experienced values of drag-induced torque similar to those of a spearing species (Lysiosquillina maculata). The dactyl, a stabbing segment that can be opened at the distal end of the appendage, generated substantial additional drag in the smasher, but not in the spearer, which uses the segment to capture evasive prey. Phylogenetic comparative analyses revealed that larger mantis shrimp species strike more slowly, regardless of whether they smash or spear their prey. In summary, drag was minimally affected by shape, whereas size, speed and dactyl orientation dominated and differentiated the hydrodynamic forces across species and sizes. This study demonstrates the utility of simple mathematical modeling for comparative analyses and illustrates the multi-faceted consequences of drag during the evolutionary diversification of rotating appendages. © 2016. Published by The Company of Biologists Ltd.
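
    A blade-element drag estimate of the kind found accurate here reduces to integrating strip drag along the span: each strip at radius r moves at speed omega*r. The geometry and coefficients below are placeholders, not the measured appendages:

    ```python
    import numpy as np

    def drag_torque(omega, span=0.02, chord=0.005, Cd=1.5, rho=1000.0, n=200):
        r = np.linspace(0.0, span, n)                    # radial stations (m)
        dF = 0.5 * rho * Cd * chord * (omega * r) ** 2   # drag per unit span (N/m)
        return np.trapz(dF * r, r)                       # torque = integral of r dF (N m)

    print(drag_torque(omega=400.0))                      # e.g. a fast appendage rotation
    ```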

  1. Teen Dating Violence, Sexual Harassment, and Bullying Among Middle School Youth: Examining Measurement Invariance by Gender.

    PubMed

    Cutbush, Stacey; Williams, Jason

    2016-12-01

    This study investigated measurement invariance by gender among commonly used teen dating violence (TDV), sexual harassment, and bullying measures. Data were collected from one cohort of seventh-grade middle school students (N = 754) from four schools. Using structural equation modeling, exploratory and confirmatory factor analyses assessed measurement models and tested measurement invariance by gender for aggression measures. Analyses used baseline data only. Physical and psychological TDV perpetration measures achieved strict measurement invariance, while bullying perpetration demonstrated partial strict invariance. Electronic TDV and sexual harassment perpetration achieved metric/scalar invariance. Study findings lend validation to prior and future studies using these measures with similar populations. Future research should increase attention to measurement development, refinement, and testing of study measures. © 2016 The Authors. Journal of Research on Adolescence © 2016 Society for Research on Adolescence.

  2. An Approximate Dissipation Function for Large Strain Rubber Thermo-Mechanical Analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Chen, Tzi-Kang

    2003-01-01

    Mechanically induced viscoelastic dissipation is difficult to compute. When the constitutive model is defined by history integrals, the formula for dissipation is a double convolution integral. Since double convolution integrals are difficult to approximate, coupled thermo-mechanical analyses of highly viscous rubber-like materials cannot be made with most commercial finite element software. In this study, we present a method to approximate the dissipation for history integral constitutive models that represent Maxwell-like materials without approximating the double convolution integral. The method requires that the total stress can be separated into elastic and viscous components, and that the relaxation form of the constitutive law is defined with a Prony series. Numerical data is provided to demonstrate the limitations of this approximate method for determining dissipation. Rubber cylinders with imbedded steel disks and with an imbedded steel ball are dynamically loaded, and the nonuniform heating within the cylinders is computed.
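
    For context, the Prony-series relaxation form the method requires is (notation assumed here):

    ```latex
    % Relaxation modulus with long-term modulus G_\infty and Maxwell elements (G_i, \tau_i):
    G(t) = G_\infty + \sum_{i=1}^{N} G_i \, e^{-t/\tau_i}
    ```

    Splitting the total stress into an elastic part (from G_\infty) and a viscous part (from the Maxwell terms) lets the dissipation rate be approximated from the work rate of the viscous stress alone, avoiding the double convolution integral.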

  3. Psychometric properties of the Liebowitz Social Anxiety Scale (LSAS) in a longitudinal study of African Americans with anxiety disorders.

    PubMed

    Beard, Courtney; Rodriguez, Benjamin F; Moitra, Ethan; Sibrava, Nicholas J; Bjornsson, Andri; Weisberg, Risa B; Keller, Martin B

    2011-06-01

    The Liebowitz Social Anxiety Scale (LSAS) is a widely used measure of social anxiety. However, no study has examined the psychometric properties of the LSAS in an African American sample. The current study examined the LSAS characteristics in 97 African Americans diagnosed with an anxiety disorder. Overall, the original LSAS subscales showed excellent internal consistency and temporal stability. Similar to previous reports, fear and avoidance subscales were so highly correlated that they yielded redundant information. Confirmatory factor analyses for three previously proposed models failed to demonstrate an excellent fit to our data. However, a four-factor model showed minimally acceptable fit. Overall, the LSAS performed similarly in our African American sample as in previous European American samples. Exploratory factor analyses are warranted to determine whether a better factor structure exists for African Americans. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Mortality rates in OECD countries converged during the period 1990-2010.

    PubMed

    Bremberg, Sven G

    2017-06-01

    Since the scientific revolution of the 18th century, human health has gradually improved, but there is no unifying theory that explains this improvement in health. Studies of macrodeterminants have produced conflicting results. Most studies have analysed health at a given point in time as the outcome; however, the rate of improvement in health might be a more appropriate outcome. Twenty-eight OECD member countries were selected for analysis in the period 1990-2010. The main outcomes studied, in six age groups, were the national rates of decrease in mortality in the period 1990-2010. The effects of seven potential determinants on the rates of decrease in mortality were analysed in linear multiple regression models using least squares, controlling for country-specific history constants, which represent the mortality rate in 1990. The multiple regression analyses started with models that only included mortality rates in 1990 as determinants. These models explained 87% of the intercountry variation among children aged 1-4 years and 51% among adults aged 55-74 years. When added to the regression equations, the seven determinants did not seem to significantly increase the explanatory power of the equations. The analyses indicated a decrease in mortality in all nations and in all age groups. The development of mortality rates in the different nations demonstrated significant catch-up effects. An important objective for the national public health sector therefore seems to be to reduce the delay between international research findings and the universal implementation of relevant innovations.

  5. Low Social Status Markers: Do They Predict Depressive Symptoms in Adolescence?

    PubMed

    Jackson, Benita; Goodman, Elizabeth

    2011-07-01

    Some markers of social disadvantage are associated robustly with depressive symptoms among adolescents: female gender and lower socioeconomic status (SES), respectively. Others are associated equivocally, notably Black v. White race/ethnicity. Few studies examine whether markers of social disadvantage by gender, SES, and race/ethnicity jointly predict self-reported depressive symptoms during adolescence; this was our goal. Secondary analyses were conducted on data from a socioeconomically diverse community-based cohort study of non-Hispanic Black and White adolescents (N = 1,263, 50.4% female). Multivariable general linear models tested if female gender, Black race/ethnicity, and lower SES (assessed by parent education and household income), and their interactions predicted greater depressive symptoms reported on the Center for Epidemiological Studies-Depression scale. Models adjusted for age and pubertal status. Univariate analyses revealed more depressive symptoms in females, Blacks, and participants with lower SES. Multivariable models showed females across both racial/ethnic groups reported greater depressive symptoms; Blacks demonstrated more depressive symptoms than did Whites but when SES was included this association disappeared. Exploratory analyses suggested Blacks gained less mental health benefit from increased SES. However there were no statistically significant interactions among gender, race/ethnicity, or SES. Taken together, we conclude that complex patterning among low social status domains within gender, race/ethnicity, and SES predicts depressive symptoms among adolescents.

  6. Reciprocal Markov Modeling of Feedback Mechanisms Between Emotion and Dietary Choice Using Experience-Sampling Data.

    PubMed

    Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H

    2015-01-01

    With intensively collected longitudinal data, recent advances in the experience-sampling method (ESM) benefit social science empirical research, but also pose important methodological challenges. As traditional statistical models are not generally well equipped to analyze a system of variables that contains feedback loops, this paper proposes an extended hidden Markov model to capture the reciprocal relationship between momentary emotion and eating behavior. This paper revisited an ESM data set (Lu, Huet, & Dube, 2011) that observed 160 participants' food consumption and momentary emotions 6 times per day over 10 days. Focusing on the feedback loop between mood and meal-healthiness decisions, the proposed reciprocal Markov model (RMM) can accommodate both hidden states ("general" emotional states: positive vs. negative) and observed states (meal: healthier, same or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrated the reciprocal chains of meal consumption and mood, as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters.
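
    For orientation, a plain two-state hidden Markov forward pass over three meal categories is sketched below; the paper's reciprocal model extends this by letting the observed behavior feed back into the hidden-state transitions. All probabilities here are invented:

    ```python
    import numpy as np

    pi = np.array([0.6, 0.4])         # P(initial hidden mood state: positive, negative)
    A = np.array([[0.8, 0.2],         # mood-to-mood transition matrix
                  [0.3, 0.7]])
    B = np.array([[0.5, 0.3, 0.2],    # P(meal | mood): healthier / same / less healthy
                  [0.2, 0.3, 0.5]])

    def log_likelihood(obs):
        """Scaled forward algorithm for a sequence of observed meal categories."""
        alpha = pi * B[:, obs[0]]
        ll = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            ll += np.log(alpha.sum())   # rescale each step to avoid underflow
            alpha /= alpha.sum()
        return ll

    print(log_likelihood([0, 1, 2, 0, 0]))  # five observed meals
    ```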

  7. Time Reversal Method for Pipe Inspection with Guided Wave

    NASA Astrophysics Data System (ADS)

    Deng, Fei; He, Cunfu; Wu, Bin

    2008-02-01

    The temporal-spatial focusing effect of the time reversal method for guided wave inspection in pipes is investigated. A steel pipe model with an outer diameter of 70 mm and a wall thickness of 3.5 mm is built numerically to analyse the reflection coefficient of the L(0,2) mode when the time reversal method is applied in the model. The calculated results show that a synthetic time reversal array method is effective in improving the signal-to-noise ratio of a guided wave inspection system. As the intercepting window is widened, more energy can be included in a re-emitted signal, which leads to a larger reflection coefficient of the L(0,2) mode. It is also shown that when a time-reversed signal is re-applied to the pipe model, a defect can be identified by analysing the motion of the time-reversed wave propagating along the pipe. The time reversal method can therefore be used to locate the circumferential position of a defect in a pipe. Finally, an experiment corresponding to the pipe model confirms that the method is valid for the inspection of pipes.

  8. Statistical modelling for recurrent events: an application to sports injuries

    PubMed Central

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-01-01

    Background Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. Objective This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Methods Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. Results The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Conclusions Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. PMID:22872683
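
    As an illustration of the counting-process data layout these generalisations share, an Andersen-Gill-style fit can be run with the lifelines package; the injury intervals below are invented, and frailty terms would need additional machinery:

    ```python
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    df = pd.DataFrame({
        "player": [1, 1, 2, 3, 3, 3],
        "start":  [0, 5, 0, 0, 8, 14],              # (start, stop] match-week intervals
        "stop":   [5, 29, 29, 8, 14, 29],
        "event":  [1, 0, 0, 1, 1, 0],               # 1 = an injury ends the interval
        "load":   [3.1, 2.4, 1.9, 4.0, 3.8, 2.2],   # hypothetical training-load covariate
    })
    ctv = CoxTimeVaryingFitter().fit(df, id_col="player", event_col="event",
                                     start_col="start", stop_col="stop")
    ctv.print_summary()
    ```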

  9. Live Imaging-Based Model Selection Reveals Periodic Regulation of the Stochastic G1/S Phase Transition in Vertebrate Axial Development

    PubMed Central

    Kurokawa, Hiroshi; Sakaue-Sawano, Asako; Imamura, Takeshi; Miyawaki, Atsushi; Iimura, Tadahiro

    2014-01-01

    In multicellular organism development, a stochastic cellular response is observed, even when a population of cells is exposed to the same environmental conditions. Retrieving the spatiotemporal regulatory mode hidden in the heterogeneous cellular behavior is a challenging task. The G1/S transition observed in cell cycle progression is a highly stochastic process. By taking advantage of a fluorescence cell cycle indicator, Fucci technology, we aimed to unveil a hidden regulatory mode of cell cycle progression in developing zebrafish. Fluorescence live imaging of Cecyil, a zebrafish line genetically expressing Fucci, demonstrated that newly formed notochordal cells from the posterior tip of the embryonic mesoderm exhibited the red (G1) fluorescence signal in the developing notochord. Prior to their initial vacuolation, these cells showed a fluorescence color switch from red to green, indicating G1/S transitions. This G1/S transition did not occur in a synchronous manner, but rather exhibited a stochastic process, since a mixed population of red and green cells was always inserted between newly formed red (G1) notochordal cells and vacuolating green cells. We termed this mixed population of notochordal cells the G1/S transition window. We first performed quantitative analyses of live imaging data and a numerical estimation of the probability of the G1/S transition, which demonstrated the existence of a posteriorly traveling regulatory wave of the G1/S transition window. To obtain a better understanding of this regulatory mode, we constructed a mathematical model and performed a model selection by comparing the results obtained from the models with those from the experimental data. Our analyses demonstrated that the stochastic G1/S transition window in the notochord travels posteriorly in a periodic fashion, with double the periodicity of the neighboring paraxial mesoderm segmentation. This approach may have implications for the characterization of the pathophysiological tissue growth mode. PMID:25474567

  10. Incorrect likelihood methods were used to infer scaling laws of marine predator search behaviour.

    PubMed

    Edwards, Andrew M; Freeman, Mervyn P; Breed, Greg A; Jonsen, Ian D

    2012-01-01

    Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that any resulting advice to managers of marine ecosystems would be problematic. For reproducibility and future work we provide R source code for all calculations.
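
    The procedure the authors advocate, maximum likelihood on the actual probability densities compared via AIC, looks like this in outline (synthetic data; truncated-at-xmin forms assumed):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    xmin = 1.0
    x = xmin * (1.0 + rng.pareto(1.8, 1000))     # synthetic step lengths >= xmin

    # Maximum-likelihood fits of two candidate tail models truncated at xmin,
    # with likelihoods computed from the densities themselves, not from
    # regression residuals.
    n = len(x)
    mu_hat = 1.0 + n / np.sum(np.log(x / xmin))                  # power-law exponent
    ll_pow = n * np.log((mu_hat - 1) / xmin) - mu_hat * np.sum(np.log(x / xmin))
    lam_hat = 1.0 / np.mean(x - xmin)                            # exponential rate
    ll_exp = n * np.log(lam_hat) - lam_hat * np.sum(x - xmin)

    aic = {"power law": 2 - 2 * ll_pow, "exponential": 2 - 2 * ll_exp}
    print(min(aic, key=aic.get), aic)
    ```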

  11. A land-use and land-cover modeling strategy to support a national assessment of carbon stocks and fluxes

    USGS Publications Warehouse

    Sohl, Terry L.; Sleeter, Benjamin M.; Zhu, Zhiliang; Sayler, Kristi L.; Bennett, Stacie; Bouchard, Michelle; Reker, Ryan R.; Hawbaker, Todd J.; Wein, Anne M.; Liu, Shuguang; Kanengieter, Ronald L.; Acevedo, William

    2012-01-01

    Changes in land use, land cover, disturbance regimes, and land management have considerable influence on carbon and greenhouse gas (GHG) fluxes within ecosystems. Through targeted land-use and land-management activities, ecosystems can be managed to enhance carbon sequestration and mitigate fluxes of other GHGs. National-scale, comprehensive analyses of carbon sequestration potential by ecosystem are needed, with a consistent, nationally applicable land-use and land-cover (LULC) modeling framework a key component of such analyses. The U.S. Geological Survey has initiated a project to analyze current and projected future GHG fluxes by ecosystem and quantify potential mitigation strategies. We have developed a unique LULC modeling framework to support this work. Downscaled scenarios consistent with IPCC Special Report on Emissions Scenarios (SRES) were constructed for U.S. ecoregions, and the FORE-SCE model was used to spatially map the scenarios. Results for a prototype demonstrate our ability to model LULC change and inform a biogeochemical modeling framework for analysis of subsequent GHG fluxes. The methodology was then successfully used to model LULC change for four IPCC SRES scenarios for an ecoregion in the Great Plains. The scenario-based LULC projections are now being used to analyze potential GHG impacts of LULC change across the U.S.

  12. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    PubMed

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry. © 2016 Society of Chemical Industry.

  13. Simulating smoke transport from wildland fires with a regional-scale air quality model: sensitivity to spatiotemporal allocation of fire emissions.

    PubMed

    Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T

    2014-09-15

    Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were greatly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  15. Demonstration of reduced-order urban scale building energy models

    DOE PAGES

    Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...

    2017-09-08

    The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. The OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess the complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify the complexity of building energy models. Consequently, this study empowers building energy modelers to quantify their building energy models systematically, in order to report model complexity alongside model accuracy. An exhaustive analysis of four university campuses suggests that urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this developed framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified exchange of data. Altogether, the results of this study have implications for the large-scale modeling of buildings in support of urban energy consumption analyses or the assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.

  16. Results and analysis of saltstone cores taken from saltstone disposal unit cell 2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reigel, M. M.; Hill, K. A.

    2016-03-01

    As part of an ongoing Performance Assessment (PA) Maintenance Plan, Savannah River Remediation (SRR) has developed a sampling and analyses strategy to facilitate the comparison of field-emplaced samples (i.e., saltstone placed and cured in a Saltstone Disposal Unit (SDU)) with samples prepared and cured in the laboratory. The primary objectives of the Sampling and Analyses Plan (SAP) are: (1) to demonstrate a correlation between the measured properties of laboratory-prepared, simulant samples (termed Sample Set 3) and the field-emplaced saltstone samples (termed Sample Set 9), and (2) to validate property values assumed for the Saltstone Disposal Facility (SDF) PA modeling. The analysis and property data for Sample Set 9 (i.e., six core samples extracted from SDU Cell 2A (SDU2A)) are documented in this report, and where applicable, the results are compared to the results for Sample Set 3. Relevant properties to demonstrate the aforementioned objectives include bulk density, porosity, saturated hydraulic conductivity (SHC), and radionuclide leaching behavior.

  17. Dynamic intramolecular regulation of the histone chaperone nucleoplasmin controls histone binding and release

    DOE PAGES

    Warren, Christopher; Matsui, Tsutomu; Karp, Jerome M.; ...

    2017-12-20

    Nucleoplasmin (Npm) is a highly conserved histone chaperone responsible for the maternal storage and zygotic release of histones H2A/H2B. Npm contains a pentameric N-terminal core domain and an intrinsically disordered C-terminal tail domain. Though intrinsically disordered regions are common among histone chaperones, their roles in histone binding and chaperoning remain unclear. Using an NMR-based approach, here we demonstrate that the Xenopus laevis Npm tail domain controls the binding of histones at its largest acidic stretch (A2) via direct competition with both the C-terminal basic stretch and basic nuclear localization signal. NMR and small-angle X-ray scattering (SAXS) structural analyses allowed us to construct models of both the tail domain and the pentameric complex. Functional analyses demonstrate that these competitive intramolecular interactions negatively regulate Npm histone chaperone activity in vitro. Together these data establish a potentially generalizable mechanism of histone chaperone regulation via dynamic and specific intramolecular shielding of histone interaction sites.

  1. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-07-01

    We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
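
    The canonical example of this skewness: for a Gaussian field, a power-spectrum estimate built from \nu independent modes follows a scaled chi-squared distribution,

    ```latex
    \hat{C} \sim \frac{C}{\nu}\,\chi^2_\nu , \qquad
    \langle \hat{C} \rangle = C , \qquad \text{skewness} = \sqrt{8/\nu} ,
    ```

    so the mode lies below the mean and typical realisations scatter low, which is precisely the "biased low" behaviour described. Weak lensing, with few effective modes per data point, sits in the strongly skewed regime.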

  2. Integrated Primary Care Readiness and Behaviors Scale: Development and validation in behavioral health professionals.

    PubMed

    Blaney, Cerissa L; Redding, Colleen A; Paiva, Andrea L; Rossi, Joseph S; Prochaska, James O; Blissmer, Bryan; Burditt, Caitlin T; Nash, Justin M; Bayley, Keri Dotson

    2018-03-01

    Although integrated primary care (IPC) is growing, several barriers remain. Better understanding of behavioral health professionals' (BHPs') readiness for and engagement in IPC behaviors could improve IPC research and training. This study developed measures of IPC behaviors and stage of change. The sample included 319 licensed, practicing BHPs with a range of interests and experience with IPC. Sequential measurement development procedures, with split-half cross-validation were conducted. Exploratory principal components analyses (N = 152) and confirmatory factor analyses (N = 167) yielded a 12-item scale with 2 factors: consultation/practice management (CPM) and intervention/knowledge (IK). A higher-order Integrated Primary Care Behavior Scale (IPCBS) model showed good fit to the data, and excellent internal consistencies. The multivariate analysis of variance (MANOVA) on the IPCBS demonstrated significant large-sized differences across stage and behavior groups. The IPCBS demonstrated good psychometric properties and external validation, advancing research, education, and training for IPC practice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Nonlinear vibration of a hemispherical dome under external water pressure

    NASA Astrophysics Data System (ADS)

    Ross, C. T. F.; McLennan, A.; Little, A. P. F.

    2011-07-01

    The aim of this study was to analyse the behaviour of a hemispherical dome vibrated under external water pressure, using the commercial computer package ANSYS 11.0. To achieve this aim, the dome was modelled and vibrated in air and then in water, before finally being vibrated under external water pressure. The results collected during each of the analyses were compared with previous studies, demonstrating that ANSYS is suitable for this type of analysis and produces accurate results, together with excellent graphical displays. The analysis under external water pressure clearly demonstrated that as external water pressure was increased, the resonant frequencies decreased and a type of dynamic buckling became likely, because the static buckling eigenmode was similar to the vibration eigenmode. ANSYS compared favourably with the in-house software, but had the advantage of producing graphical displays. This also led to the identification of meridional modes of vibration that had gone undetected with the in-house software.

  4. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound.

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-04-01

    We derive the essentials of the skewed weak lensing likelihood via a simple Hierarchical Forward Model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of ΛCDM. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from CMB analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30% of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
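
    A minimal numerical sketch of the mechanism described above: for a Gaussian field, a measured band power is distributed like a scaled chi-squared variable, whose median lies below its mean. The mode count below is an illustrative stand-in for a low multipole or a sparsely sampled shear band, not a value from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      C_true = 1.0    # true band power (arbitrary units)
      n_modes = 9     # few modes per band -> strongly skewed estimates (illustrative)

      # Noisy two-point estimates: C_hat ~ C_true * chi2(n_modes) / n_modes
      draws = C_true * rng.chisquare(n_modes, size=100_000) / n_modes

      print(f"mean   = {draws.mean():.3f}")      # unbiased on average (~1.0)
      print(f"median = {np.median(draws):.3f}")  # below the mean: typical data sit low

    With few modes the median falls several per cent below the mean, so the one realization actually observed is more likely than not to lie below the ensemble average, which is the sense in which sound data are "biased low".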

  5. Cost-effectiveness of different strategies to manage patients with sciatica.

    PubMed

    Fitzsimmons, Deborah; Phillips, Ceri J; Bennett, Hayley; Jones, Mari; Williams, Nefyn; Lewis, Ruth; Sutton, Alex; Matar, Hosam E; Din, Nafees; Burton, Kim; Nafees, Sadia; Hendry, Maggie; Rickard, Ian; Wilkinson, Claire

    2014-07-01

    The aim of this paper is to estimate the relative cost-effectiveness of treatment regimens for managing patients with sciatica. A deterministic model structure was constructed based on findings from a systematic review of clinical effectiveness and cost-effectiveness, published sources of unit costs, and expert opinion. The assumption was that patients presenting with sciatica would be managed through one of three pathways (primary care, stepped approach, immediate referral to surgery). Results were expressed as incremental cost per patient with symptoms successfully resolved. The analysis also included incremental cost per unit of utility gained over a 12-month period. One-way sensitivity analyses were used to address uncertainty. The model demonstrated that none of the strategies resulted in 100% success. For initial treatments, the most successful regimen in the first pathway was nonopioids, with a probability of success of 0.613. In the second pathway, the most successful strategy was nonopioids, followed by biological agents, followed by epidural/nerve block and disk surgery, with a probability of success of 0.996. Pathway 3 (immediate surgery) was not cost-effective. Sensitivity analyses identified that using the highest cost estimates gives a similar overall picture. While the estimates of cost per quality-adjusted life year are higher, the economic model demonstrated that stepped approaches based on initial treatment with nonopioids are likely to represent the most cost-effective regimens for the treatment of sciatica. However, development of alternative economic modelling approaches is required.
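
    A minimal sketch of the incremental cost-effectiveness arithmetic underlying such a deterministic model. The success probabilities are the ones quoted above; the pathway costs are invented placeholders, not values from the study.

      # Hypothetical (cost, probability of success) per pathway; costs are made up.
      cost_primary, eff_primary = 400.0, 0.613     # primary care, nonopioids first
      cost_stepped, eff_stepped = 2500.0, 0.996    # stepped approach

      # Incremental cost per additional patient with symptoms resolved.
      icer = (cost_stepped - cost_primary) / (eff_stepped - eff_primary)
      print(f"ICER = {icer:.0f} (currency units per extra resolved case)")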

  6. Comprehensive multi-stage linkage analyses identify a locus for adult height on chromosome 3p in a healthy Caucasian population.

    PubMed

    Ellis, Justine A; Scurrah, Katrina J; Duncan, Anna E; Lamantia, Angela; Byrnes, Graham B; Harrap, Stephen B

    2007-04-01

    There have been a number of genome-wide linkage studies for adult height in recent years. These studies have yielded few well-replicated loci, and none have been further confirmed by the identification of associated gene variants. The inconsistent results may be attributable to the fact that few studies have combined accurate phenotype measures with informative statistical modelling in healthy populations. We have performed a multi-stage genome-wide linkage analysis for height in 275 adult sibling pairs drawn randomly from the Victorian Family Heart Study (VFHS), a healthy population-based Caucasian cohort. Height was carefully measured in a standardised fashion on regularly calibrated equipment. Following genome-wide identification of a peak Z-score of 3.14 on chromosome 3 at 69 cM, we performed a fine-mapping analysis of this region in an extended sample of 392 two-generation families. We used a number of variance components models that incorporated assortative mating and shared environment effects, and we observed a peak LOD score of approximately 3.5 at 78 cM in four of the five models tested. We also demonstrated that the most prevalent model in the literature gave the worst fit and the lowest LOD score (2.9), underlining the importance of appropriate modelling. The region identified in this study replicates the results of other genome-wide scans of height and bone-related phenotypes, strongly suggesting the presence of a gene important in bone growth on chromosome 3p. Association analyses of relevant candidate genes should identify the genetic variants responsible for the chromosome 3p linkage signal in our population.

  7. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model, with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it makes available all the well-developed modelling tools and extensions that come with generalized linear factor models. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real-data analyses to demonstrate the practical benefits of our approach.

  8. Examination of Triacylglycerol Biosynthetic Pathways via De Novo Transcriptomic and Proteomic Analyses in an Unsequenced Microalga

    PubMed Central

    Guarnieri, Michael T.; Nag, Ambarish; Smolinski, Sharon L.; Darzins, Al; Seibert, Michael; Pienkos, Philip T.

    2011-01-01

    Biofuels derived from algal lipids represent an opportunity to dramatically impact the global energy demand for transportation fuels. Systems biology analyses of oleaginous algae could greatly accelerate the commercialization of algal-derived biofuels by elucidating the key components involved in lipid productivity and leading to the initiation of hypothesis-driven strain-improvement strategies. However, higher-level systems biology analyses, such as transcriptomics and proteomics, are highly dependent upon available genomic sequence data, and the lack of these data has hindered the pursuit of such analyses for many oleaginous microalgae. In order to examine the triacylglycerol biosynthetic pathway in the unsequenced oleaginous microalga, Chlorella vulgaris, we have established a strategy with which to bypass the necessity for genomic sequence information by using the transcriptome as a guide. Our results indicate an upregulation of both fatty acid and triacylglycerol biosynthetic machinery under oil-accumulating conditions, and demonstrate the utility of a de novo assembled transcriptome as a search model for proteomic analysis of an unsequenced microalga. PMID:22043295

  9. On an interface of the online system for a stochastic analysis of the varied information flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorshenin, Andrey K.; Kuzmin, Victor Yu.

    The article describes a possible approach to the construction of an interface for an online asynchronous system that allows researchers to analyse varied information flows. The implemented stochastic methods are based on mixture models and the method of moving separation of mixtures. The general ideas of the system's functionality are demonstrated with an example involving the moments of a finite normal mixture.

  10. Structural basis for alpha fetoprotein-mediated inhibition of caspase-3 activity in hepatocellular carcinoma cells.

    PubMed

    Lin, Bo; Zhu, Mingyue; Wang, Wenting; Li, Wei; Dong, Xu; Chen, Yi; Lu, Yan; Guo, Junli; Li, Mengsen

    2017-10-01

    Alpha-fetoprotein (AFP) is an early serum growth factor in foetal liver development and hepatic carcinogenesis; however, the precise biological role of cytoplasmic AFP remains elusive. Although we recently demonstrated that cytoplasmic AFP might interact with caspase-3 and inhibit apoptotic signal transduction in human hepatocellular carcinoma (HCC) cells, the details of this interaction are not clear. To reveal the molecular relationship between AFP and caspase-3, we performed molecular docking, co-immunoprecipitation (Co-IP), laser confocal microscopy, site-directed mutagenesis and functional experiments to analyse the key amino acid residues in the binding site of caspase-3. The results of the Co-IP, laser confocal microscopy and functional analyses were consistent with the computational model. We also used the model to explain why AFP cannot bind to caspase-8. These results provide the molecular basis for the AFP-mediated inhibition of caspase-3 activity in HCC cells. Altogether, we found that AFP interacts with caspase-3 through precise amino acids, namely the loop-4 residues Glu-248, Asp-253 and His-257. The results further demonstrated that AFP plays a critical role in inhibiting the apoptotic signal transduction mediated by caspase-3. Thus, AFP might represent a novel biotarget for the therapy of HCC patients.

  11. Dispositional mindfulness moderates the effects of stress among adolescents: rumination as a mediator.

    PubMed

    Ciesla, Jeffrey A; Reilly, Laura C; Dickson, Kelsey S; Emanuel, Amber S; Updegraff, John A

    2012-01-01

    Recent research has demonstrated that higher levels of mindfulness are associated with greater psychological and physical health. However, the majority of this research has been conducted with adults; research is only beginning to examine the effects of mindfulness among adolescents. Moreover, research into adolescent mindfulness has typically conceptualized mindfulness as a unidimensional phenomenon and has not yet examined the multidimensional models of mindfulness that have emerged in the adult literature, and the mechanisms through which mindfulness influences these outcomes remain unclear. The present study examined the effects of three facets of mindfulness among adolescents. Seventy-eight adolescents (61% female, 94% Caucasian, mean age = 16) completed a measure of dispositional mindfulness at baseline. Participants then completed measures of daily stress, dysphoric affect, and state rumination over a 7-day period. Multilevel modeling analyses revealed that facets of mindfulness (i.e., nonreactivity and nonjudgment) were associated with lower levels of dysphoric mood. Mindfulness interacted with daily stress to predict later dysphoria; less mindful individuals were particularly vulnerable to the negative effects of stress. Finally, analyses demonstrated that this moderating effect of mindfulness on stress was significantly mediated by increases in daily rumination. These findings support the importance of mindfulness among adolescents and help to elucidate the mechanisms through which mindfulness influences psychological health.

  12. Health research needs more comprehensive accessibility measures: integrating time and transport modes from open data.

    PubMed

    Tenkanen, Henrikki; Saarsalmi, Perttu; Järv, Olle; Salonen, Maria; Toivonen, Tuuli

    2016-07-28

    In this paper, we demonstrate why and how both temporality and multimodality should be integrated in health-related studies that include an accessibility perspective, in this case healthy food accessibility. We provide evidence regarding the importance of using multimodal spatio-temporal accessibility measures when conducting research in urban contexts and propose a methodological approach for integrating different travel modes and temporality into spatial accessibility analyses. We use the Helsinki metropolitan area (Finland) as our case study region to demonstrate the effects of temporality and modality on the results. Spatial analyses were carried out on 250 m statistical grid squares. We measured travel times between the home locations of inhabitants and open grocery stores providing healthy food at 5 p.m., 10 p.m., and 1 a.m., using public transportation and private cars. We applied the so-called door-to-door approach for the travel time measurements to obtain more realistic and comparable results between travel modes. The analyses are based on open access data and publicly available open-source tools, so similar analyses can be conducted in urban regions worldwide. Our results show that both time and mode of transport have a prominent impact on the outcome of the analyses; the picture of accessibility in a city may thus look very different depending on how the analysis is set up. In terms of travel time, there is clear variation in the results at different times of the day. In terms of travel mode, our results show that, when analyzed in a comparable manner, public transport can be even faster than a private car for accessing healthy food, especially in central areas of the city where the service network is dense and the public transportation system is effective. This study demonstrates that time and transport modes are essential components when modeling health-related accessibility in urban environments. Neglecting them in spatial analyses may lead to overly simplified or even erroneous pictures of accessibility. Hence, there is a risk that health-related planning and decisions based on simplistic accessibility measures might cause unwanted outcomes in terms of inequality among different groups of people.

  13. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
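
    A minimal sketch of the sum of ranking differences (SRD) procedure, assuming the common choice of the row average as the reference ranking: each merit's ranks are compared with the reference ranks and the absolute differences summed, so a smaller SRD means closer to consensus.

      import numpy as np
      from scipy.stats import rankdata

      # Rows = models, columns = performance parameters (toy numbers).
      X = np.array([[0.91, 0.88, 0.95],
                    [0.85, 0.90, 0.80],
                    [0.70, 0.72, 0.75],
                    [0.60, 0.65, 0.55]])

      ref_ranks = rankdata(X.mean(axis=1))   # reference: ranks of the row averages

      for j in range(X.shape[1]):
          srd = np.abs(rankdata(X[:, j]) - ref_ranks).sum()
          print(f"performance parameter {j}: SRD = {srd:.0f}")  # 0 = identical ranking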

  14. Predictors of persistent pain after total knee arthroplasty: a systematic review and meta-analysis.

    PubMed

    Lewis, G N; Rice, D A; McNair, P J; Kluger, M

    2015-04-01

    Several studies have identified clinical, psychosocial, patient characteristic, and perioperative variables that are associated with persistent postsurgical pain; however, the relative effect of these variables has yet to be quantified. The aim of the study was to provide a systematic review and meta-analysis of predictor variables associated with persistent pain after total knee arthroplasty (TKA). Included studies were required to measure predictor variables prior to or at the time of surgery, include a pain outcome measure at least 3 months post-TKA, and include a statistical analysis of the effect of the predictor variable(s) on the outcome measure. Counts were undertaken of the number of times each predictor was analysed and the number of times it was found to have a significant relationship with persistent pain. Separate meta-analyses were performed to determine the effect size of each predictor on persistent pain. Outcomes from studies implementing uni- and multivariable statistical models were analysed separately. Thirty-two studies involving almost 30 000 patients were included in the review. Preoperative pain was the predictor that most commonly demonstrated a significant relationship with persistent pain across uni- and multivariable analyses. In the meta-analyses of data from univariate models, the largest effect sizes were found for: other pain sites, catastrophizing, and depression. For data from multivariate models, significant effects were evident for: catastrophizing, preoperative pain, mental health, and comorbidities. Catastrophizing, mental health, preoperative knee pain, and pain at other sites are the strongest independent predictors of persistent pain after TKA.

  15. Identifying food deserts and swamps based on relative healthy food access: a spatio-temporal Bayesian approach.

    PubMed

    Luan, Hui; Law, Jane; Quick, Matthew

    2015-12-30

    Obesity and other adverse health outcomes are influenced by individual- and neighbourhood-scale risk factors, including the food environment. At the small-area scale, past research has analysed spatial patterns of food environments for one time period, overlooking how food environments change over time. Further, past research has infrequently analysed relative healthy food access (RHFA), a measure that is more representative of food purchasing and consumption behaviours than absolute outlet density. This research applies a Bayesian hierarchical model to analyse the spatio-temporal patterns of RHFA in the Region of Waterloo, Canada, from 2011 to 2014 at the small-area level. RHFA is calculated as the proportion of healthy food outlets (healthy outlets/healthy + unhealthy outlets) within 4 km of each small-area. This model measures spatial autocorrelation of RHFA, the temporal trend of RHFA for the study region, and spatio-temporal trends of RHFA for small-areas. For the study region, a significant decreasing trend in RHFA is observed (-0.024), suggesting that food swamps became more prevalent during the study period. Significant decreasing temporal trends in RHFA were observed for all small-areas, with specific small-areas located in south Waterloo, north Kitchener, and southeast Cambridge exhibiting the steepest decreasing spatio-temporal trends; these are classified as spatio-temporal food swamps. This research demonstrates a Bayesian spatio-temporal modelling approach to analyse RHFA at the small-area scale. Results suggest that food swamps are more prevalent than food deserts in the Region of Waterloo. Analysing spatio-temporal trends of RHFA improves understanding of the local food environment, highlighting specific small-areas where policies should be targeted to increase RHFA and reduce risk factors of adverse health outcomes such as obesity.

  16. Linear control theory for gene network modeling.

    PubMed

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
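
    A minimal sketch of the transfer-function view described above, assuming a two-stage cascade in which each stage acts as a first-order low-pass element; the time constants and gain are illustrative, not taken from the paper's case studies.

      import numpy as np
      from scipy import signal

      tau1, tau2, gain = 2.0, 5.0, 1.0   # illustrative time constants and gain

      # Cascade form: G(s) = gain / ((tau1*s + 1)(tau2*s + 1))
      den = np.polymul([tau1, 1.0], [tau2, 1.0])
      cascade = signal.TransferFunction([gain], den)

      t, y = signal.step(cascade)           # time domain: transient behaviour
      w, mag, phase = signal.bode(cascade)  # frequency domain: magnitude and phase
      print(f"steady-state output ~ {y[-1]:.2f}; "
            f"attenuation at w = {w[-1]:.1f} rad/unit time: {mag[-1]:.1f} dB")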

  17. A fractal growth model: Exploring the connection pattern of hubs in complex networks

    NASA Astrophysics Data System (ADS)

    Li, Dongyan; Wang, Xingyuan; Huang, Penghe

    2017-04-01

    Fractality is ubiquitous in many real-world networks. Previous research showed that strong disassortativity between hub-nodes on all length scales is the key principle that gives rise to the fractal architecture of networks. Although fractal properties emerge in some models, there has been little research on fractal growth models or quantitative analyses of the strength of this disassortativity in fractal models. In this paper, we propose a novel inverse renormalization method, named Box-based Preferential Attachment (BPA), to build fractal growth models in which preferential attachment is performed at the box level. The proposed models provide a new framework that demonstrates the small-world-fractal transition. We also demonstrate, for the first time, the statistical characteristics of the connection patterns of hubs in fractal networks. The experimental results show that, given a proper growing scale and number of added edges, the proposed models can clearly exhibit pure small-world behaviour, pure fractality, or both. They also show that the hub connection ratio follows a normal distribution in many real-world networks. Finally, comparisons of connection patterns between the proposed models and biological and technical networks are performed. The results give a useful reference for exploring the growth principles of, and for modeling the connection patterns in, real-world networks.

  18. Boltzmann Energy-based Image Analysis Demonstrates that Extracellular Domain Size Differences Explain Protein Segregation at Immune Synapses

    PubMed Central

    Burroughs, Nigel J.; Köhler, Karsten; Miloserdov, Vladimir; Dustin, Michael L.; van der Merwe, P. Anton; Davis, Daniel M.

    2011-01-01

    Immune synapses formed by T and NK cells both show segregation of the integrin ICAM1 from other proteins such as CD2 (T cell) or KIR (NK cell). However, the mechanism by which these proteins segregate remains unclear; one key hypothesis is a redistribution based on protein size. Simulations of this mechanism qualitatively reproduce observed segregation patterns, but only in certain parameter regimes. Verifying that these parameter constraints in fact hold has not been possible to date, as this requires a quantitative coupling of theory to experimental data. Here, we address this challenge, developing a new methodology for analysing and quantifying image data and integrating it with biophysical models. Specifically, we fit a binding kinetics model to two-colour fluorescence data for cytoskeleton-independent synapses (2D and 3D) and test whether the observed inverse correlation between fluorophores conforms to size-dependent exclusion, and further, whether patterned states are predicted when model parameters are estimated on individual synapses. All synapses analysed satisfy these conditions, demonstrating that the mechanisms of protein redistribution have identifiable signatures in their spatial patterns. We conclude that energy processes implicit in protein-size-based segregation can drive the patterning observed in individual synapses, at least for the specific examples tested, such that no additional processes need to be invoked. This implies that biophysical processes within the membrane interface have a crucial impact on cell-cell communication and cell signalling, governing protein interactions and protein aggregation. PMID:21829338

  19. Predicting the size of individual and group differences on speeded cognitive tasks.

    PubMed

    Chen, Jing; Hale, Sandra; Myerson, Joel

    2007-06-01

    An a priori test of the difference engine model (Myerson, Hale, Zheng, Jenkins, & Widaman, 2003) was conducted using a large, diverse sample of individuals who performed three speeded verbal tasks and three speeded visuospatial tasks. Results demonstrated that, as predicted by the model, the group standard deviation (SD) on any task was proportional to the amount of processing required by that task. Both individual performances and those of fast and slow subgroups could be accurately predicted by the model using no free parameters, just an individual's or subgroup's mean z-score and the values of theoretical constructs estimated from fits to the group SDs. Taken together, these results are consistent with the post hoc analyses reported by Myerson et al. and provide even stronger supporting evidence. In particular, the ability to make quantitative predictions without any free parameters provides the clearest demonstration to date of the power of an analytic approach based on the difference engine.
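
    A minimal sketch of the parameter-free prediction logic: once the group SD of each task is tied to its processing demands, an individual's predicted score is the group mean plus that person's overall z-score times the task SD. All numbers below are illustrative, not the study's estimates.

      import numpy as np

      task_means = np.array([450.0, 700.0, 1100.0])  # group mean RTs (ms), illustrative
      processing = np.array([1.0, 2.0, 3.5])         # relative processing demands
      k = 40.0                                       # ms of SD per unit of processing
      task_sds = k * processing                      # model: group SD proportional to demand

      z_person = 1.2                                 # an individual's overall z-score
      predicted = task_means + z_person * task_sds   # no task-specific free parameters
      print(predicted)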

  20. Pediatric patient safety events during hospitalization: approaches to accounting for institution-level effects.

    PubMed

    Slonim, Anthony D; Marcin, James P; Turenne, Wendy; Hall, Matt; Joseph, Jill G

    2007-12-01

    To determine the rates and the patient and institutional characteristics associated with the occurrence of patient safety indicators (PSIs) in hospitalized children, and the degree of statistical difference arising from three approaches to controlling for institution-level effects. Pediatric Health Information System dataset consisting of all pediatric discharges (<21 years of age) from 34 academic, freestanding children's hospitals for calendar year 2003. The rates of PSIs were computed for all discharges, and the patient and institutional characteristics associated with these PSIs were calculated. The analyses sequentially applied three increasingly conservative methods to control for institution-level effects: robust standard error estimation, a fixed effects model, and a random effects model. The degree of difference from a "base state," which excluded institution-level variables, and between the models was calculated. The effects of these analyses on the interpretation of the PSIs are presented. PSIs are relatively infrequent events in hospitalized children, ranging from 0 per 10,000 (postoperative hip fracture) to 87 per 10,000 (postoperative respiratory failure). Significant variables associated with PSIs included age (neonates), race (Caucasian), payor status (public insurance), severity of illness (extreme), and hospital size (>300 beds), all of which had higher rates of PSIs than their reference groups in the bivariable logistic regression results. The three different approaches to adjusting for institution-level effects demonstrated similarities in both the clinical and statistical significance across the models. Institution-level effects can be appropriately controlled for using a variety of methods in analyses of administrative data. Whenever possible, resource-conservative methods should be used, especially if the clinical implications are minimal.

  1. A multi-state model for sick-leave data applied to a randomized control trial study of low back pain.

    PubMed

    Lie, Stein Atle; Eriksen, Hege R; Ursin, Holger; Hagen, Eli Molde

    2008-05-01

    Analysing and presenting data on different outcomes after sick-leave is challenging. The use of extended statistical methods supplies additional information and allows further exploitation of the data. Four hundred and fifty-seven patients, sick-listed for 8-12 weeks for low back pain, were randomized to intervention (n = 237) or control (n = 220). Outcome was measured as "sick-listed", "returned to work", or "disability pension". The individuals shifted between the three states between one and 22 times (mean 6.4 times). In a multi-state model, shifting between the states was set up in a transition intensity matrix, and the probability of being in any of the states was calculated as a transition probability matrix. The effects of the intervention were modelled using a non-parametric model. There was an effect of the intervention on leaving the state sick-listed and shifting to returned to work (relative risk (RR) = 1.27, 95% confidence interval (CI) 1.09-1.47). The non-parametric estimates showed an effect of the intervention on leaving sick-listed and shifting to returned to work in the first 6 months. We found a protective effect of the intervention against shifting back to sick-listed between 6 and 18 months. The analyses showed that the probability of staying in the state returned to work was not different between the intervention and control groups at the end of the follow-up (3 years). We demonstrate that these alternative analyses give additional results and increase the strength of the analyses. The simple intervention did not decrease the probability of being on sick-leave in the long term; however, it decreased the time that individuals were on sick-leave.
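
    A minimal sketch of the multi-state machinery, assuming constant transition intensities so that state-occupancy probabilities follow from the matrix exponential P(t) = exp(Qt); the intensity values below are invented for illustration.

      import numpy as np
      from scipy.linalg import expm

      # States: 0 = sick-listed, 1 = returned to work, 2 = disability pension.
      # Rows sum to zero, as required for a transition intensity matrix Q.
      Q = np.array([[-0.25,  0.20, 0.05],
                    [ 0.10, -0.10, 0.00],
                    [ 0.00,  0.00, 0.00]])  # disability pension treated as absorbing

      for months in (6, 36):
          P = expm(Q * months)   # transition probability matrix over `months`
          print(months, "months, starting sick-listed:", np.round(P[0], 3))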

  2. Regional coseismic landslide hazard assessment without historical landslide inventories: A new approach

    NASA Astrophysics Data System (ADS)

    Kritikos, Theodosios; Robinson, Tom R.; Davies, Tim R. H.

    2015-04-01

    Currently, regional coseismic landslide hazard analyses require comprehensive historical landslide inventories as well as detailed geotechnical data. Consequently, such analyses have not been possible where these data are not available. A new approach is proposed herein to assess coseismic landslide hazard at regional scale for specific earthquake scenarios in areas without historical landslide inventories. The proposed model employs fuzzy logic and geographic information systems to establish relationships between causative factors and coseismic slope failures in regions with well-documented and substantially complete coseismic landslide inventories. These relationships are then utilized to estimate the relative probability of landslide occurrence in regions with neither historical landslide inventories nor detailed geotechnical data. Statistical analyses of inventories from the 1994 Northridge and 2008 Wenchuan earthquakes reveal that shaking intensity, topography, and distance from active faults and streams are the main controls on the spatial distribution of coseismic landslides. Average fuzzy memberships for each factor are developed and aggregated to model the relative coseismic landslide hazard for both earthquakes. The predictive capabilities of the models are assessed and show good-to-excellent model performance for both events. These memberships are then applied to the 1999 Chi-Chi earthquake, using only a digital elevation model, active fault map, and isoseismal data, replicating prediction of a future event in a region lacking historic inventories and/or geotechnical data. This similarly results in excellent model performance, demonstrating the model's predictive potential and confirming it can be meaningfully applied in regions where previous methods could not. For such regions, this method may enable a greater ability to analyze coseismic landslide hazard from specific earthquake scenarios, allowing for mitigation measures and emergency response plans to be better informed of earthquake-related hazards.
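
    A minimal sketch of the fuzzy-membership step, assuming simple linear ramp membership functions and arithmetic-mean aggregation; the factor values and ranges are invented, not those calibrated on the Northridge and Wenchuan inventories.

      import numpy as np

      def ramp(x, lo, hi, increasing=True):
          """Linear fuzzy membership in [0, 1] over the interval [lo, hi]."""
          m = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
          return m if increasing else 1.0 - m

      # Causative factors for one grid cell (illustrative values and ranges).
      shaking = ramp(7.5, 5.0, 10.0)                      # shaking intensity
      slope = ramp(32.0, 10.0, 50.0)                      # slope angle (degrees)
      fault = ramp(4.0, 0.0, 20.0, increasing=False)      # km to active fault
      stream = ramp(1.5, 0.0, 5.0, increasing=False)      # km to nearest stream

      hazard = np.mean([shaking, slope, fault, stream])   # aggregated membership
      print(f"relative coseismic landslide hazard = {hazard:.2f}")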

  3. Introduction to bifactor polytomous item response theory analysis.

    PubMed

    Toland, Michael D; Sulis, Isabella; Giambona, Francesca; Porcu, Mariano; Campbell, Jonathan M

    2017-02-01

    A bifactor item response theory model can be used to aid in the interpretation of the dimensionality of a multifaceted questionnaire that assumes continuous latent variables underlying the propensity to respond to items. This model can be used to describe the locations of people on a general continuous latent variable as well as on continuous orthogonal specific traits that characterize responses to groups of items. The bifactor graded response (bifac-GR) model is presented in contrast to a correlated traits (or multidimensional GR model) and unidimensional GR model. Bifac-GR model specification, assumptions, estimation, and interpretation are demonstrated with a reanalysis of data (Campbell, 2008) on the Shared Activities Questionnaire. We also show the importance of marginalizing the slopes for interpretation purposes and we extend the concept to the interpretation of the information function. To go along with the illustrative example analyses, we have made available supplementary files that include command file (syntax) examples and outputs from flexMIRT, IRTPRO, R, Mplus, and STATA. Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.jsp.2016.11.001. Data needed to reproduce analyses in this article are available as supplemental materials (online only) in the Appendix of this article.

  4. Application of Diagnostic Analysis Tools to the Ares I Thrust Vector Control System

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Melcher, Kevin J.; Chicatelli, Amy K.; Johnson, Stephen B.

    2010-01-01

    The NASA Ares I Crew Launch Vehicle is being designed to support missions to the International Space Station (ISS), to the Moon, and beyond. The Ares I is undergoing design and development utilizing commercial-off-the-shelf tools and hardware when applicable, along with cutting-edge launch technologies and state-of-the-art design and development. In support of the vehicle's design and development, the Ares Functional Fault Analysis group was tasked to develop an Ares Vehicle Diagnostic Model (AVDM) and to demonstrate the capability of that model to support failure-related analyses and design integration. One important component of the AVDM is the Upper Stage (US) Thrust Vector Control (TVC) diagnostic model, a representation of the failure space of the US TVC subsystem. This paper first presents an overview of the AVDM, its development approach, and the software used to implement the model and conduct diagnostic analysis. It then uses the US TVC diagnostic model to illustrate details of the development, implementation, analysis, and verification processes. Finally, the paper describes how the AVDM model can impact both design and ground operations, and how some of these impacts are being realized during discussions of US TVC diagnostic analyses with US TVC designers.

  5. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    PubMed

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed, as opposed to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a method sensitive enough to detect cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  6. Computational strategies for alternative single-step Bayesian regression models with large numbers of genotyped and non-genotyped animals.

    PubMed

    Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J

    2016-12-08

    Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM), which fit breeding values explicitly, and marker effects models (MEM), which express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends upon the size of the pedigree, the number of genotyped animals and the number of loci. We present computational strategies that avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of them genotyped. We demonstrate the practicality of both the original MEM and the hybrid model using real data with 6,179,960 animals in the pedigree, 4,934,101 phenotypes and 31,453 animals genotyped at 40,214 informative loci. Completing a single-trait analysis on a desktop computer with four graphics cards required about 3 h with the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte Carlo (MCMC) samples of breeding values, which allowed inferences to be made from posterior means, variances and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. In summary, we present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes; its practicality and the considerable reduction in computing effort were demonstrated. This model can readily be extended to accommodate multiple traits, multiple breeds, maternal effects, and additional random effects such as polygenic residual effects.

  7. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan

    2002-01-01

    Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand et al., 1995; Yang et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.

  8. Frequency, thermal and voltage supercapacitor characterization and modeling

    NASA Astrophysics Data System (ADS)

    Rafik, F.; Gualous, H.; Gallay, R.; Crausaz, A.; Berthon, A.

    A simple electrical model has been established to describe supercapacitor behaviour as a function of frequency, voltage and temperature for hybrid vehicle applications. The electrical model consists of 14 RLC elements, which have been determined from experimental data using electrochemical impedance spectroscopy (EIS) applied on a commercial supercapacitor. The frequency analysis has been extended for the first time to the millihertz range to take into account the leakage current and the charge redistribution on the electrode. Simulation and experimental results of supercapacitor charge and discharge have been compared and analysed. A good correlation between the model and the EIS results has been demonstrated from 1 mHz to 1 kHz, from -20 to 60 °C and from 0 to 2.5 V.
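
    A minimal sketch of evaluating an equivalent-circuit impedance of this kind over the measured frequency range, assuming a series resistance and inductance feeding a few parallel RC branches; the element values and the three-branch topology are invented, standing in for the paper's fitted 14-element network.

      import numpy as np

      f = np.logspace(-3, 3, 200)    # 1 mHz .. 1 kHz, the range analysed above
      s = 1j * 2 * np.pi * f

      R_s, L_s = 0.5e-3, 50e-9       # series resistance (ohm) and inductance (H)
      branches = [(2e-3, 100.0), (5e-3, 400.0), (20e-3, 900.0)]  # (R, C) pairs

      Z = R_s + s * L_s
      for R, C in branches:
          Z += R / (1.0 + s * R * C)  # impedance of a parallel RC branch in series

      print(f"|Z| at 1 mHz: {abs(Z[0])*1e3:.2f} mOhm, "
            f"|Z| at 1 kHz: {abs(Z[-1])*1e3:.2f} mOhm")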

  9. Solar Power System Options for the Radiation and Technology Demonstration Spacecraft

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Haraburda, Francis M.; Riehl, John P.

    2000-01-01

    The Radiation and Technology Demonstration (RTD) Mission has the primary objective of demonstrating high-power (10 kilowatts) electric thruster technologies in Earth orbit. This paper discusses the conceptual design of the RTD spacecraft photovoltaic (PV) power system and mission performance analyses. These power system studies assessed multiple options for PV arrays, battery technologies and bus voltage levels. To quantify performance attributes of these power system options, a dedicated Fortran code was developed to predict power system performance and estimate system mass. The low-thrust mission trajectory was analyzed and important Earth orbital environments were modeled. Baseline power system design options are recommended on the basis of performance, mass and risk/complexity. Important findings from parametric studies are discussed and the resulting impacts to the spacecraft design and cost.

  10. [On the present situation in psychotherapy and its implications - A critical analysis of the facts].

    PubMed

    Tschuschke, Volker; Freyberger, Harald J

    2015-01-01

    The currently dominant research paradigm in evidence-based medicine is expounded and discussed with regard to the problems arising from so-called empirically supported treatments (ESTs) in psychology and psychotherapy. Prevalent political, economic and ideological backgrounds influence the present dominance of the medical model in psychotherapy by establishing the randomized controlled research design as the standard in the field. It has been demonstrated that randomized controlled trials (RCTs) are inadequate in psychotherapy research, not least because of the high complexity of psychotherapy and the relatively weak role of the treatment concept in the change process itself. All major meta-analyses show that the Dodo bird verdict is still alive, demonstrating that the medical model in psychotherapy with its RCT paradigm cannot explain the equivalence paradox. The medical model is inappropriate, so the contextual model is proposed as an alternative. Extensive process-outcome research is suggested as the only viable and reasonable way to identify the highly complex interactions between the many factors regularly involved in change processes in psychotherapy.

  11. Turbine blade forced response prediction using FREPS

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Morel, Michael R.

    1993-01-01

    This paper describes a software system called FREPS (Forced REsponse Prediction System) that integrates structural dynamic, steady aerodynamic and unsteady aerodynamic analyses to efficiently predict the forced-response dynamic stresses in axial-flow turbomachinery blades due to aerodynamic and mechanical excitations. A flutter analysis capability is also incorporated into the system. The FREPS system performs aeroelastic analysis by modeling the motion of the blade in terms of its normal modes. The structural dynamic analysis is performed by a finite element code such as MSC/NASTRAN. The steady aerodynamic analysis is based on nonlinear potential theory, and the unsteady aerodynamic analysis is based on linearization about the nonuniform potential mean flow. A description of the program and its capabilities is reported herein. The effectiveness of the FREPS package is demonstrated on the High Pressure Oxygen Turbopump turbine of the Space Shuttle Main Engine. Both flutter and forced response analyses are performed and typical results are illustrated.

  12. Modeling of Electrical Cable Failure in a Dynamic Assessment of Fire Risk

    NASA Astrophysics Data System (ADS)

    Bucknor, Matthew D.

    Fires at a nuclear power plant are a safety concern because of their potential to defeat the redundant safety features that provide a high level of assurance of the ability to safely shut down the plant. One of the added complexities of providing protection against fires is the need to determine the likelihood of electrical cable failure, which can lead to the loss of the ability to control, or the spurious actuation of, equipment that is required for safe shutdown. A number of plants are now transitioning from their deterministic fire protection programs to a risk-informed, performance-based fire protection program according to the requirements of National Fire Protection Association (NFPA) 805. Within a risk-informed framework, credit can be taken for the analysis of fire progression within a fire zone that was not permissible within the deterministic framework of a 10 CFR 50.48 Appendix R safe shutdown analysis. To perform the analyses required for the transition, plants need to be able to demonstrate with some level of assurance that cables related to safe shutdown equipment will not be compromised during postulated fire scenarios. This research develops new cable failure models that have the potential to more accurately predict electrical cable failure in common cable bundle configurations. Methods to determine the thermal properties of the new models from empirical data are presented, along with comparisons between the new models and existing techniques used in the nuclear industry today. A Dynamic Event Tree (DET) methodology is also presented, which allows for the proper treatment of uncertainties associated with fire brigade intervention and its effects on cable failure analysis. Finally, a shielding analysis is performed to determine the effects on the temperature response of a cable bundle that is shielded from a fire source by an intervening object such as another cable tray. The results from the analyses demonstrate that models of similar complexity to existing cable failure techniques, when tuned to empirical data, can better approximate the temperature response of cables located in tightly packed cable bundles. The new models also provide a way to determine the conditions inside a cable bundle, which allows for separate treatment of cables on the interior of the bundle and cables on the exterior of the bundle. The results from the DET analysis show that the overall assessed probability of cable failure can be significantly reduced by more realistically accounting for the influence that the fire brigade has on a fire progression scenario. The shielding analysis results demonstrate a significant reduction in the temperature response of a shielded versus a non-shielded cable bundle; however, the computational cost of using a fire progression model that can capture these effects may be prohibitive for performing DET analyses with currently available computational fluid dynamics models and computational resources.

  13. Fun with maths: exploring implications of mathematical models for malaria eradication.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A

    2014-12-11

    Mathematical analyses and modelling have an important role in informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simplifying assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and the benefits of using a diversity of model approaches. These examples include the time to eradication, the impact of vaccine efficacy and coverage, drug programs and the effects of duration of infections and delays to treatment, and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still achieve key results for eradication strategy and define areas for investigation by more complex models.

  14. Modelling of long-term and short-term mechanisms of arterial pressure control in the cardiovascular system: an object-oriented approach.

    PubMed

    Fernandez de Canete, J; Luque, J; Barbancho, J; Munoz, V

    2014-04-01

    A mathematical model that provides an overall description of both the short- and long-term mechanisms of arterial pressure regulation is presented. Short-term control is exerted through the baroreceptor reflex while renal elimination plays a role in long-term control. Both mechanisms operate in an integrated way over the compartmental model of the cardiovascular system. The whole system was modelled in MODELICA, which uses a hierarchical object-oriented modelling strategy, under the DYMOLA simulation environment. The performance of the controlled system was analysed by simulation in light of the existing hypothesis and validation tests previously performed with physiological data, demonstrating the effectiveness of both regulation mechanisms under physiological and pathological conditions.

  15. Confronting Alternative Cosmological Models with the Highest-Redshift Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Shafer, Daniel; Scolnic, Daniel; Riess, Adam

    2018-01-01

    High-redshift Type Ia supernovae (SNe Ia) from the HST CANDELS and CLASH programs significantly extend the Hubble diagram with 7 SNe at z > 1.5 suitable for cosmology, including one at z = 2.3. This unique leverage helps us distinguish "alternative" cosmological models from the standard Lambda-CDM model. Analyzing the Pantheon SN compilation, which includes these high-z SNe, we employ model comparison statistics to quantify the extent to which several proposed alternative expansion histories (e.g., empty universe, power law expansion, timescape cosmology) are disfavored even with SN Ia data alone. Using mock data, we demonstrate that some likelihood analyses used in the literature to support these models are sensitive to unrealistic assumptions and are therefore unsuitable for analysis of realistic SN Ia data.

  16. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
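
    A minimal sketch of how a FOSM-style chance constraint enters a linear program: the simulated constraint receives a one-sided safety margin of z(1 - risk) times its estimated standard deviation. The two-variable management problem and all numbers are invented, not PESTPP-OPT output.

      import numpy as np
      from scipy.optimize import linprog
      from scipy.stats import norm

      c = np.array([-1.0, -1.0])       # maximize total extraction (linprog minimizes)
      response = np.array([0.8, 1.2])  # simulated constraint response per unit decision
      sigma = 0.6                      # FOSM std. dev. of the simulated constraint
      limit, risk = 10.0, 0.05         # constraint bound; accepted violation risk

      # Chance constraint: response @ x + z_(1-risk) * sigma <= limit
      margin = norm.ppf(1.0 - risk) * sigma
      res = linprog(c, A_ub=[response], b_ub=[limit - margin],
                    bounds=[(0.0, 8.0), (0.0, 8.0)])
      print("decision variables:", np.round(res.x, 2), "objective:", -res.fun)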

  17. Can a toxin gene NAAT be used to predict toxin EIA and the severity of Clostridium difficile infection?

    PubMed

    Garvey, Mark I; Bradley, Craig W; Wilkinson, Martyn A C; Holden, Elisabeth

    2017-01-01

    Diagnosis of C. difficile infection (CDI) is controversial because of the many laboratory methods available and their limited ability to distinguish between carriage, mild disease and severe disease. Here we assess whether a low C. difficile toxin B nucleic acid amplification test (NAAT) cycle threshold (CT) can predict toxin EIA results, CDI severity and mortality. A three-stage algorithm was employed for CDI testing, comprising a screening test for glutamate dehydrogenase (GDH), followed by a NAAT, then a toxin enzyme immunoassay (EIA). All diarrhoeal samples positive for GDH and NAAT between 2012 and 2016 were analysed. The performance of the NAAT CT value as a classifier of toxin EIA outcome was analysed using a ROC curve; patient mortality was compared with CT values and toxin EIA results via linear regression models. A CT value ≤26 was associated with ≥72% toxin EIA positivity; applying a logistic regression model, we demonstrated an association between low CT values and toxin EIA positivity. A CT value of ≤26 was significantly associated (p = 0.0262) with increased one-month mortality, severe cases of CDI or failure of first-line treatment. The ROC curve probabilities indicated a CT cut-off value of 26.6. Here we demonstrate that a CT ≤26 indicates more severe CDI and is associated with higher mortality. Samples with a low CT value are often toxin EIA positive, questioning the need for this additional EIA test. A CT ≤26 could be used to assess the potential severity of CDI and guide patient treatment.
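
    A minimal sketch of deriving such a CT cut-off from paired NAAT CT values and toxin EIA outcomes. The data are simulated, and the Youden index is used here as one common way of selecting the operating point on the ROC curve.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(1)
      # Simulated CTs: toxin-EIA-positive samples tend to have lower values.
      ct = np.concatenate([rng.normal(24.0, 3.0, 300),    # EIA positive
                           rng.normal(30.0, 3.5, 300)])   # EIA negative
      toxin = np.concatenate([np.ones(300), np.zeros(300)])

      # A low CT predicts positivity, so score on the negated CT.
      fpr, tpr, thresholds = roc_curve(toxin, -ct)
      best = np.argmax(tpr - fpr)                         # Youden index
      print(f"AUC = {roc_auc_score(toxin, -ct):.2f}, "
            f"suggested cut-off: CT <= {-thresholds[best]:.1f}")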

  18. A variance-decomposition approach to investigating multiscale habitat associations

    USGS Publications Warehouse

    Lawler, J.J.; Edwards, T.C.

    2006-01-01

    The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales.
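
    A minimal sketch of the partitioning logic for two scales using ordinary R-squared: the unique component of each scale is the drop in fit when that scale is removed from the full model, and whatever remains of the joint fit is the shared (cross-scale) component. The data are simulated with deliberately correlated predictors.

      import numpy as np

      def r2(X, y):
          """R-squared of an ordinary least-squares fit with intercept."""
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          return 1.0 - (y - X1 @ beta).var() / y.var()

      rng = np.random.default_rng(2)
      local = rng.normal(size=500)                          # e.g. nest-site scale
      landscape = 0.7 * local + 0.7 * rng.normal(size=500)  # cross-scale correlation
      y = local + landscape + rng.normal(size=500)          # response

      full = r2(np.column_stack([local, landscape]), y)
      unique_local = full - r2(landscape[:, None], y)
      unique_landscape = full - r2(local[:, None], y)
      shared = full - unique_local - unique_landscape       # cross-scale component
      print(f"full={full:.2f}, unique_local={unique_local:.2f}, "
            f"unique_landscape={unique_landscape:.2f}, shared={shared:.2f}")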

  19. Evolution and taxonomic split of the model grass Brachypodium distachyon

    PubMed Central

    Catalán, Pilar; Müller, Jochen; Hasterok, Robert; Jenkins, Glyn; Mur, Luis A. J.; Langdon, Tim; Betekhtin, Alexander; Siwinska, Dorota; Pimentel, Manuel; López-Alvarez, Diana

    2012-01-01

    Background and Aims Brachypodium distachyon is being widely investigated across the world as a model plant for temperate cereals. This annual plant has three cytotypes (2n = 10, 20, 30) that are still regarded as part of a single species. Here, a multidisciplinary study has been conducted on a representative sampling of the three cytotypes to investigate their evolutionary relationships and origins, and to elucidate if they represent separate species. Methods Statistical analyses of 15 selected phenotypic traits were conducted in individuals from 36 lines or populations. Cytogenetic analyses were performed through flow cytometry, fluorescence in situ hybridization (FISH) with genomic (GISH) and multiple DNA sequences as probes, and comparative chromosome painting (CCP). Phylogenetic analyses were based on two plastid (ndhF, trnLF) and five nuclear (ITS, ETS, CAL, DGAT, GI) genes from different Brachypodium lineages, whose divergence times and evolutionary rates were estimated. Key Results The phenotypic analyses detected significant differences between the three cytotypes and demonstrated stability of characters in natural populations. Genome size estimations, GISH, FISH and CCP confirmed that the 2n = 10 and 2n = 20 cytotypes represent two different diploid taxa, whereas the 2n = 30 cytotype represents the allotetraploid derived from them. Phylogenetic analysis demonstrated that the 2n = 20 and 2n = 10 cytotypes emerged from two independent lineages that were, respectively, the maternal and paternal genome donors of the 2n = 30 cytotype. The 2n = 20 lineage was older and mutated significantly faster than the 2n = 10 lineage and all the core perennial Brachypodium species. Conclusions The substantial phenotypic, cytogenetic and molecular differences detected among the three B. distachyon sensu lato cytotypes are indicative of major speciation processes within this complex that allow their taxonomic separation into three distinct species. We have kept the name B. distachyon for the 2n = 10 cytotype and have described two novel species as B. stacei and B. hybridum for, respectively, the 2n = 20 and 2n = 30 cytotypes. PMID:22213013

  20. A chance-constrained stochastic approach to intermodal container routing problems.

    PubMed

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
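
    A minimal sketch of how a chance constraint of this kind can be checked by Monte Carlo simulation: the on-time-delivery probability of a candidate route is estimated and compared against the pre-specified service level. The route structure, distributions, and figures below are assumptions for illustration, not the paper's case-study data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Hypothetical route: a sea leg and a rail leg with stochastic travel times
# plus a stochastic transfer at the port (all times in hours)
sea_leg = rng.normal(120, 10, n_sim)
transfer = rng.exponential(6, n_sim)
rail_leg = rng.normal(30, 4, n_sim)
total = sea_leg + transfer + rail_leg

deadline, alpha = 170.0, 0.95   # on-time delivery required with prob >= alpha
p_on_time = np.mean(total <= deadline)
print(f"P(on-time) = {p_on_time:.3f}; chance constraint met: {p_on_time >= alpha}")
```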

  2. Quantitative precipitation forecasts in the Alps - an assessment from the Forecast Demonstration Project MAP D-PHASE

    NASA Astrophysics Data System (ADS)

    Ament, F.; Weusthoff, T.; Arpagaus, M.; Rotach, M.

    2009-04-01

    The main aim of the WWRP Forecast Demonstration Project MAP D-PHASE is to demonstrate the performance of today's models in forecasting heavy precipitation and flood events in the Alpine region. To this end, an end-to-end, real-time forecasting system was installed and operated during the D-PHASE Operations Period from June to November 2007. This system includes 30 numerical weather prediction models (deterministic as well as ensemble systems) operated by weather services and research institutes, which issue alerts if predicted precipitation accumulations exceed critical thresholds. In addition to the real-time alerts, all relevant model fields of these simulations are stored in a central data archive. This comprehensive data set allows a detailed assessment of today's quantitative precipitation forecast (QPF) performance in the Alpine region. We will present results of QPF verification against Swiss radar and rain gauge data, both from a qualitative point of view, in terms of alerts, and from a quantitative perspective, in terms of precipitation rate. Various influencing factors, such as lead time, accumulation time, selection of warning thresholds, and bias corrections, will be discussed. In addition to traditional verification of area-average precipitation amounts, the ability of the models to predict the correct precipitation statistics without requiring a point-to-point match will be described using modern fuzzy verification techniques. Both analyses reveal significant advantages of deep-convection-resolving models compared to coarser models with parameterized convection. An intercomparison of the model forecasts themselves reveals remarkably high variability between different models, making it worthwhile to evaluate the potential of a multi-model ensemble. Various multi-model ensemble strategies will be tested by combining D-PHASE models into virtual ensemble systems.
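
    One widely used fuzzy verification measure is the fractions skill score (FSS), which compares neighbourhood event fractions rather than requiring a point-to-point match. The sketch below computes it on synthetic precipitation fields; the grid, threshold, and window sizes are illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fractions skill score: compare neighbourhood event fractions."""
    f = uniform_filter((forecast >= threshold).astype(float), size=window)
    o = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Hypothetical 24 h precipitation fields (mm) on a common grid; the forecast
# is the observed field displaced by 5 grid points, so it is poor at grid scale
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=(100, 100))
fcst = np.roll(obs, shift=5, axis=1)
for w in (1, 5, 25):
    print(f"window {w:>2}: FSS = {fss(fcst, obs, threshold=20.0, window=w):.2f}")
```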

  3. A Quantitative Analysis of Latino Acculturation and Alcohol Use: Myth Versus Reality.

    PubMed

    Alvarez, Miriam J; Frietze, Gabriel; Ramos, Corin; Field, Craig; Zárate, Michael A

    2017-07-01

    Research on health among Latinos often focuses on acculturation processes and the associated stressors that influence drinking behavior. Given the common use of acculturation measures and the state of the knowledge on alcohol-related health among Latino populations, the current analyses tested the efficacy of acculturation measures to predict various indicators of alcohol consumption. Specifically, this quantitative review assessed the predictive utility of acculturation on alcohol consumption behaviors (frequency, volume, and quantity). Two main analyses were conducted: a p-curve analysis and a meta-analysis of the observed associations between acculturation and drinking behavior. Results demonstrated that current measures of acculturation are a statistically significant predictor of alcohol use (Z = -20.75, p < 0.0001). The meta-analysis included a cumulative sample size of 29,589 Latino participants across 31 studies. A random-effects model yielded a weighted average correlation of 0.16 (95% confidence interval = 0.12, 0.19). Additional subgroup analyses examined the effects of gender and of using different scales to measure acculturation. Altogether, results demonstrated that acculturation is a statistically reliable, though modest, predictor of alcohol use. In addition, the meta-analysis revealed that a small positive correlation exists between acculturation and alcohol use in Latinos, with a between-study variance of only 1.5% (τ² = 0.015). Our analyses reveal that the association between current measures of acculturation and alcohol use is relatively small. Copyright © 2017 by the Research Society on Alcoholism.
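
    A hedged sketch of the random-effects pooling step, here the common DerSimonian-Laird estimator on Fisher-z transformed correlations; the per-study correlations and sample sizes are invented, and the authors' exact estimator may differ.

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes
r = np.array([0.10, 0.22, 0.15, 0.08, 0.19])
n = np.array([850, 420, 1300, 640, 975])

z = np.arctanh(r)          # Fisher z-transform of each correlation
v = 1.0 / (n - 3)          # within-study variance of z
w = 1.0 / v

# DerSimonian-Laird estimate of the between-study variance tau^2
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

# Random-effects weights and pooled estimate, back-transformed to r
w_star = 1.0 / (v + tau2)
z_re = np.sum(w_star * z) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
lo, hi = np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)
print(f"pooled r = {np.tanh(z_re):.3f} (95% CI {lo:.3f}, {hi:.3f}), tau^2 = {tau2:.4f}")
```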

  4. TEAM-HF Cost-Effectiveness Model: A Web-Based Program Designed to Evaluate the Cost-Effectiveness of Disease Management Programs in Heart Failure

    PubMed Central

    Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.

    2015-01-01

    Background Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504
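
    The following toy version illustrates the simulation logic described above: pairs of virtual cohorts are drawn, and the incremental cost-effectiveness ratio and acceptability probabilities are computed from the increments. All distributions and dollar figures are hypothetical, not TEAM-HF outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # pairs of virtual cohorts, as in the TEAM-HF design

# Hypothetical discounted lifetime costs (USD) and QALYs per cohort pair
cost_usual = rng.normal(45_000, 6_000, n)
cost_dm = rng.normal(49_000, 6_500, n)    # disease management program
qaly_usual = rng.normal(4.10, 0.40, n)
qaly_dm = rng.normal(4.35, 0.42, n)

d_cost, d_qaly = cost_dm - cost_usual, qaly_dm - qaly_usual
icer = d_cost.mean() / d_qaly.mean()      # ratio of mean increments
print(f"ICER: ${icer:,.0f} per QALY gained")

# Acceptability: fraction of pairs with positive net monetary benefit
for wtp in (50_000, 100_000):
    p_ce = np.mean(wtp * d_qaly - d_cost > 0)
    print(f"P(cost-effective at ${wtp:,}/QALY) = {p_ce:.2f}")
```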

  5. Assessing Contract Management Maturity: U.S. Army Joint Munitions and Lethality Contracting Center, Army Contracting Command, Picatinny Arsenal

    DTIC Science & Technology

    2009-09-01

    demonstrate competence” (Jones, 2006). Most organizations conduct performance management for their employees with actions such as setting goals...take the form of comprehensive questionnaires, SWOT analyses or diagnostic models and include a comparison of results to various best practices...procnet.pica.army.mil/ Jones, R. (2006). CIDA Organizational Assessment Guide. Retrieved on March 1, 2009 from: http://www.acdi-cida.gc.ca/CIDAWEB

  6. Analyses related to the development of DSM-5 criteria for substance use related disorders: 1. Toward amphetamine, cocaine and prescription drug use disorder continua using Item Response Theory.

    PubMed

    Saha, Tulshi D; Compton, Wilson M; Chou, S Patricia; Smith, Sharon; Ruan, W June; Huang, Boji; Pickering, Roger P; Grant, Bridget F

    2012-04-01

    Prior research has demonstrated the dimensionality of alcohol, nicotine and cannabis use disorder criteria. The purpose of this study was to examine the unidimensionality of DSM-IV cocaine, amphetamine and prescription drug abuse and dependence criteria and to determine the impact of eliminating the legal problems criterion on the information value of the aggregate criteria. Factor analyses and Item Response Theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of the illicit drug use criteria using a large representative sample of the U.S. population. All illicit drug abuse and dependence criteria formed unidimensional latent traits. For amphetamines, cocaine, sedatives, tranquilizers and opioids, IRT models fit better without the legal problems criterion than with it, and there was no difference in the information value of the IRT models with and without the legal problems criterion, supporting the elimination of that criterion. Consistent with findings for alcohol, nicotine and cannabis, amphetamine, cocaine, sedative, tranquilizer and opioid abuse and dependence criteria reflect underlying unitary dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss of informational value and the advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the American Psychiatric Association's DSM-5 Substance and Related Disorders Workgroup. Published by Elsevier Ireland Ltd.
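
    For readers unfamiliar with the IRT machinery, the sketch below evaluates a two-parameter logistic (2PL) item response function and the Fisher information of a small criterion set with and without a weakly discriminating 'legal problems' item. The item parameters are invented for illustration; the study fits its models to survey data.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: P(endorse | severity theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by one item at severity theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)
# Hypothetical discrimination (a) and severity (b) for three criteria, with
# 'legal problems' assumed low-discrimination purely for illustration
items = {"tolerance": (1.8, 0.5), "withdrawal": (2.1, 1.0), "legal problems": (0.7, 2.0)}
total = sum(item_information(theta, a, b) for a, b in items.values())
without_legal = total - item_information(theta, *items["legal problems"])
print(f"max total information: {total.max():.2f} "
      f"vs without legal problems: {without_legal.max():.2f}")
```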

  7. Intra- and Inter-Individual Differences in Adolescent Depressive Mood: the Role of Relationships with Parents and Friends.

    PubMed

    Zhang, Shiyu; Baams, Laura; van de Bongardt, Daphne; Dubas, Judith Semon

    2018-05-01

    Utilizing four waves of data from 1126 secondary school Dutch adolescents (Mage = 13.95 at the first wave; 53% boys), the current study examined the interplay between parent-adolescent and friend-adolescent relationship quality (satisfaction and conflict) in relation to adolescents' depressive mood. Using multilevel analyses, the interacting effects of parent/friend relationship quality on depressive mood were tested at both the intra- and inter-individual level. Analyses at the intra-individual level investigated whether an individual's depressive mood fluctuated along with changes in their social relationships, regardless of their general level of depressive mood; analyses at the inter-individual level examined whether average differences in depressive mood between adolescents were associated with different qualities of social relationships. We interpreted the patterns of interactions between parent and friend relationships using four theoretical models: the reinforcement, toxic friends, compensation, and additive models. The results demonstrate the covariation of parent- and friend-relationship quality with adolescents' depressive mood, and highlight that parent and peer effects are not independent of each other, affirming the compensation and additive models at the intra-individual level and the reinforcement and additive models at the inter-individual level. The findings highlight the robustness of the protective effects of parent and peer support and the deleterious effects of conflictual relationships for adolescent mental health. The results have implications for both the theoretical and practical design of (preventive) interventions aimed at decreasing adolescents' depressive mood.

  8. CRADA opportunities in removal of particulates from hot-gas streams by filtration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D H

    1995-06-01

    Our analyses of samples and operating data from the Pressurized Fluidized Bed Combustion (PFBC), cyclone, and filtration units of the Tidd Clean Coal demonstration facility show that calcined dolomitic sorbent reacted with SO2 (and O2) to form sulfates (CaSO4 and CaMgn[SO4]n+1) not only in the PFBC bed, but also in the filtration vessel. Analyses of limited data from the journal literature suggest that the filter-vessel reactions may have produced sulfate "necks," which bonded the particles together, thus substantially increasing the critical angle of repose and shear tensile strengths of the filtered powders. This proposed mechanism rationalizes the "bridging" and other particle-accumulation problems that caused filter breakage. Engineering services potentially available to resolve these problems include elucidation and modeling of ex-situ and in-situ filter-vessel chemistry, measurement and modeling of particulate materials properties, and measurement and modeling of cleaning back-pulse aerodynamics and cleaning efficiencies.

  9. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
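
    A minimal sketch of the contrast the authors draw, using statsmodels: an ordinary least squares fit that ignores clustering by animal versus a mixed model with a random intercept per animal. The simulated data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Sholl data: several neurons sampled per animal, so the
# observations are clustered and intra-class correlated
rng = np.random.default_rng(3)
rows = []
for animal in range(8):
    animal_effect = rng.normal(0, 2)      # source of intra-class correlation
    for neuron in range(5):
        for dist in range(10, 110, 10):
            y = 20 - 0.12 * dist + animal_effect + rng.normal(0, 1.5)
            rows.append({"animal": animal, "intersections": y, "distance": dist})
df = pd.DataFrame(rows)

# Simple linear model ignores clustering and understates the standard error
ols = smf.ols("intersections ~ distance", df).fit()
# Mixed model with a random intercept per animal accounts for it
mixed = smf.mixedlm("intersections ~ distance", df, groups=df["animal"]).fit()
print(f"OLS SE: {ols.bse['distance']:.4f}  mixed-model SE: {mixed.bse['distance']:.4f}")
```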

  10. Combating substance abuse with ibogaine: pre- and posttreatment recommendations and an example of successive model fitting analyses.

    PubMed

    Hittner, James B; Quello, Susan B

    2004-06-01

    Ibogaine is an indole alkaloid derived from the root bark of the African shrub Tabernanthe iboga, and it has been used for many years as a medicinal and ceremonial agent in West Central Africa. Furthermore, both anecdotal observations and recent studies suggest that ibogaine alleviates withdrawal symptoms and reduces drug cravings. Although ibogaine articles typically include information bearing on the duration of drug abstinence following treatment, little if any attention is given to the psychological and environmental factors that might facilitate a positive treatment outcome. Hence, a major purpose of the present review is to suggest a number of theory-driven, pretreatment and posttreatment recommendations that have good potential for enhancing ibogaine's effectiveness. The second major purpose of this review is to demonstrate, through a reanalysis of previously published results, the utility of conducting successive model fitting analyses on ibogaine treatment data. Such analyses are useful for determining both the strength and form of the association between pre-ibogaine treatment variables and post-ibogaine treatment outcomes. Finally, in order to facilitate future quantitative reviews, the authors recommend that a minimum set of patient- and treatment-related variables be included in all ibogaine publications involving human participants.
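
    Successive model fitting of this kind can be illustrated by comparing polynomial models of increasing order on an information criterion; the sketch below uses a Gaussian AIC on invented pretreatment/outcome data and is not a reanalysis of the published results.

```python
import numpy as np

# Hypothetical pretreatment scores (x) and months abstinent post-treatment (y)
x = np.array([2, 3, 5, 6, 7, 8, 10, 11, 13, 14], dtype=float)
y = np.array([1, 2, 2, 4, 6, 9, 10, 10, 9, 8], dtype=float)

def aic(y, yhat, k):
    """Gaussian AIC from the residual sum of squares with k fitted parameters."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Successively fit linear, quadratic, and cubic models; the lowest AIC
# indicates the best trade-off between fit and parsimony (strength and form)
for degree in (1, 2, 3):
    coef = np.polyfit(x, y, degree)
    print(f"degree {degree}: AIC = {aic(y, np.polyval(coef, x), degree + 1):.1f}")
```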

  11. Living on the edge of chaos: minimally nonlinear models of genetic regulatory dynamics.

    PubMed

    Hanel, Rudolf; Pöchacker, Manfred; Thurner, Stefan

    2010-12-28

    Linearized catalytic reaction equations (modelling, for example, the dynamics of genetic regulatory networks), under the constraint that expression levels, i.e. molecular concentrations of nucleic material, are positive, exhibit non-trivial dynamical properties, which depend on the average connectivity of the reaction network. In these systems, an inflation of the edge of chaos and multi-stability have been demonstrated to exist. The positivity constraint introduces a nonlinearity, which makes chaotic dynamics possible. Despite the simplicity of such minimally nonlinear systems, their basic properties allow us to understand the fundamental dynamical properties of complex biological reaction networks. We analyse the Lyapunov spectrum, determine the probability of finding stationary oscillating solutions, demonstrate the effect of the nonlinearity on the effective in- and out-degree of the active interaction network, and study how the frequency distributions of oscillatory modes of such a system depend on the average connectivity.
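
    A schematic of the class of systems described: linear catalytic dynamics on a sparse random interaction network, with expression levels clipped at zero as the positivity constraint. Network size, connectivity, and the normalisation step are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 100, 6                        # genes, average connectivity
W = np.zeros((n, n))
mask = rng.random((n, n)) < k / n    # sparse random interaction network
W[mask] = rng.normal(0, 1, mask.sum())

x = rng.random(n)
dt, steps = 0.01, 20_000
for _ in range(steps):
    # Linear catalytic dynamics; clipping at zero is the positivity
    # constraint that introduces the effective nonlinearity
    x = np.maximum(x + dt * (W @ x), 0.0)
    x /= max(x.sum(), 1e-12)         # normalise total concentration

active = np.count_nonzero(x > 1e-9)
print(f"{active} of {n} expression levels remain active")
```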

  12. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
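
    The flavour of a keystroke-level prediction can be conveyed in a few lines: operator times are summed over a method sequence, with separate parameter sets per age group. The operator values below are placeholders for illustration; the paper derives its own weighted older-adult parameters.

```python
# Hypothetical operator times (seconds). The younger-adult values follow
# Card, Moran & Newell-style estimates; the older-adult values are scaled
# purely for illustration.
OPERATORS = {
    "K": {"young": 0.28, "old": 0.48},  # keystroke
    "M": {"young": 1.35, "old": 2.10},  # mental preparation
    "P": {"young": 1.10, "old": 1.45},  # point/home to key
}

def klm_time(sequence, group):
    """Keystroke-level estimate: sum operator times for a method sequence."""
    return sum(OPERATORS[op][group] for op in sequence)

# Dialling a 10-digit number: mentally prepare, then ten keystrokes
dial = "M" + "K" * 10
for group in ("young", "old"):
    print(f"{group}: predicted time = {klm_time(dial, group):.1f} s")
```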

  13. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
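
    A minimal sketch of the multiresolution training idea using scikit-learn: resolution enters the feature vector, many cheap low-resolution members are combined with a few high-resolution ones, and the forest is then queried at high resolution. The stand-in simulator and parameter ranges are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)

def run_climate_model(params, high_res):
    """Stand-in for a perturbed-parameter simulation (hypothetical)."""
    base = 240 + 3.0 * params[:, 0] - 1.5 * params[:, 1]
    res_effect = 0.8 if high_res else 0.0
    return base + res_effect + rng.normal(0, 0.3, len(params))

# Many cheap low-resolution members, few expensive high-resolution ones;
# the final feature flags the resolution of each member
p_lo, p_hi = rng.random((400, 4)), rng.random((25, 4))
X = np.vstack([np.column_stack([p_lo, np.zeros(len(p_lo))]),
               np.column_stack([p_hi, np.ones(len(p_hi))])])
y = np.concatenate([run_climate_model(p_lo, False), run_climate_model(p_hi, True)])

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
p_new = rng.random((5, 4))
print(rf.predict(np.column_stack([p_new, np.ones(5)])))  # high-res predictions
```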

  14. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  15. Empirically Derived Personality Subtyping for Predicting Clinical Symptoms and Treatment Response in Bulimia Nervosa

    PubMed Central

    Haynos, Ann F.; Pearson, Carolyn M.; Utzinger, Linsey M.; Wonderlich, Stephen A.; Crosby, Ross D.; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.

    2016-01-01

    Objective Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Methods Using variables from the Dimensional Assessment of Personality Pathology–Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). Results There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = .03) and purging (p = .01) frequency at EOT and binge eating frequency at follow-up (p = .045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = .04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Discussion Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions. PMID:27611235
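
    The subtyping step can be sketched with scikit-learn's K-means on standardized trait scores; the cluster count and the synthetic three-group structure below are assumptions for illustration, not the trial data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical personality-pathology scores for 80 patients on three traits
rng = np.random.default_rng(5)
traits = np.vstack([
    rng.normal([1.2, -0.5, 0.8], 0.5, (25, 3)),   # under-controlled-like
    rng.normal([-0.8, 1.4, -0.2], 0.5, (25, 3)),  # over-controlled-like
    rng.normal([0.0, 0.0, 0.0], 0.5, (30, 3)),    # low-psychopathology-like
])
X = StandardScaler().fit_transform(traits)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for c in range(3):
    size = np.sum(km.labels_ == c)
    print(f"cluster {c}: n = {size}, centroid = {km.cluster_centers_[c].round(2)}")
```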

  16. Modelling and analysis of the sugar cataract development process using stochastic hybrid systems.

    PubMed

    Riley, D; Koutsoukos, X; Riley, K

    2009-05-01

    Modelling and analysis of biochemical systems such as sugar cataract development (SCD) are critical because they can provide new insights into systems, which cannot be easily tested with experiments; however, they are challenging problems due to the highly coupled chemical reactions that are involved. The authors present a stochastic hybrid system (SHS) framework for modelling biochemical systems and demonstrate the approach for the SCD process. A novel feature of the framework is that it allows modelling the effect of drug treatment on the system dynamics. The authors validate the three sugar cataract models by comparing trajectories computed by two simulation algorithms. Further, the authors present a probabilistic verification method for computing the probability of sugar cataract formation for different chemical concentrations using safety and reachability analysis methods for SHSs. The verification method employs dynamic programming based on a discretisation of the state space and therefore suffers from the curse of dimensionality. To analyse the SCD process, a parallel dynamic programming implementation that can handle large, realistic systems was developed. Although scalability is a limiting factor, this work demonstrates that the proposed method is feasible for realistic biochemical systems.

  17. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
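
    A toy version of quantile-based rescaling: a transfer function built from matching quantiles of the biased all-forcings run and the observations is applied to both runs, and the adjusted risk ratio is computed at the observed event magnitude. All distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(13)
obs = rng.normal(30.0, 2.0, 5_000)          # observed summer temperatures (deg C)
model_hist = rng.normal(28.5, 2.6, 5_000)   # biased model run, all forcings
model_nat = rng.normal(27.5, 2.6, 5_000)    # counterfactual natural-only run

q = np.linspace(0.01, 0.99, 99)

def quantile_map(x, biased_ref, target_ref):
    """Map values through matching quantiles of the reference pair."""
    return np.interp(x, np.quantile(biased_ref, q), np.quantile(target_ref, q))

# The same transfer function (all-forcings run vs. observations) is applied
# to both runs before estimating exceedance probabilities
corr_hist = quantile_map(model_hist, model_hist, obs)
corr_nat = quantile_map(model_nat, model_hist, obs)

threshold = np.quantile(obs, 0.99)          # magnitude of the observed event
p1, p0 = np.mean(corr_hist > threshold), np.mean(corr_nat > threshold)
print(f"adjusted risk ratio RR = {p1 / p0:.1f}")
```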

  18. External validation of the diffuse intrinsic pontine glioma survival prediction model: a collaborative report from the International DIPG Registry and the SIOPE DIPG Registry.

    PubMed

    Veldhuijzen van Zanten, Sophie E M; Lane, Adam; Heymans, Martijn W; Baugh, Joshua; Chaney, Brooklyn; Hoffman, Lindsey M; Doughman, Renee; Jansen, Marc H A; Sanchez, Esther; Vandertop, William P; Kaspers, Gertjan J L; van Vuurden, Dannis G; Fouladi, Maryam; Jones, Blaise V; Leach, James

    2017-08-01

    We aimed to perform external validation of the recently developed survival prediction model for diffuse intrinsic pontine glioma (DIPG), and discuss its utility. The DIPG survival prediction model was developed in a cohort of patients from the Netherlands, United Kingdom and Germany, registered in the SIOPE DIPG Registry, and includes age <3 years, longer symptom duration and receipt of chemotherapy as favorable predictors, and the presence of ring enhancement on MRI as an unfavorable predictor. Model performance was evaluated by analyzing the discrimination and calibration abilities. External validation was performed using an unselected cohort from the International DIPG Registry, including patients from United States, Canada, Australia and New Zealand. Basic comparison with the results of the original study was performed using descriptive statistics, and univariate and multivariable regression analyses in the validation cohort. External validation was assessed following a variety of analyses described previously. Baseline patient characteristics and results from the regression analyses were largely comparable. Kaplan-Meier curves of the validation cohort reproduced separated groups of standard (n = 39), intermediate (n = 125), and high-risk (n = 78) patients. This discriminative ability was confirmed by similar values for the hazard ratios across these risk groups. The calibration curve in the validation cohort showed a symmetric underestimation of the predicted survival probabilities. In this external validation study, we demonstrate that the DIPG survival prediction model has acceptable cross-cohort calibration and is able to discriminate patients with short, average, and increased survival. We discuss how this clinico-radiological model may serve a useful role in current clinical practice.

  19. Constitutive modeling for isotropic materials (HOST)

    NASA Technical Reports Server (NTRS)

    Chan, Kwai S.; Lindholm, Ulric S.; Bodner, S. R.; Hill, Jeff T.; Weber, R. M.; Meyer, T. G.

    1986-01-01

    The results of the third year of work on a program which is part of the NASA Hot Section Technology program (HOST) are presented. The goals of this program are: (1) the development of unified constitutive models for rate dependent isotropic materials; and (2) the demonstration of the use of unified models in structural analyses of hot section components of gas turbine engines. The unified models selected for development and evaluation are those of Bodner-Partom and of Walker. A test procedure was developed for assisting the generation of a data base for the Bodner-Partom model using a relatively small number of specimens. This test procedure involved performing a tensile test at a temperature of interest, involving a succession of strain-rate changes. The results for B1900+Hf indicate that material constants related to hardening and thermal recovery can be obtained on the basis of such a procedure. Strain aging, thermal recovery, and unexpected material variations, however, precluded an accurate determination of the strain-rate sensitivity parameter in this exercise. The effects of casting grain size on the constitutive behavior of B1900+Hf were studied and no particular grain size effect was observed. A systematic procedure was also developed for determining the material constants in the Bodner-Partom model. Both the new test procedure and the method for determining material constants were applied to the alternate material, Mar-M247. Test data including tensile, creep, cyclic and nonproportional biaxial (tension/torsion) loading were collected. Good correlations were obtained between the Bodner-Partom model and experiments. A literature survey was conducted to assess the effects of thermal history on the constitutive behavior of metals. Thermal history effects are expected to be present at temperature regimes where strain aging and change of microstructure are important. Possible modifications to the Bodner-Partom model to account for these effects are outlined. The use of a unified constitutive model for hot section component analyses was demonstrated by applying the Walker model and the MARC finite-element code to a B1900+Hf airfoil problem.
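
    To give a feel for how such a unified model is exercised numerically, the sketch below integrates a commonly published uniaxial Bodner-Partom-type flow rule with a single-variable hardening law under a constant total strain rate. The functional form is schematic and every constant is illustrative, not a fitted B1900+Hf or Mar-M247 value.

```python
import numpy as np

# Schematic uniaxial integration of a Bodner-Partom-type viscoplastic law
# under a constant total strain rate (explicit Euler, small steps)
E = 180e3                          # Young's modulus, MPa (illustrative)
D0, n = 1e4, 1.0                   # limiting rate (1/s), rate-sensitivity exponent
Z0, Z1, m = 3000.0, 4500.0, 0.1    # initial/saturated hardness (MPa), hardening rate
eps_rate, dt, steps = 1e-4, 1e-3, 100_000

eps = eps_p = 0.0
Z, sigma = Z0, 0.0
for _ in range(steps):
    eps += eps_rate * dt
    sigma = E * (eps - eps_p)                     # elastic stress
    if sigma != 0.0:
        arg = 0.5 * ((n + 1) / n) * (Z / abs(sigma)) ** (2 * n)
        dep = D0 * np.exp(-arg) * np.sign(sigma)  # viscoplastic flow rule
        eps_p += dep * dt
        Z += m * (Z1 - Z) * sigma * dep * dt      # hardening driven by plastic work
print(f"stress at {eps:.3f} total strain: {sigma:.0f} MPa")
```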

  20. Detection of ingested nitromethane and reliable creatinine assessment using multiple common analytical methods.

    PubMed

    Murphy, Christine M; Devlin, John J; Beuhler, Michael C; Cheifetz, Paul; Maynard, Susan; Schwartz, Michael D; Kacinko, Sherri

    2018-04-01

    Nitromethane, found in fuels used for short distance racing, model cars, and model airplanes, produces a falsely elevated serum creatinine with standard creatinine analysis via the Jaffé method. Erroneous creatinine elevation often triggers extensive testing, inaccurate diagnoses, and delayed or inappropriate medical interventions. Multiple reports in the literature identify "enzymatic assays" as an alternative method to detect the true value of creatinine, but this ambiguity does not help providers translate what type of enzymatic assay testing can be done in real time to determine if there is indeed false elevation. We report seven cases of ingested nitromethane where creatinine was determined via a Beckman Coulter® analyser using the Jaffé method, a Vitros® analyser, or i-STAT® point-of-care testing. Nitromethane was detected and semi-quantified using a common clinical toxic alcohol analysis method, and quantified by headspace-gas chromatography-mass spectrometry. When creatinine was determined using i-STAT® point-of-care testing or a Vitros® analyser, levels were within the normal range. Comparatively, all initial creatinine levels obtained via the Jaffé method were elevated. Nitromethane concentrations ranged from 42 to 310 μg/mL. These cases demonstrate reliable assessment of creatinine through other enzymatic methods using a Vitros® analyser or i-STAT®. Additionally, nitromethane is detectable and quantifiable using routine alcohols gas chromatography analysis and by headspace-gas chromatography-mass spectrometry.

  1. Thermal-Structural Analysis of PICA Tiles for Solar Tower Test

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Empey, Daniel M.; Squire, Thomas H.

    2009-01-01

    Thermal protection materials used in spacecraft heatshields are subjected to severe thermal and mechanical loading environments during re-entry into the earth's atmosphere. In order to investigate the reliability of PICA tiles in the presence of high thermal gradients as well as mechanical loads, the authors designed and conducted solar-tower tests. This paper presents the design and analysis work for this test series. Coupled non-linear thermal-mechanical finite element analyses were conducted to estimate in-depth temperature distributions and stress contours for various cases. The first set of analyses, performed on an isolated PICA tile, showed that stresses generated during the tests were below the PICA allowable limit and should not lead to any catastrophic failure during the test. The test results were consistent with analytical predictions. The temperature distribution and magnitude of the measured strains were also consistent with predicted values. The second test series is designed to test arrayed PICA tiles with various gap-filler materials. A nonlinear contact method is used to model the complex geometry with various tiles. The analyses for these coupons predict the stress contours in PICA and inside the gap fillers. Suitable mechanical loads for this architecture will be predicted, which can be applied during the test to exceed the allowable limits and demonstrate failure modes. Thermocouple and strain-gauge data obtained from the solar tower tests will be used for subsequent analyses and validation of FEM models.

  2. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
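
    The probabilistic multivariate idea can be sketched compactly: solve a small MDP by value iteration for a base case, then re-solve under joint parameter draws and report how often the base-case policy remains optimal. The MDP, Dirichlet concentration, and reward noise below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(17)
n_states, n_actions, gamma = 3, 2, 0.97

def solve_mdp(P, R, iters=400):
    """Value iteration; returns the optimal policy for (P, R)."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * P @ V        # Q[s, a]; P has shape (s, a, s')
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Base-case transition probabilities and rewards (hypothetical)
P0 = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R0 = rng.normal(0.0, 1.0, (n_states, n_actions))
base_policy = solve_mdp(P0, R0)

def perturb(P0, conc=100.0):
    """Joint parameter uncertainty: Dirichlet draws centred on each row of P0."""
    P = np.empty_like(P0)
    for s in range(n_states):
        for a in range(n_actions):
            P[s, a] = rng.dirichlet(conc * P0[s, a] + 1e-6)
    return P

draws = 1000
agree = sum(np.array_equal(solve_mdp(perturb(P0), R0 + rng.normal(0, 0.1, R0.shape)),
                           base_policy) for _ in range(draws))
print(f"confidence in the base-case optimal policy: {agree / draws:.2f}")
```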

  3. Ribosome flow model with positive feedback

    PubMed Central

    Margaliot, Michael; Tuller, Tamir

    2013-01-01

    Eukaryotic mRNAs usually form a circular structure; thus, ribosomes that terminate translation at the 3′ end can diffuse with increased probability to the 5′ end of the transcript, initiating another cycle of translation. This phenomenon describes ribosomal flow with positive feedback—an increase in the flow of ribosomes terminating translating the open reading frame increases the ribosomal initiation rate. The aim of this paper is to model and rigorously analyse translation with feedback. We suggest a modified version of the ribosome flow model, called the ribosome flow model with input and output. In this model, the input is the initiation rate and the output is the translation rate. We analyse this model after closing the loop with a positive linear feedback. We show that the closed-loop system admits a unique globally asymptotically stable equilibrium point. From a biophysical point of view, this means that there exists a unique steady state of ribosome distributions along the mRNA, and thus a unique steady-state translation rate. The solution from any initial distribution will converge to this steady state. The steady-state distribution demonstrates a decrease in ribosome density along the coding sequence. For the case of constant elongation rates, we obtain expressions relating the model parameters to the equilibrium point. These results may perhaps be used to re-engineer the biological system in order to obtain a desired translation rate. PMID:23720534
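
    A sketch of the closed-loop model family discussed above. The site count and rates are arbitrary, and a small basal initiation term is added so the illustrated equilibrium is non-trivial, which is a simplification relative to the paper's purely linear feedback.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([1.0, 0.8, 1.2, 0.9, 1.1])   # transition/exit rates per site
basal, k_fb = 0.3, 0.5                       # basal initiation and feedback gain

def rfm(t, x):
    """Ribosome flow model; x[i] is the occupancy of site i."""
    u = basal + k_fb * lam[-1] * x[-1]       # initiation rate fed by the output
    dx = np.empty_like(x)
    dx[0] = u * (1 - x[0]) - lam[0] * x[0] * (1 - x[1])
    for i in range(1, len(x) - 1):
        dx[i] = lam[i - 1] * x[i - 1] * (1 - x[i]) - lam[i] * x[i] * (1 - x[i + 1])
    dx[-1] = lam[-2] * x[-2] * (1 - x[-1]) - lam[-1] * x[-1]
    return dx

sol = solve_ivp(rfm, (0.0, 200.0), [0.5, 0.4, 0.3, 0.2, 0.1], rtol=1e-8)
x_ss = sol.y[:, -1]
print("steady-state density profile along the mRNA:", x_ss.round(3))
print(f"steady-state translation rate: {lam[-1] * x_ss[-1]:.3f}")
```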

  4. Feasibility of shutter-speed DCE-MRI for improved prostate cancer detection.

    PubMed

    Li, Xin; Priest, Ryan A; Woodward, William J; Tagge, Ian J; Siddiqui, Faisal; Huang, Wei; Rooney, William D; Beer, Tomasz M; Garzotto, Mark G; Springer, Charles S

    2013-01-01

    The feasibility of shutter-speed model dynamic-contrast-enhanced MRI pharmacokinetic analyses for prostate cancer detection was investigated in a prebiopsy patient cohort. Differences of results from the fast-exchange-regime-allowed (FXR-a) shutter-speed model version and the fast-exchange-limit-constrained (FXL-c) standard model are demonstrated. Although the spatial information is more limited, postdynamic-contrast-enhanced MRI biopsy specimens were also examined. The MRI results were correlated with the biopsy pathology findings. Of all the model parameters, region-of-interest-averaged K(trans) difference [ΔK(trans) ≡ K(trans)(FXR-a) - K(trans)(FXL-c)] or two-dimensional K(trans)(FXR-a) vs. k(ep)(FXR-a) values were found to provide the most useful biomarkers for malignant/benign prostate tissue discrimination (at 100% sensitivity for a population of 13, the specificity is 88%) and disease burden determination. (The best specificity for the fast-exchange-limit-constrained analysis is 63%, with the two-dimensional plot.) K(trans) and k(ep) are each measures of passive transcapillary contrast reagent transfer rate constants. Parameter value increases with shutter-speed model (relative to standard model) analysis are larger in malignant foci than in normal-appearing glandular tissue. Pathology analyses verify the shutter-speed model (FXR-a) promise for prostate cancer detection. Parametric mapping may further improve pharmacokinetic biomarker performance. Copyright © 2012 Wiley Periodicals, Inc.

  5. ICU early physical rehabilitation programs: financial modeling of cost savings.

    PubMed

    Lord, Robert K; Mayhew, Christopher R; Korupolu, Radha; Mantheiy, Earl C; Friedman, Michael A; Palmer, Jeffrey B; Needham, Dale M

    2013-03-01

    To evaluate the potential annual net cost savings of implementing an ICU early rehabilitation program. Using data from existing publications and actual experience with an early rehabilitation program in the Johns Hopkins Hospital Medical ICU, we developed a model of net financial savings/costs and presented results for ICUs with 200, 600, 900, and 2,000 annual admissions, accounting for both conservative- and best-case scenarios. Our example scenario provided a projected financial analysis of the Johns Hopkins Medical ICU early rehabilitation program, with 900 admissions per year, using actual reductions in length of stay achieved by this program. U.S.-based adult ICUs. Financial modeling of the introduction of an ICU early rehabilitation program. Net cost savings generated in our example scenario, with 900 annual admissions and actual length of stay reductions of 22% and 19% for the ICU and floor, respectively, were $817,836. Sensitivity analyses, which used conservative- and best-case scenarios for length of stay reductions and varied the per-day ICU and floor costs, across ICUs with 200-2,000 annual admissions, yielded financial projections ranging from -$87,611 (net cost) to $3,763,149 (net savings). Of the 24 scenarios included in these sensitivity analyses, 20 (83%) demonstrated net savings, with a relatively small net cost occurring in the remaining four scenarios, mostly when simultaneously combining the most conservative assumptions. A financial model, based on actual experience and published data, projects that investment in an ICU early rehabilitation program can generate net financial savings for U.S. hospitals. Even under the most conservative assumptions, the projected net cost of implementing such a program is modest relative to the substantial improvements in patient outcomes demonstrated by ICU early rehabilitation programs.
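
    The core arithmetic of such a financial model is simple enough to show directly; all figures below are placeholders rather than the Johns Hopkins inputs.

```python
# Back-of-envelope net-savings model for an ICU early rehabilitation program
admissions = 900
icu_los, floor_los = 4.0, 6.0            # baseline lengths of stay (days), assumed
icu_cut, floor_cut = 0.22, 0.19          # length-of-stay reductions (program data)
icu_cost, floor_cost = 4000.0, 1200.0    # variable cost per day (USD), assumed
program_cost = 360_000.0                 # annual staffing/equipment (USD), assumed

savings = admissions * (icu_los * icu_cut * icu_cost +
                        floor_los * floor_cut * floor_cost)
net = savings - program_cost
print(f"gross savings ${savings:,.0f}, net ${net:,.0f}")
```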

  6. Mesenchymal stem cells improve locomotor recovery in traumatic spinal cord injury: systematic review with meta-analyses of rat models.

    PubMed

    Oliveri, Roberto S; Bello, Segun; Biering-Sørensen, Fin

    2014-02-01

    Traumatic spinal cord injury (SCI) is a devastating event with huge personal and societal costs. A limited number of treatments exist to ameliorate the progressive secondary damage that rapidly follows the primary mechanical impact. Mesenchymal stem or stromal cells (MSCs) have anti-inflammatory and neuroprotective effects and may thus reduce secondary damage after administration. We performed a systematic review with quantitative syntheses to assess the evidence of MSCs versus controls for locomotor recovery in rat models of traumatic SCI, and identified 83 eligible controlled studies comprising a total of 1,568 rats. Between-study heterogeneity was large. Fifty-three studies (64%) were reported as randomised, but only four reported adequate methodologies for randomisation. Forty-eight studies (58%) reported the use of a blinded outcome assessment. A random-effects meta-analysis yielded a difference in behavioural Basso-Beattie-Bresnahan (BBB) locomotor score means of 3.9 (95% confidence interval [CI] 3.2 to 4.7; P<0.001) in favour of MSCs. Trial sequential analysis confirmed the findings of the meta-analyses with the upper monitoring boundary for benefit being crossed by the cumulative Z-curve before reaching the diversity-adjusted required information size. Only time from intervention to last follow-up remained statistically significant after adjustment using multivariate random-effects meta-regression modelling. Lack of other demonstrable explanatory variables could be due to insufficient meta-analytic study power. MSCs would seem to demonstrate a substantial beneficial effect on locomotor recovery in a widely-used animal model of traumatic SCI. However, the animal results should be interpreted with caution concerning the internal and external validity of the studies in relation to the design of future clinical trials. © 2013.

  7. Dimension- and shape-dependent thermal transport in nano-patterned thin films investigated by scanning thermal microscopy

    NASA Astrophysics Data System (ADS)

    Ge, Yunfei; Zhang, Yuan; Weaver, Jonathan M. R.; Dobson, Phillip S.

    2017-12-01

    Scanning thermal microscopy (SThM) is a technique which is often used for the measurement of the thermal conductivity of materials at the nanometre scale. The impact of nano-scale feature size and shape on apparent thermal conductivity, as measured using SThM, has been investigated. To achieve this, our recently developed topography-free samples with 200 and 400 nm wide gold wires (50 nm thick) with lengths of 400-2500 nm were fabricated and their thermal resistance measured and analysed. These data were used in the development and validation of a rigorous but simple heat transfer model that describes a nanoscopic contact to an object with finite shape and size. This model, in combination with a recently proposed thermal resistance network, was then used to calculate the SThM probe signal obtained by measuring these features. These calculated values closely matched the experimental results obtained from the topography-free sample. By using the model to analyse the dimensional dependence of thermal resistance, we demonstrate that feature size and shape have a significant impact on measured thermal properties that can result in a misinterpretation of material thermal conductivity. In the case of a gold nanowire embedded within a silicon nitride matrix it is found that the apparent thermal conductivity of the wire appears to be depressed by a factor of twenty from the true value. These results clearly demonstrate the importance of knowing both probe-sample thermal interactions and feature dimensions as well as shape when using SThM to quantify material thermal properties. Finally, the new model is used to identify the heat flux sensitivity, as well as the effective contact size of the conventional SThM system used in this study.

  8. Spatial extent of a hydrothermal system at Kilauea Volcano, Hawaii, determined from array analyses of shallow long-period seismicity 1. Method

    USGS Publications Warehouse

    Almendros, J.; Chouet, B.; Dawson, P.

    2001-01-01

    We present a probabilistic method to locate the source of seismic events using seismic antennas. The method is based on a comparison of the event azimuths and slownesses derived from frequency-slowness analyses of array data, with a slowness vector model. Several slowness vector models are considered including both homogeneous and horizontally layered half-spaces and also a more complex medium representing the actual topography and three-dimensional velocity structure of the region under study. In this latter model the slowness vector is obtained from frequency-slowness analyses of synthetic signals. These signals are generated using the finite difference method and include the effects of topography and velocity structure to reproduce as closely as possible the behavior of the observed wave fields. A comparison of these results with those obtained with a homogeneous half-space demonstrates the importance of structural and topographic effects, which, if ignored, lead to a bias in the source location. We use synthetic seismograms to test the accuracy and stability of the method and to investigate the effect of our choice of probability distributions. We conclude that this location method can provide the source position of shallow events within a complex volcanic structure such as Kilauea Volcano with an error of ±200 m. Copyright 2001 by the American Geophysical Union.

  9. Examination of an eHealth literacy scale and a health literacy scale in a population with moderate to high cardiovascular risk: Rasch analyses.

    PubMed

    Richtering, Sarah S; Morris, Rebecca; Soh, Sze-Ee; Barker, Anna; Bampi, Fiona; Neubeck, Lis; Coorey, Genevieve; Mulley, John; Chalmers, John; Usherwood, Tim; Peiris, David; Chow, Clara K; Redfern, Julie

    2017-01-01

    Electronic health (eHealth) strategies are evolving making it important to have valid scales to assess eHealth and health literacy. Item response theory methods, such as the Rasch measurement model, are increasingly used for the psychometric evaluation of scales. This paper aims to examine the internal construct validity of an eHealth and health literacy scale using Rasch analysis in a population with moderate to high cardiovascular disease risk. The first 397 participants of the CONNECT study completed the electronic health Literacy Scale (eHEALS) and the Health Literacy Questionnaire (HLQ). Overall Rasch model fit as well as five key psychometric properties were analysed: unidimensionality, response thresholds, targeting, differential item functioning and internal consistency. The eHEALS had good overall model fit (χ2 = 54.8, p = 0.06), ordered response thresholds, reasonable targeting and good internal consistency (person separation index (PSI) 0.90). It did, however, appear to measure two constructs of eHealth literacy. The HLQ subscales (except subscale 5) did not fit the Rasch model (χ2: 18.18-60.60, p: 0.00-0.58) and had suboptimal targeting for most subscales. Subscales 6 to 9 displayed disordered thresholds indicating participants had difficulty distinguishing between response options. All subscales did, nonetheless, demonstrate moderate to good internal consistency (PSI: 0.62-0.82). Rasch analyses demonstrated that the eHEALS has good measures of internal construct validity although it appears to capture different aspects of eHealth literacy (e.g. using eHealth and understanding eHealth). Whilst further studies are required to confirm this finding, it may be necessary for these constructs of the eHEALS to be scored separately. The nine HLQ subscales were shown to measure a single construct of health literacy. However, participants' scores may not represent their actual level of ability, as distinction between response categories was unclear for the last four subscales. Reducing the response categories of these subscales may improve the ability of the HLQ to distinguish between different levels of health literacy.

  10. Incremental value of the CT coronary calcium score for the prediction of coronary artery disease

    PubMed Central

    Genders, Tessa S. S.; Pugliese, Francesca; Mollet, Nico R.; Meijboom, W. Bob; Weustink, Annick C.; van Mieghem, Carlos A. G.; de Feyter, Pim J.

    2010-01-01

    Objectives: To validate published prediction models for the presence of obstructive coronary artery disease (CAD) in patients with new onset stable typical or atypical angina pectoris and to assess the incremental value of the CT coronary calcium score (CTCS). Methods: We searched the literature for clinical prediction rules for the diagnosis of obstructive CAD, defined as ≥50% stenosis in at least one vessel on conventional coronary angiography. Significant variables were re-analysed in our dataset of 254 patients with logistic regression. CTCS was subsequently included in the models. The area under the receiver operating characteristic curve (AUC) was calculated to assess diagnostic performance. Results: Re-analysing the variables used by Diamond & Forrester yielded an AUC of 0.798, which increased to 0.890 by adding CTCS. For Pryor, Morise 1994, Morise 1997 and Shaw the AUC increased from 0.838 to 0.901, 0.831 to 0.899, 0.840 to 0.898 and 0.833 to 0.899. CTCS significantly improved model performance in each model. Conclusions: Validation demonstrated good diagnostic performance across all models. CTCS improves the prediction of the presence of obstructive CAD, independent of clinical predictors, and should be considered in its diagnostic work-up. PMID:20559838
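
    The incremental-value comparison can be sketched as two logistic models scored by AUC, with and without the calcium score term; the simulated predictors and coefficients are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical data: clinical predictors plus a (log-transformed) calcium score
rng = np.random.default_rng(21)
n = 254
age = rng.normal(60, 10, n)
typical = rng.binomial(1, 0.5, n).astype(float)   # typical vs. atypical angina
log_cacs = rng.normal(3, 2, n)                    # log(calcium score + 1)
risk = -14 + 0.15 * age + 0.8 * typical + 0.6 * log_cacs
y = rng.binomial(1, 1 / (1 + np.exp(-risk)))      # obstructive CAD indicator

X_clin = np.column_stack([age, typical])
X_full = np.column_stack([age, typical, log_cacs])
for name, X in (("clinical only   ", X_clin), ("+ calcium score ", X_full)):
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, p):.3f}")
```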

  11. Reciprocal Markov modeling of feedback mechanisms between emotion and dietary choice using experience sampling data

    PubMed Central

    Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H.

    2015-01-01

    With intensively collected longitudinal data, recent advances in the Experience Sampling Method (ESM) benefit social science empirical research, but also pose important methodological challenges. As traditional statistical models are not generally well-equipped to analyze a system of variables that contain feedback loops, this paper proposes the utility of an extended hidden Markov model to model the reciprocal relationship between momentary emotion and eating behavior. This paper revisited an ESM data set (Lu, Huet & Dube, 2011) that observed 160 participants' food consumption and momentary emotions six times per day over 10 days. Focusing on the feedback loop between mood and meal healthiness decisions, the proposed Reciprocal Markov Model (RMM) can accommodate both hidden ("general" emotional states: positive vs. negative state) and observed states (meal: healthier, same or less healthy than usual) without presuming independence between observations and smooth trajectories of mood or behavior changes. The results of RMM analyses illustrated the reciprocal chains of meal consumption and mood, as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters. PMID:26717120
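
    A stripped-down simulation of a reciprocal two-chain structure (ignoring the hidden-state layer of the full RMM): mood transitions depend on the last meal and meal choices depend on mood. The transition probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(23)

def step_mood(mood, meal):
    """P(next mood = positive) depends on current mood and the last meal."""
    p = 0.7 if mood == 1 else 0.4
    p += 0.15 if meal == 1 else -0.10     # healthier meals lift mood
    return rng.binomial(1, np.clip(p, 0, 1))

def step_meal(mood):
    """P(healthier-than-usual meal) depends on the current mood."""
    return rng.binomial(1, 0.6 if mood == 1 else 0.35)

mood, meals_by_mood = 1, {0: [], 1: []}
for _ in range(10_000):                   # e.g. six observations/day over many days
    meal = step_meal(mood)
    meals_by_mood[mood].append(meal)
    mood = step_mood(mood, meal)          # the feedback loop closes here

for m, label in ((1, "positive"), (0, "negative")):
    print(f"{label} mood: P(healthier meal) = {np.mean(meals_by_mood[m]):.2f}")
```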

  12. Neutronic design studies of a conceptual DCLL fusion reactor for a DEMO and a commercial power plant

    NASA Astrophysics Data System (ADS)

    Palermo, I.; Veredas, G.; Gómez-Ros, J. M.; Sanz, J.; Ibarra, A.

    2016-01-01

    Neutronic analyses or, more widely, nuclear analyses have been performed for the development of a dual-coolant He/LiPb (DCLL) conceptual design reactor. A detailed three-dimensional (3D) model has been examined and optimized. The design is based on the plasma parameters and functional materials of the power plant conceptual studies (PPCS) model C. The initial radial-build for the detailed model has been determined according to the dimensions established in a previous work on an equivalent simplified homogenized reactor model. For optimization purposes, the initial specifications established over the simplified model have been refined on the detailed 3D design, modifying material and dimension of breeding blanket, shield and vacuum vessel in order to fulfil the priority requirements of a fusion reactor in terms of the fundamental neutronic responses. Tritium breeding ratio, energy multiplication factor, radiation limits in the TF coils, helium production and displacements per atom (dpa) have been calculated in order to demonstrate the functionality and viability of the reactor design in guaranteeing tritium self-sufficiency, power efficiency, plasma confinement, and re-weldability and structural integrity of the components. The paper describes the neutronic design improvements of the DCLL reactor, obtaining results for both DEMO and power plant operational scenarios.

  13. Phenotypic outcomes in Mouse and Human Foxc1 dependent Dandy-Walker cerebellar malformation suggest shared mechanisms

    PubMed Central

    Haldipur, Parthiv; Dang, Derek; Aldinger, Kimberly A; Janson, Olivia K; Guimiot, Fabien; Adle-Biasette, Homa; Dobyns, William B; Siebert, Joseph R; Russo, Rosa; Millen, Kathleen J

    2017-01-01

    FOXC1 loss contributes to Dandy-Walker malformation (DWM), a common human cerebellar malformation. Previously, we found that complete Foxc1 loss leads to aberrations in proliferation, neuronal differentiation and migration in the embryonic mouse cerebellum (Haldipur et al., 2014). We now demonstrate that hypomorphic Foxc1 mutant mice have granule and Purkinje cell abnormalities causing subsequent disruptions in postnatal cerebellar foliation and lamination. Particularly striking is the presence of a partially formed posterior lobule which echoes the posterior vermis DW 'tail sign' observed in human imaging studies. Lineage tracing experiments in Foxc1 mutant mouse cerebella indicate that aberrant migration of granule cell progenitors destined to form the posterior-most lobule causes this unique phenotype. Analyses of rare human del chr 6p25 fetal cerebella demonstrate extensive phenotypic overlap with our Foxc1 mutant mouse models, validating our DWM models and demonstrating that many key mechanisms controlling cerebellar development are likely conserved between mouse and human. DOI: http://dx.doi.org/10.7554/eLife.20898.001 PMID:28092268

  14. Conditional random matrix ensembles and the stability of dynamical systems

    NASA Astrophysics Data System (ADS)

    Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.

    2015-08-01

    Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
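
    As a hedged illustration of the i.i.d. baseline the paper critiques (not the authors' conditional framework), the following sketch samples random Jacobians with i.i.d. off-diagonal entries and self-regulation -d on the diagonal, and estimates the probability that all eigenvalues have negative real part; sizes and scales are hypothetical:

      import numpy as np

      rng = np.random.default_rng(0)

      def stable_fraction(n=50, sigma=0.1, d=1.0, trials=500):
          """Fraction of sampled Jacobians that are linearly stable."""
          stable = 0
          for _ in range(trials):
              J = sigma * rng.standard_normal((n, n))  # i.i.d. interaction strengths
              np.fill_diagonal(J, -d)                  # self-regulation
              if np.linalg.eigvals(J).real.max() < 0:
                  stable += 1
          return stable / trials

      # May-style heuristic: stability is likely when sigma * sqrt(n) < d
      print(stable_fraction(sigma=0.1), stable_fraction(sigma=0.2))

    Conditioning on realistic structure, as the paper advocates, amounts to replacing the i.i.d. draw above with a distribution matched to the system at hand.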

  15. A Skilful Marine Sclerochronological Network Based Reconstruction of North Atlantic Subpolar Gyre Dynamics

    NASA Astrophysics Data System (ADS)

    Reynolds, D.; Hall, I. R.; Slater, S. M.; Scourse, J. D.; Wanamaker, A. D.; Halloran, P. R.; Garry, F. K.

    2017-12-01

    Spatial network analyses of precisely dated, annually resolved tree-ring proxy records have facilitated robust reconstructions of past atmospheric climate variability and of the mechanisms and forcings that drive it. In contrast, a lack of similarly dated marine archives has constrained the use of such techniques in the marine realm, despite the potential for developing a more robust understanding of the role basin-scale ocean dynamics play in the global climate system. Here we show that a spatial network of marine molluscan sclerochronological oxygen isotope (δ18Oshell) series spanning the North Atlantic region provides a skilful reconstruction of basin-scale North Atlantic sea surface temperatures (SSTs). Our analyses demonstrate that the composite marine series (referred to as δ18Oproxy_PC1) is significantly sensitive to inter-annual variability in North Atlantic SSTs (R=-0.61, P<0.01) and surface air temperatures (SATs; R=-0.67, P<0.01) over the 20th century. Subpolar gyre (SPG) SSTs dominate variability in the δ18Oproxy_PC1 series at sub-centennial frequencies (R=-0.51, P<0.01). Comparison of the δ18Oproxy_PC1 series against variability in the strength of the European Slope Current and the maximum North Atlantic meridional overturning circulation derived from numerical climate models (CMIP5) indicates that variability in the SPG region, associated with the strength of the surface currents of the North Atlantic, plays a significant role in shaping multi-decadal-scale SST variability over the industrial era. These analyses demonstrate that spatial networks developed from sclerochronological archives can provide powerful baseline records of past ocean variability, facilitating a quantitative understanding of the role the oceans play in the global climate system and constraining uncertainties in numerical climate models.
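
    A minimal sketch of the composite-series idea (toy data, not the published network): standardize the site series, take the leading principal component, and correlate it with an SST index. Note that the sign of a principal component is arbitrary:

      import numpy as np

      rng = np.random.default_rng(1)
      years, n_sites = 100, 8
      sst = 0.1 * rng.standard_normal(years).cumsum()                     # toy SST index
      d18o = -0.5 * sst[:, None] + rng.standard_normal((years, n_sites))  # toy d18O network

      z = (d18o - d18o.mean(0)) / d18o.std(0)        # standardize each site series
      _, _, vt = np.linalg.svd(z, full_matrices=False)
      pc1 = z @ vt[0]                                # leading principal component scores

      print(f"corr(PC1, SST) = {np.corrcoef(pc1, sst)[0, 1]:.2f}")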

  16. The CONCEPTS Global Ice-Ocean Prediction System: Establishing an Environmental Prediction Capability in Canada

    NASA Astrophysics Data System (ADS)

    Pellerin, Pierre; Smith, Gregory; Testut, Charles-Emmanuel; Surcel Colan, Dorina; Roy, Francois; Reszka, Mateusz; Dupont, Frederic; Lemieux, Jean-Francois; Beaudoin, Christiane; He, Zhongjie; Belanger, Jean-Marc; Deacu, Daniel; Lu, Yimin; Buehner, Mark; Davidson, Fraser; Ritchie, Harold; Lu, Youyu; Drevillon, Marie; Tranchant, Benoit; Garric, Gilles

    2015-04-01

    Here we describe a new system implemented recently at the Canadian Meteorological Centre (CMC), the Global Ice Ocean Prediction System (GIOPS). GIOPS provides ice and ocean analyses and 10-day forecasts daily at 00GMT on a global 1/4° resolution grid. GIOPS includes a full multivariate ocean data assimilation system that combines satellite observations of sea level anomaly and sea surface temperature (SST) with in situ observations of temperature and salinity. In situ observations are obtained from a variety of sources, including the Argo network of autonomous profiling floats, moorings, ships of opportunity, marine mammals and research cruises. Ocean analyses are blended with sea ice analyses produced by the Global Ice Analysis System. GIOPS has been developed as part of the Canadian Operational Network of Coupled Environmental PredicTion Systems (CONCEPTS), a tri-departmental initiative among Environment Canada, Fisheries and Oceans Canada and National Defence. The development of GIOPS was made possible through a partnership with Mercator-Océan, a French operational oceanography group, which provided the ocean data assimilation code and assistance with the system implementation. GIOPS has undergone a rigorous evaluation of the analysis, trial and forecast fields, demonstrating its capacity to provide high-quality products in a robust and reliable framework. In particular, SST and ice concentration forecasts demonstrate a clear benefit with respect to persistence. These results support the use of GIOPS products within other CMC operational systems and, more generally, as part of a Government of Canada marine core service. The impact of a two-way coupling between the GEM atmospheric model and the NEMO-CICE ocean-ice model will also be presented.
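
    GIOPS's assimilation code came from Mercator-Océan and is far richer than this, but a toy one-point optimal-interpolation update illustrates the basic blending of a model background with an observation; the error variances are hypothetical:

      def oi_update(background, obs, var_b, var_o):
          """Weight background and observation by their inverse error variances."""
          gain = var_b / (var_b + var_o)
          analysis = background + gain * (obs - background)
          return analysis, (1 - gain) * var_b  # analysis value and its variance

      # Hypothetical SST point: model background 14.0 C, satellite retrieval 14.6 C
      print(oi_update(14.0, 14.6, var_b=0.25, var_o=0.16))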

  17. Mouse genetics and proteomic analyses demonstrate a critical role for complement in a model of DHRD/ML, an inherited macular degeneration

    PubMed Central

    Garland, Donita L.; Fernandez-Godino, Rosario; Kaur, Inderjeet; Speicher, Kaye D.; Harnly, James M.; Lambris, John D.; Speicher, David W.; Pierce, Eric A.

    2014-01-01

    Macular degenerations, inherited and age-related, are important causes of vision loss. Human genetic studies have suggested that perturbation of the complement system is important in the pathogenesis of age-related macular degeneration. The mechanisms underlying the involvement of the complement system are not understood, although complement and inflammation have been implicated in drusen formation. Drusen are an early clinical hallmark of inherited and age-related forms of macular degeneration. We studied one of the earliest stages of macular degeneration, which precedes and leads to the formation of drusen, i.e. the formation of basal deposits. The studies were done using a mouse model of the inherited macular dystrophy Doyne Honeycomb Retinal Dystrophy/Malattia Leventinese (DHRD/ML), which is caused by a p.Arg345Trp mutation in EFEMP1. The hallmark of DHRD/ML is the formation of drusen at an early age, and gene-targeted Efemp1^R345W/R345W mice develop extensive basal deposits. Proteomic analyses of Bruch's membrane/choroid and Bruch's membrane in the Efemp1^R345W/R345W mice indicate that the basal deposits comprise normal extracellular matrix (ECM) components present in abnormal amounts. The proteomic analyses also identified significant changes in proteins with immune-related function, including complement components, in the diseased tissue samples. Genetic ablation of the complement response via generation of Efemp1^R345W/R345W:C3^−/− double-mutant mice inhibited the formation of basal deposits. The results demonstrate a critical role for the complement system in basal deposit formation, and suggest that complement-mediated recognition of abnormal ECM may participate in basal deposit formation in DHRD/ML and perhaps other macular degenerations. PMID:23943789

  18. Precision of Multiple Reaction Monitoring Mass Spectrometry Analysis of Formalin-Fixed, Paraffin-Embedded Tissue

    PubMed Central

    2012-01-01

    We compared the reproducibility of multiple reaction monitoring (MRM) mass spectrometry-based peptide quantitation in tryptic digests from formalin-fixed, paraffin-embedded (FFPE) and frozen clear cell renal cell carcinoma tissues. The analyses targeted a candidate set of 114 peptides previously identified in shotgun proteomic analyses, of which 104 were detectable in FFPE and frozen tissue. Although signal intensities for MRM of peptides from FFPE tissue were on average 66% of those in frozen tissue, median coefficients of variation (CV) for measurements in FFPE and frozen tissues were nearly identical (18–20%). Measurements of lysine C-terminal peptides and arginine C-terminal peptides from FFPE tissue were similarly reproducible (19.5% and 18.3% median CV, respectively). We further evaluated the precision of MRM-based quantitation by analysis of peptides from the HER2 receptor in FFPE and frozen tissues from a HER2-overexpressing mouse xenograft model of breast cancer and in human FFPE breast cancer specimens. We obtained equivalent MRM measurements of HER2 receptor levels in FFPE and frozen mouse xenografts derived from HER2-overexpressing BT474 cells and HER2-negative Sum159 cells. MRM analyses of 5 HER2-positive and 5 HER2-negative human FFPE breast tumors confirmed the results of immunohistochemical analyses, thus demonstrating the feasibility of HER2 protein quantification in FFPE tissue specimens. The data demonstrate that MRM analyses can be performed with equal precision on FFPE and frozen tissues and that lysine-containing peptides can be selected for quantitative comparisons, despite the greater impact of formalin fixation on lysine residues. The data further illustrate the feasibility of applying MRM to quantify clinically important tissue biomarkers in FFPE specimens. PMID:22530795
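
    A minimal sketch of the reproducibility summary used above, with hypothetical peak areas: the percent CV per peptide across replicate measurements, reduced to a median:

      import numpy as np

      # rows = peptides, columns = replicate MRM measurements (hypothetical)
      areas = np.array([[1.00, 1.12, 0.95],
                        [0.48, 0.55, 0.50],
                        [2.10, 1.85, 2.02]])

      cv = 100 * areas.std(axis=1, ddof=1) / areas.mean(axis=1)
      print(f"median CV: {np.median(cv):.1f}%")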

  19. Fracture Analyses of Cracked Delta Eye Plates in Ship Towing

    NASA Astrophysics Data System (ADS)

    Huang, Xiangbing; Huang, Xingling; Sun, Jizheng

    2018-01-01

    Based on fracture mechanics, a safety analysis approach is proposed for cracked delta eye plates in ship towing. A static analysis model is presented for the delta eye plate in service, and a fracture criterion based on the stress intensity factor, estimated with the domain integral method, is introduced. Subsequently, three-dimensional finite element analyses are carried out to obtain the effective stress intensity factors, and a case study demonstrates the soundness of the approach. The results show that classical strength theory is not applicable to evaluating the cracked plate, whereas fracture mechanics solves the problem well, and that the load level a delta eye plate can carry decreases markedly once it is damaged.
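
    As a hedged illustration of the criterion involved (a textbook through-crack formula, not the paper's 3D domain-integral computation), one compares an estimated mode-I stress intensity factor against the material's fracture toughness; all values below are hypothetical:

      import math

      def mode_i_sif(stress_mpa, crack_m, Y=1.0):
          """K_I = Y * sigma * sqrt(pi * a) for an idealized through crack."""
          return Y * stress_mpa * math.sqrt(math.pi * crack_m)

      K_IC = 50.0                                               # MPa*sqrt(m), hypothetical
      K_I = mode_i_sif(stress_mpa=120.0, crack_m=0.01, Y=1.12)  # 10 mm crack
      print(f"K_I = {K_I:.1f}", "-> fracture" if K_I >= K_IC else "-> holds")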

  20. Predictive Techniques for Spacecraft Cabin Air Quality Control

    NASA Technical Reports Server (NTRS)

    Perry, J. L.; Cromes, Scott D. (Technical Monitor)

    2001-01-01

    As assembly of the International Space Station (ISS) proceeds, predictive techniques are used to determine the best approach for handling a variety of cabin air quality challenges. These techniques use equipment offgassing data collected from each ISS module before flight to characterize the trace chemical contaminant load. Combined with crew metabolic loads, these data serve as input to a predictive model for assessing the capability of the onboard atmosphere revitalization systems to handle the overall trace contaminant load as station assembly progresses. The techniques for predicting in-flight air quality are summarized along with results from early ISS mission analyses. Results from ground-based analyses of in-flight air quality samples are compared to the predictions to demonstrate the technique's relative conservatism.
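
    A minimal sketch, under assumed values, of the well-mixed mass balance that such trace-contaminant predictions typically rest on (this is not NASA's model; all numbers are hypothetical):

      # dC/dt = (generation - removal) / volume for a single well-mixed cabin
      V = 100.0    # cabin free volume, m^3
      g = 0.5      # offgassing + metabolic generation, mg/h
      Q = 15.0     # airflow through the contaminant-control system, m^3/h
      eta = 0.9    # single-pass removal efficiency

      C, dt = 0.0, 0.1
      for _ in range(int(240 / dt)):           # 240 h of operations
          C += dt * (g - Q * eta * C) / V
      print(f"concentration: {C:.3f} mg/m^3 (steady state: {g / (Q * eta):.3f})")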

  1. Evidence-based dentistry: a model for clinical practice.

    PubMed

    Faggion, Clóvis M; Tu, Yu-Kang

    2007-06-01

    Making decisions in dentistry should be based on the best evidence available. The objective of this study was to demonstrate a practical procedure and model that clinicians can use to apply the results of well-conducted studies to patient care by critically appraising the evidence with checklists and letter-grade scales. To demonstrate application of this model for critically appraising the quality of research evidence, a hypothetical case involving an adult male with chronic periodontitis is used as an example. To determine the best clinical approach for this patient, a four-step, evidence-based model is demonstrated, consisting of the following: definition of a research question using the PICO format, search and selection of relevant literature, critical appraisal of identified research reports using checklists, and application of the evidence. In this model, the quality of research evidence is assessed quantitatively, with levels of quality assigned letter grades of A, B, and C by evaluating the studies against the QUOROM (Quality of Reporting of Meta-analyses) and CONSORT (Consolidated Standards of Reporting Trials) checklists in a tabular format. For this hypothetical periodontics case, application of the model identified the best available evidence for clinical decision making: one randomized controlled trial and one systematic review of randomized controlled trials. Both studies gave similar answers to the research question. The use of a letter-grade scale allowed an objective analysis of the quality of evidence. A checklist-driven model that assesses and applies evidence to dental practice may substantially improve dentists' decision-making skills.

  2. A Stochastic Tick-Borne Disease Model: Exploring the Probability of Pathogen Persistence.

    PubMed

    Maliyoni, Milliward; Chirove, Faraimunashe; Gaff, Holly D; Govinder, Keshlan S

    2017-09-01

    We formulate and analyse a stochastic epidemic model for the transmission dynamics of a tick-borne disease in a single population using a continuous-time Markov chain approach. The stochastic model is based on an existing deterministic metapopulation tick-borne disease model. We compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness in tick-borne disease dynamics. The probability of disease extinction and that of a major outbreak are computed and approximated using the multitype Galton-Watson branching process and numerical simulations, respectively. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a disease outbreak is more likely if the disease is introduced by infected deer as opposed to infected ticks. These insights demonstrate the importance of host movement in the expansion of tick-borne diseases into new geographic areas.
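
    A hedged sketch of the branching-process calculation behind such extinction probabilities: for a single-type Galton-Watson process, the extinction probability is the smallest fixed point of the offspring probability generating function (the multitype case in the paper is the vector analogue); the offspring distribution below is hypothetical:

      def extinction_probability(offspring_pmf, tol=1e-12):
          """Smallest root of q = G(q), by fixed-point iteration from 0."""
          G = lambda s: sum(p * s**k for k, p in enumerate(offspring_pmf))
          q, q_new = 0.0, G(0.0)
          while abs(q_new - q) >= tol:
              q, q_new = q_new, G(q_new)
          return q_new

      # Hypothetical distribution of secondary infections from one introduction
      print(extinction_probability([0.3, 0.4, 0.2, 0.1]))  # mean 1.1 -> extinction < 1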

  3. A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks

    PubMed Central

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
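
    A minimal sketch (not the authors' hierarchical Bayesian fit) of the basic diffusion process the model decomposes RTs into: noisy evidence drifts between two absorbing boundaries; the parameters are hypothetical:

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_trial(v=0.25, a=1.0, z=0.5, dt=0.001, s=1.0):
          """One trial: start at z*a, drift at rate v, absorb at 0 or a."""
          x, t = z * a, 0.0
          while 0.0 < x < a:
              x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return t, x >= a  # (decision time, upper-boundary response?)

      trials = [simulate_trial() for _ in range(200)]
      print(np.mean([t for t, _ in trials]), np.mean([hit for _, hit in trials]))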

  4. The rising impact of mathematical modelling in epidemiology: antibiotic resistance research as a case study

    PubMed Central

    TEMIME, L.; HEJBLUM, G.; SETBON, M.; VALLERON, A. J.

    2008-01-01

    SUMMARY Mathematical modelling of infectious diseases has gradually become part of public health decision-making in recent years. However, the developing status of modelling in epidemiology and its relationship with other relevant scientific approaches have never been assessed quantitatively. Herein, using antibiotic resistance as a case study, 60 published models were analysed. Their interactions with other scientific fields are reported and their citation impact evaluated, as well as temporal trends. The yearly number of antibiotic resistance modelling publications increased significantly between 1990 and 2006. This rise cannot be explained by the surge of interest in resistance phenomena alone. Moreover, modelling articles are, on average, among the most frequently cited third of articles from the journal in which they were published. The results of this analysis, which might be applicable to other emerging public health problems, demonstrate the growing interest in mathematical modelling approaches to evaluate antibiotic resistance. PMID:17767792

  5. Multicollinearity in hierarchical linear models.

    PubMed

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach for diagnosing the presence of multicollinearity and remedying it in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
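
    A minimal sketch of the standard collinearity diagnostic such assessments build on, the variance inflation factor, with deliberately collinear toy predictors:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 200
      x1 = rng.standard_normal(n)
      x2 = 0.9 * x1 + 0.1 * rng.standard_normal(n)   # nearly collinear with x1
      X = np.column_stack([x1, x2, rng.standard_normal(n)])

      def vif(X, j):
          """VIF_j = 1 / (1 - R^2) from regressing column j on the others."""
          y = X[:, j]
          Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
          beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
          r2 = 1.0 - ((y - Z @ beta).var() / y.var())
          return 1.0 / (1.0 - r2)

      print([round(vif(X, j), 1) for j in range(X.shape[1])])  # large VIFs flag trouble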

  6. Simulating seasonal tropical cyclone intensities at landfall along the South China coast

    NASA Astrophysics Data System (ADS)

    Lok, Charlie C. F.; Chan, Johnny C. L.

    2018-04-01

    A numerical method is developed using a regional climate model (RegCM3) and the Weather Research and Forecasting (WRF) model to predict seasonal tropical cyclone (TC) intensities at landfall for the South China region. In designing the model system, three sensitivity tests were performed to identify the optimal choice of the RegCM3 model domain, the WRF horizontal resolution and the WRF physics packages. Driven by the National Centers for Environmental Prediction Climate Forecast System Reanalysis dataset, the model system can produce a reasonable distribution of TC intensities at landfall on a seasonal scale. Analyses of the model output suggest that the strength and extent of the subtropical ridge in the East China Sea are crucial to simulating TC landfalls in the Guangdong and Hainan provinces. This study demonstrates the potential for predicting TC intensities at landfall on a seasonal basis, as well as for projecting future climate changes using numerical models.

  7. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR LAND DISPOSAL OF RADIOACTIVE WASTE Licenses § 61... characteristics and design features in isolating and segregating the wastes. The analyses must clearly demonstrate... inadvertent intrusion must include demonstration that there is reasonable assurance the waste classification...

  8. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR LAND DISPOSAL OF RADIOACTIVE WASTE Licenses § 61... characteristics and design features in isolating and segregating the wastes. The analyses must clearly demonstrate... inadvertent intrusion must include demonstration that there is reasonable assurance the waste classification...

  9. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR LAND DISPOSAL OF RADIOACTIVE WASTE Licenses § 61... characteristics and design features in isolating and segregating the wastes. The analyses must clearly demonstrate... inadvertent intrusion must include demonstration that there is reasonable assurance the waste classification...

  10. 10 CFR 61.13 - Technical analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR LAND DISPOSAL OF RADIOACTIVE WASTE Licenses § 61... characteristics and design features in isolating and segregating the wastes. The analyses must clearly demonstrate... inadvertent intrusion must include demonstration that there is reasonable assurance the waste classification...

  11. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
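
    A hedged sketch of the core cost-effectiveness arithmetic such simulations aggregate (hypothetical cohort outcomes; this is not the tool itself):

      import numpy as np

      rng = np.random.default_rng(4)
      n = 10_000  # pairs of virtual cohorts, as in the model

      # Hypothetical per-cohort outcomes (cost in $, QALYs): usual care vs. program
      cost_c = rng.normal(42_000, 6_000, n); qaly_c = rng.normal(2.10, 0.25, n)
      cost_p = rng.normal(45_500, 6_000, n); qaly_p = rng.normal(2.35, 0.25, n)

      icer = (cost_p.mean() - cost_c.mean()) / (qaly_p.mean() - qaly_c.mean())
      wtp = 100_000  # willingness-to-pay threshold, $/QALY
      p_ce = np.mean(cost_p - cost_c < wtp * (qaly_p - qaly_c))
      print(f"ICER ${icer:,.0f}/QALY; P(cost-effective at ${wtp:,}/QALY) = {p_ce:.2f}")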

  12. Persistent hemifacial spasm after microvascular decompression: a risk assessment model.

    PubMed

    Shah, Aalap; Horowitz, Michael

    2017-06-01

    Microvascular decompression (MVD) for hemifacial spasm (HFS) provides resolution of disabling symptoms such as eyelid twitching and muscle contractions of the entire hemiface. The primary aim of this study was to evaluate the predictive value of patient demographics and spasm characteristics on long-term outcomes, with or without intraoperative lateral spread response (LSR) as an additional variable in a risk assessment model. A retrospective study was undertaken to evaluate the associations of pre-operative patient characteristics, intraoperative LSR, and the need for a staged procedure with the presence of persistent or recurrent HFS at the time of hospital discharge and at follow-up. A risk assessment model was constructed from six clinically or statistically significant variables identified in the univariate analyses. A receiver operating characteristic curve was generated, and the area under the curve (AUC) was calculated to determine the strength of the predictive model. A first risk assessment model consisted of significant pre-operative variables (Model 1: age >50, female gender, history of botulinum toxin use, platysma muscle involvement). This model demonstrated borderline predictive value for persistent spasm at discharge (AUC .60; p=.045) and fair predictive value at follow-up (AUC .75; p=.001). Intraoperative variables (e.g., LSR persistence) demonstrated little additive value (Model 2: AUC .67). Patients with a higher risk score (three or greater) demonstrated greater odds of persistent HFS at the time of discharge (OR 1.5 [95%CI 1.16-1.97]; p=.035), as well as greater odds of persistent or recurrent spasm at the time of follow-up (OR 3.0 [95%CI 1.52-5.95]; p=.002). In conclusion, a risk assessment model consisting of pre-operative clinical characteristics is useful in prognosticating HFS persistence at follow-up.
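
    A minimal sketch of scoring such a risk model's discrimination with an ROC AUC, using scikit-learn on toy data (the risk-factor counts and outcomes are simulated, not the study's):

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)
      risk_score = rng.integers(0, 5, 300)                    # 0-4 risk factors present
      outcome = rng.random(300) < 0.15 + 0.12 * risk_score    # higher score, higher risk

      print(f"AUC = {roc_auc_score(outcome, risk_score):.2f}")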

  13. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data on juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provides regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  14. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.

  15. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618

  16. On the limitations of General Circulation Climate Models

    NASA Technical Reports Server (NTRS)

    Stone, Peter H.; Risbey, James S.

    1990-01-01

    General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global-scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid-scale parameterizations. This raises the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global-scale climate change.

  17. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
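
    A hedged sketch of the adjustment at issue: with multiple-fatality incidents, the variance of the annual case count involves the sum of squared incident sizes rather than the case count alone (toy data; not the paper's exact estimators):

      import numpy as np

      sizes = np.array([1] * 180 + [2] * 12 + [5] * 2 + [11])  # hypothetical incident sizes
      population = 2_500_000
      rate = sizes.sum() / population * 100_000                # cases per 100,000

      for name, var in [("simple Poisson", sizes.sum()),
                        ("compound Poisson", (sizes**2).sum())]:
          se = np.sqrt(var) / population * 100_000
          print(f"{name}: rate {rate:.2f}, 95% CI +/- {1.96 * se:.2f}")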

  18. A canonical neural mechanism for behavioral variability

    NASA Astrophysics Data System (ADS)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.

  19. Predictors of responses to stress among families coping with poverty-related stress.

    PubMed

    Santiago, Catherine DeCarlo; Etter, Erica Moran; Wadsworth, Martha E; Raviv, Tali

    2012-05-01

    This study tested how poverty-related stress (PRS), psychological distress, and responses to stress predicted future effortful coping and involuntary stress responses one year later. In addition, we explored age, sex, ethnicity, and parental influences on responses to stress over time. Hierarchical linear modeling analyses conducted with 98 low-income families (300 family members: 136 adults, 82 school-aged children, 82 adolescents) revealed that primary control coping, secondary control coping, disengagement, involuntary engagement, and involuntary disengagement each significantly predicted future use of that response. Primary and secondary control coping also predicted less maladaptive future responses to stress, while involuntary responses to stress undermined the development of adaptive responding. Age, sex, and interactions among PRS and prior coping were also found to predict certain responses to stress. In addition, child subgroup analyses demonstrate the importance of parental modeling of coping and involuntary stress responses, and warmth/nurturance and monitoring practices. Results are discussed with regard to the implications for preventive interventions with families in poverty.

  20. Modelling Mathematical Reasoning in Physics Education

    NASA Astrophysics Data System (ADS)

    Uhden, Olaf; Karam, Ricardo; Pietrocola, Maurício; Pospiech, Gesche

    2012-04-01

    Many findings from research as well as reports from teachers describe students' problem solving strategies as manipulation of formulas by rote. The resulting dissatisfaction with quantitative physical textbook problems seems to influence the attitude towards the role of mathematics in physics education in general. Mathematics is often seen as a tool for calculation which hinders a conceptual understanding of physical principles. However, the role of mathematics cannot be reduced to this technical aspect. Hence, instead of putting mathematics away we delve into the nature of physical science to reveal the strong conceptual relationship between mathematics and physics. Moreover, we suggest that, for both prospective teaching and further research, a focus on deeply exploring such interdependency can significantly improve the understanding of physics. To provide a suitable basis, we develop a new model which can be used for analysing different levels of mathematical reasoning within physics. It is also a guideline for shifting the attention from technical to structural mathematical skills while teaching physics. We demonstrate its applicability for analysing physical-mathematical reasoning processes with an example.

  1. A meta-analysis of priming effects on impression formation supporting a general model of informational biases.

    PubMed

    DeCoster, Jamie; Claypool, Heather M

    2004-01-01

    Priming researchers have long investigated how providing information about traits in one context can influence the impressions people form of social targets in another. The literature has demonstrated that this can have 3 different effects: Sometimes primes become incorporated in the impression of the target (assimilation), sometimes they are used as standards of comparison (anchoring), and sometimes they cause people to consciously alter their judgments (correction). In this article, we present meta-analyses of these 3 effects. The mean effect size was significant in each case, such that assimilation resulted in impressions biased toward the primes, whereas anchoring and correction resulted in impressions biased away from the primes. Additionally, moderator analyses uncovered a number of variables that influence the strength of these effects, such as applicability, processing capacity, and the type of response measure. Based on these results, we propose a general model of how irrelevant information can bias judgments, detailing when and why assimilation and contrast effects result from default and corrective processes.

  2. Golden Eagle fatalities and the continental-scale consequences of local wind-energy generation

    USGS Publications Warehouse

    Katzner, Todd E.; Nelson, David M.; Braham, Melissa A.; Doyle, Jacqueline M.; Fernandez, Nadia B.; Duerr, Adam E.; Bloom, Peter H.; Fitzpatrick, Matthew C.; Miller, Tricia A.; Culver, Renee C. E.; Braswell, Loan; DeWoody, J. Andrew

    2017-01-01

    Renewable energy production is expanding rapidly despite mostly unknown environmental effects on wildlife and habitats. We used genetic and stable isotope data collected from Golden Eagles (Aquila chrysaetos) killed at the Altamont Pass Wind Resource Area (APWRA) in California in demographic models to test hypotheses about the geographic extent and demographic consequences of fatalities caused by renewable energy facilities. Geospatial analyses of δ2H values obtained from feathers showed that ≥25% of these APWRA-killed eagles were recent immigrants to the population, most from long distances away (>100 km). Data from nuclear genes indicated this subset of immigrant eagles was genetically similar to birds identified as locals from the δ2H data. Demographic models implied that in the face of this mortality, the apparent stability of the local Golden Eagle population was maintained by continental-scale immigration. These analyses demonstrate that ecosystem management decisions concerning the effects of local-scale renewable energy can have continental-scale consequences.

  3. Golden Eagle fatalities and the continental-scale consequences of local wind-energy generation.

    PubMed

    Katzner, Todd E; Nelson, David M; Braham, Melissa A; Doyle, Jacqueline M; Fernandez, Nadia B; Duerr, Adam E; Bloom, Peter H; Fitzpatrick, Matthew C; Miller, Tricia A; Culver, Renee C E; Braswell, Loan; DeWoody, J Andrew

    2017-04-01

    Renewable energy production is expanding rapidly despite mostly unknown environmental effects on wildlife and habitats. We used genetic and stable isotope data collected from Golden Eagles (Aquila chrysaetos) killed at the Altamont Pass Wind Resource Area (APWRA) in California in demographic models to test hypotheses about the geographic extent and demographic consequences of fatalities caused by renewable energy facilities. Geospatial analyses of δ2H values obtained from feathers showed that ≥25% of these APWRA-killed eagles were recent immigrants to the population, most from long distances away (>100 km). Data from nuclear genes indicated this subset of immigrant eagles was genetically similar to birds identified as locals from the δ2H data. Demographic models implied that in the face of this mortality, the apparent stability of the local Golden Eagle population was maintained by continental-scale immigration. These analyses demonstrate that ecosystem management decisions concerning the effects of local-scale renewable energy can have continental-scale consequences. © 2016 Society for Conservation Biology.

  4. Dynamics and stability of mechanical systems with follower forces

    NASA Technical Reports Server (NTRS)

    Herrmann, G.

    1971-01-01

    A monograph on problems of stability of equilibrium of mechanical systems with follower forces is presented. Concepts and criteria of stability are reviewed briefly, together with means of analytical specification of follower forces. Nondissipative systems with two degrees of freedom are discussed, and destabilizing effects due to various types of dissipative forces, both in discrete and continuous systems, are treated. The analyses are accompanied by some quantitative experiments and observations on demonstrational laboratory models.

  5. Shape control of large space structures

    NASA Technical Reports Server (NTRS)

    Hagan, M. T.

    1982-01-01

    A survey has been conducted to determine the types of control strategies which have been proposed for controlling the vibrations in large space structures. From this survey several representative control strategies were singled out for detailed analyses. The application of these strategies to a simplified model of a large space structure has been simulated. These simulations demonstrate the implementation of the control algorithms and provide a basis for a preliminary comparison of their suitability for large space structure control.

  6. Microlensing for extrasolar planets : improving the photometry

    NASA Astrophysics Data System (ADS)

    Bajek, David J.

    2013-08-01

    Gravitational Microlensing, as a technique for detecting Extrasolar Planets, is recognised for its potential in discovering small-mass planets similar to Earth, at a distance of a few Astronomical Units from their host stars. However, analysing the data from microlensing events (which statistically rarely reveal planets) is complex and requires continued and intensive use of various networks of telescopes working together in order to observe the phenomenon. As such the techniques are constantly being developed and refined; this project outlines some steps of the careful analysis required to model an event and ensure the best quality data is used in the fitting. A quantitative investigation into increasing the quality of the original photometric data available from any microlensing event demonstrates that 'lucky imaging' can lead to a marked improvement in the signal to noise ratio of images over standard imaging techniques, which could result in more accurate models and thus the calculation of more accurate planetary parameters. In addition, a simulation illustrating the effects of atmospheric turbulence on exposures was created, and expanded upon to give an approximation of the lucky imaging technique. This further demonstrated the advantages of lucky images which are shown to potentially approach the quality of those expected from diffraction limited photometry. The simulation may be further developed for potential future use as a 'theoretical lucky imager' in our research group, capable of producing and analysing synthetic exposures through customisable conditions.

  7. Optical eigenmodes for illumination & imaging

    NASA Astrophysics Data System (ADS)

    Kosmeier, Sebastian


  8. Dysfunctional role of parietal lobe during self-face recognition in schizophrenia.

    PubMed

    Yun, Je-Yeon; Hur, Ji-Won; Jung, Wi Hoon; Jang, Joon Hwan; Youn, Tak; Kang, Do-Hyung; Park, Sohee; Kwon, Jun Soo

    2014-01-01

    Anomalous sense of self is central to schizophrenia yet difficult to demonstrate empirically. The present study examined the effective neural network connectivity underlying self-face recognition in patients with schizophrenia (SZ) using [15O]H2O Positron Emission Tomography (PET) and Structural Equation Modeling. Eight SZ and eight age-matched healthy controls (CO) underwent six consecutive [15O]H2O PET scans during self-face (SF) and famous face (FF) recognition blocks, each of which was repeated three times. There were no behavioral performance differences between the SF and FF blocks in SZ. Moreover, voxel-based analyses of data from SZ revealed no significant differences in the regional cerebral blood flow (rCBF) levels between the SF and FF recognition conditions. Further effective connectivity analyses for SZ also showed a similar pattern of effective connectivity network across the SF and FF recognition. On the other hand, comparison of SF recognition effective connectivity network between SZ and CO demonstrated significantly attenuated effective connectivity strength not only between the right supramarginal gyrus and left inferior temporal gyrus, but also between the cuneus and right medial prefrontal cortex in SZ. These findings support a conceptual model that posits a causal relationship between disrupted self-other discrimination and attenuated effective connectivity among the right supramarginal gyrus, cuneus, and prefronto-temporal brain areas involved in the SF recognition network of SZ. © 2013.

  9. A trophic model of fringing coral reefs in Nanwan Bay, southern Taiwan suggests overfishing.

    PubMed

    Liu, Pi-Jen; Shao, Kwang-Tsao; Jan, Rong-Quen; Fan, Tung-Yung; Wong, Saou-Lien; Hwang, Jiang-Shiou; Chen, Jen-Ping; Chen, Chung-Chi; Lin, Hsing-Juh

    2009-09-01

    Several coral reefs of Nanwan Bay, Taiwan have recently undergone shifts to macroalgal or sea anemone dominance. Thus, a mass-balance trophic model was constructed to analyze the structure and functioning of the food web. The fringing reef model comprised 18 compartments, with the highest trophic level of 3.45 for piscivorous fish. Comparative analyses with other reef models demonstrated that Nanwan Bay was similar to reefs with high fishery catches. While coral biomass was not lower, fish biomass was lower than that of reefs with high catches. Consequently, the sums of consumption and respiratory flows and total system throughput were also decreased. The Nanwan Bay model thus suggests a potentially overfished status in which the mean trophic level of the catch, matter cycling, and trophic transfer efficiency are extremely reduced.

  10. Identification of Escherichia coli enterotoxin inhibitors from traditional medicinal herbs by in silico, in vitro, and in vivo analyses.

    PubMed

    Chen, Jaw-Chyun; Ho, Tin-Yun; Chang, Yuan-Shiun; Wu, Shih-Lu; Li, Chia-Cheng; Hsiang, Chien-Yun

    2009-01-30

    Glycyrrhiza uralensis has been used for the treatment of gastrointestinal disorders, such as diarrhea, in several ancient cultures. Glycyrrhizin is the principal component of liquorice, and numerous pharmacological effects of it have been demonstrated. Heat-labile enterotoxin (LT), the virulence factor of enterotoxigenic Escherichia coli, induces diarrhea by initially binding to GM1 on the surfaces of intestinal epithelial cells, consequently leading to the massive loss of fluid and ions from cells. Therefore, we evaluated the inhibitory effects of traditional medicinal herbs (TMH) on the interaction between the B subunit of LT (LTB) and GM1. The inhibitory effects of TMH on the LTB-GM1 interaction were evaluated by GM1 enzyme-linked immunosorbent assay (ELISA). The likely active phytochemicals of these TMH were then predicted by an in silico model (docking) and analyzed by in vitro (GM1-ELISA) and in vivo (patent mouse gut assay) models. We found that various TMH, which have been ethnomedically used for the treatment of diarrhea, inhibited the LTB-GM1 interaction. Docking data showed that triterpenoids were the most active phytochemicals and that oleanane-type triterpenoids presented better LTB-binding abilities than other types of triterpenoids. Moreover, by in vitro and in vivo models, we demonstrated that glycyrrhizin was the most effective oleanane-type triterpenoid, significantly suppressing both the LTB-binding ability (IC50=3.26+/-0.17 mM) and the LT-induced fluid accumulation in mice. Thus, we identified an LT inhibitor, glycyrrhizin, from TMH by in silico, in vitro, and in vivo analyses.

  11. Reality versus fantasy: reply to Lynn et al. (2014).

    PubMed

    Dalenberg, Constance J; Brand, Bethany L; Loewenstein, Richard J; Gleaves, David H; Dorahy, Martin J; Cardeña, Etzel; Frewen, Paul A; Carlson, Eve B; Spiegel, David

    2014-05-01

    We respond to Lynn et al.'s (2014) comments on our review (Dalenberg et al., 2012) demonstrating the superiority of the trauma model (TM) over the fantasy model (FM) in explaining the trauma-dissociation relationship. Lynn et al. conceded that our meta-analytic results support the TM hypothesis that trauma exposure is a causal risk factor for the development of dissociation. Although Lynn et al. suggested that our meta-analyses were selective, we respond that each omitted study failed to meet inclusion criteria; our meta-analyses thus reflect a balanced view of the predominant trauma-dissociation findings. In contrast, Lynn et al. were hypercritical of studies that supported the TM while ignoring methodological problems in studies presented as supportive of the FM. We clarify Lynn et al.'s misunderstandings of the TM and demonstrate consistent superiority in prediction of time course of dissociative symptoms, response to psychotherapy of dissociative patients, and pattern of relationships of trauma to dissociation. We defend our decision not to include studies using the Dissociative Experiences Scale-Comparison, a rarely used revision of the Dissociative Experiences Scale that shares less than 10% of the variance with the original scale. We highlight several areas of agreement: (a) Trauma plays a complex role in dissociation, involving indirect and direct paths; (b) dissociation-suggestibility relationships are small; and (c) controls and measurement issues should be addressed in future suggestibility and dissociation research. Considering the lack of evidence that dissociative individuals simply fantasize trauma, future researchers should examine more complex models of trauma and valid measures of dissociation.

  12. Bim and Gis: when Parametric Modeling Meets Geospatial Data

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Banfi, F.

    2017-12-01

    Geospatial data have a crucial role in several projects related to infrastructures and land management. GIS software can perform advanced geospatial analyses, but it lacks many of the instruments and tools for parametric modelling typically available in BIM. At the same time, BIM software designed for buildings has limited tools to handle geospatial data. As things stand, BIM and GIS can appear to be complementary solutions, although research is under way to ensure a better level of interoperability, especially at the scale of the building. On the other hand, the transition from the local (building) scale to the infrastructure scale (where geospatial data cannot be neglected) has already demonstrated that parametric modelling integrated with geoinformation is a powerful tool to simplify and speed up some phases of the design workflow. This paper reviews such mixed approaches with both simulated and real examples, demonstrating that integration is already a reality at specific scales which are not dominated by "pure" GIS or BIM. The paper also demonstrates that some traditional operations carried out with GIS software are available in parametric modelling software for BIM as well, such as transformation between reference systems, DEM generation, feature extraction, and geospatial queries. A real case study is illustrated and discussed to show the advantage of a combined use of both technologies. BIM and GIS integration can generate greater usage of geospatial data in the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry, as well as new solutions for parametric modelling with additional geoinformation.
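
    A minimal sketch of one operation named above, transformation between reference systems, using the pyproj library (assumed available; the point and EPSG codes are illustrative):

      from pyproj import Transformer

      # WGS84 geographic coordinates -> UTM zone 32N projected coordinates
      transformer = Transformer.from_crs("EPSG:4326", "EPSG:32632", always_xy=True)
      lon, lat = 9.19, 45.46  # an illustrative point near Milan
      easting, northing = transformer.transform(lon, lat)
      print(f"E = {easting:.1f} m, N = {northing:.1f} m")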

  13. Helicopter rotor wake geometry and its influence in forward flight. Volume 1: Generalized wake geometry and wake effect on rotor airloads and performance

    NASA Technical Reports Server (NTRS)

    Egolf, T. A.; Landgrebe, A. J.

    1983-01-01

    An analytic investigation to generalize wake geometry of a helicopter rotor in steady level forward flight and to demonstrate the influence of wake deformation in the prediction of rotor airloads and performance is described. Volume 1 presents a first level generalized wake model based on theoretically predicted tip vortex geometries for a selected representative blade design. The tip vortex distortions are generalized in equation form as displacements from the classical undistorted tip vortex geometry in terms of vortex age, blade azimuth, rotor advance ratio, thrust coefficient, and number of blades. These equations were programmed to provide distorted wake coordinates at very low cost for use in rotor airflow and airloads prediction analyses. The sensitivity of predicted rotor airloads, performance, and blade bending moments to the modeling of the tip vortex distortion are demonstrated for low to moderately high advance ratios for a representative rotor and the H-34 rotor. Comparisons with H-34 rotor test data demonstrate the effects of the classical, predicted distorted, and the newly developed generalized wake models on airloads and blade bending moments. Use of distorted wake models results in the occurrence of numerous blade-vortex interactions on the forward and lateral sides of the rotor disk. The significance of these interactions is related to the number and degree of proximity to the blades of the tip vortices. The correlation obtained with the distorted wake models (generalized and predicted) is encouraging.

  14. New Ground Truth Capability from InSAR Time Series Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, S; Vincent, P; Yang, D

    2005-07-13

    We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter-per-year surface movements when sufficient data exist for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
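
    To illustrate the inversion step mentioned above, the following minimal sketch performs an SBAS-style least-squares inversion from a synthetic stack of interferograms to a cumulative displacement time series. The epoch count, interferogram pair list, displacement rate, and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs = 6                                  # SAR acquisition dates
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5)]

# Each interferogram measures the displacement difference between its two
# epochs; epoch 0 serves as the zero-displacement reference.
A = np.zeros((len(pairs), n_epochs - 1))
for k, (i, j) in enumerate(pairs):
    if i > 0:
        A[k, i - 1] = -1.0
    A[k, j - 1] = 1.0

true_disp = np.cumsum(np.full(n_epochs - 1, 0.4))          # 0.4 cm per epoch
obs = A @ true_disp + rng.normal(0.0, 0.05, len(pairs))    # noisy phases (cm)

# Least-squares inversion for the cumulative displacement time series
est, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("estimated cumulative displacement (cm):", np.round(est, 2))
```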

  15. Modeling the Relative Importance of Nutrient and Carbon Loads, Boundary Fluxes, and Sediment Fluxes on Gulf of Mexico Hypoxia.

    PubMed

    Feist, Timothy J; Pauer, James J; Melendez, Wilson; Lehrter, John C; DePetro, Phillip A; Rygwelski, Kenneth R; Ko, Dong S; Kreis, Russell G

    2016-08-16

    The Louisiana continental shelf in the northern Gulf of Mexico experiences bottom water hypoxia in the summer. In this study, we applied a biogeochemical model that simulates dissolved oxygen concentrations on the shelf in response to varying riverine nutrient and organic carbon loads, boundary fluxes, and sediment fluxes. Five-year model simulations demonstrated that midsummer hypoxic areas were most sensitive to riverine nutrient loads and sediment oxygen demand from settled organic carbon. Hypoxic area predictions were also sensitive to nutrient and organic carbon fluxes from lateral boundaries. The predicted hypoxic area decreased with decreases in nutrient loads, but the extent of change was influenced by the method used to estimate model boundary concentrations. We demonstrated that modeling efforts to predict changes in hypoxic area on the continental shelf in relationship to changes in nutrients should include representative boundary nutrient and organic carbon concentrations and functions for estimating sediment oxygen demand that are linked to settled organic carbon derived from water-column primary production. On the basis of our model analyses using the most representative boundary concentrations, nutrient loads would need to be reduced by 69% to achieve the Gulf of Mexico Nutrient Task Force Action Plan target hypoxic area of 5000 km².

  16. Likelihood of achieving air quality targets under model uncertainties.

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced-form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances the ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
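
    The essence of the Monte Carlo step can be sketched as follows: draw uncertain inputs, propagate them through a reduced-form (here simply linear) pollutant-response relation, and report the fraction of realizations attaining the standard. Every distribution and number below is an illustrative assumption, not a value from the Atlanta application.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

baseline = rng.normal(88.0, 3.0, n)                 # future design value (ppb)
sensitivity = rng.lognormal(np.log(0.05), 0.3, n)   # ppb per ton/day of NOx cut
cut = 120.0                                         # tons/day of reductions
standard = 84.0                                     # ozone target (ppb)

projected = baseline - sensitivity * cut
print("estimated P(attainment) =", np.mean(projected <= standard))
```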

  17. Evaluating mediation and moderation effects in school psychology: A presentation of methods and review of current practice

    PubMed Central

    Fairchild, Amanda J.; McQuillin, Samuel D.

    2017-01-01

    Third variable effects elucidate the relation between two other variables, and can describe why they are related or under what conditions they are related. This article demonstrates methods to analyze two third-variable effects: moderation and mediation. The utility of examining moderation and mediation effects in school psychology is described and current use of the analyses in applied school psychology research is reviewed and evaluated. Proper statistical methods to test the effects are presented, and different effect size measures for the models are provided. Extensions of the basic moderator and mediator models are also described. PMID:20006988
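
    As a concrete companion to the methods described, here is a minimal sketch of the product-of-coefficients test for mediation with a percentile bootstrap confidence interval; the data-generating coefficients and sample size are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)                        # predictor
m = 0.5 * x + rng.normal(size=n)              # mediator (a-path = 0.5)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # outcome (b-path = 0.4)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                # slope of M on X
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # slope of Y on M given X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)               # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))

print("indirect effect:", round(indirect(x, m, y), 3),
      "95% bootstrap CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```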

  18. Thermal analyses of the International Ultraviolet Explorer (IUE) scientific instrument using the NASTRAN thermal analyzer (NTA): A general purpose summary

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1976-01-01

    The NTA Level 15.5.2/3 was used to provide non-linear steady-state (NLSS) and non-linear transient (NLTR) thermal predictions for the International Ultraviolet Explorer (IUE) Scientific Instrument (SI). NASTRAN structural models were used as the basis for the thermal models, which were produced by a straightforward conversion procedure. The accuracy of this technique was subsequently demonstrated by a comparison of NTA predictions with the results of a thermal vacuum test of the IUE Engineering Test Unit (ETU). Completion of these tasks was aided by the use of NTA subroutines.

  19. Evaluating mediation and moderation effects in school psychology: a presentation of methods and review of current practice.

    PubMed

    Fairchild, Amanda J; McQuillin, Samuel D

    2010-02-01

    Third variable effects elucidate the relation between two other variables, and can describe why they are related or under what conditions they are related. This article demonstrates methods to analyze two third-variable effects: moderation and mediation. The utility of examining moderation and mediation effects in school psychology is described and current use of the analyses in applied school psychology research is reviewed and evaluated. Proper statistical methods to test the effects are presented, and different effect size measures for the models are provided. Extensions of the basic moderator and mediator models are also described.

  20. Human-modified temperatures induce species changes: Joint attribution.

    PubMed

    Root, Terry L; MacMynowski, Dena P; Mastrandrea, Michael D; Schneider, Stephen H

    2005-05-24

    Average global surface-air temperature is increasing. Contention exists over relative contributions by natural and anthropogenic forcings. Ecological studies attribute plant and animal changes to observed warming. Until now, temperature-species connections have not been statistically attributed directly to anthropogenic climatic change. Using modeled climatic variables and observed species data, which are independent of thermometer records and paleoclimatic proxies, we demonstrate statistically significant "joint attribution," a two-step linkage: human activities contribute significantly to temperature changes and human-changed temperatures are associated with discernible changes in plant and animal traits. Additionally, our analyses provide independent testing of grid-box-scale temperature projections from a general circulation model (HadCM3).

  1. Learning in Stochastic Bit Stream Neural Networks.

    PubMed

    van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu

    1996-08-01

    This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and the "bit stream" data representation. It has a clean, analysable functionality and great potential to be implemented in hardware using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level. The lowest level corresponds to on-chip learning. Simulation results on the three benchmark MONK's problems and on handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications. Copyright 1996 Elsevier Science Ltd
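
    The "bit stream" representation at the heart of the model can be sketched very compactly: a value p in [0, 1] is carried as a Bernoulli(p) bit stream, and multiplication by a stochastic weight reduces to a bitwise AND of independent streams, which is what makes digital VLSI implementation attractive. The stream length and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 10_000                       # stream length; accuracy grows with L
p, w = 0.8, 0.6                  # an input activation and a stochastic weight

x_stream = rng.random(L) < p     # Bernoulli(p) bit stream
w_stream = rng.random(L) < w     # Bernoulli(w) bit stream

product = np.mean(x_stream & w_stream)    # AND gate estimates p * w
print(f"estimated {product:.3f} vs exact {p * w:.3f}")
```

    Longer streams trade latency for accuracy, since the standard error of the estimate shrinks roughly as 1/sqrt(L).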

  2. Detection of regional air pollution episodes utilizing satellite digital data in the visual range

    NASA Technical Reports Server (NTRS)

    Burke, H.-H. K.

    1982-01-01

    Digital analyses of satellite visible data for selected high-sulfate cases over the northeastern U.S., on July 21 and 22, 1978, are compared with ground-based measurements. Quantitative information on total aerosol loading derived from the satellite digitized data using an atmospheric radiative transfer model is found to agree with the ground measurements, and it is shown that the extent and transport of the haze pattern may be monitored from the satellite data over the period of maximum intensity for the episode. Attention is drawn to the potential benefits of satellite monitoring of pollution episodes demonstrated by the model.

  3. Complete analysis of steady and transient missile aerodynamic/propulsive/plume flowfield interactions

    NASA Astrophysics Data System (ADS)

    York, B. J.; Sinha, N.; Dash, S. M.; Hosangadi, A.; Kenzakowski, D. C.; Lee, R. A.

    1992-07-01

    The analysis of steady and transient aerodynamic/propulsive/plume flowfield interactions utilizing several state-of-the-art computer codes (PARCH, CRAFT, and SCHAFT) is discussed. These codes have been extended to include advanced turbulence models, generalized thermochemistry, and multiphase nonequilibrium capabilities. Several specialized versions of these codes have been developed for specific applications. This paper presents a brief overview of these codes followed by selected cases demonstrating steady and transient analyses of conventional as well as advanced missile systems. Areas requiring upgrades include turbulence modeling in a highly compressible environment and the treatment of particulates in general. Recent progress in these areas is highlighted.

  4. Psychometric properties of the college survey for students with brain injury: individuals with and without traumatic brain injury.

    PubMed

    Kennedy, Mary R T; Krause, Miriam O; O'Brien, Katy H

    2014-01-01

    The psychometric properties of the college challenges sub-set from the College Survey for Students with Brain Injury (CSS-BI) were investigated with adults with and without traumatic brain injury (TBI). Adults with and without TBI completed the CSS-BI. A sub-set of participants with TBI were interviewed, intentional and convergent validity were investigated, and the internal structure of the college challenges was analysed with exploratory factor analysis/principal component analysis. Respondents with TBI understood the items describing college challenges, with evidence of intentional validity. More individuals with TBI than controls endorsed eight of the 13 college challenges. Those who reported more health issues endorsed more college challenges, demonstrating preliminary convergent validity. Cronbach's alphas of >0.85 demonstrated acceptable internal reliability. Factor analysis revealed a four-factor model for those with TBI: studying and learning (Factor 1), time management and organization (Factor 2), social (Factor 3) and nervousness/anxiety (Factor 4). This model explained 72% and 69% of the variance for those with and without TBI, respectively. The college challenges sub-set from the CSS-BI identifies challenges that individuals with TBI face when going to college. Some challenges were related to two factors in the model, demonstrating the inter-connections of these experiences.

  5. Material failure modelling in metals at high strain rates

    NASA Astrophysics Data System (ADS)

    Panov, Vili

    2005-07-01

    Plate impact tests have been conducted on OFHC Cu using a single-stage gas gun. Using stress gauges supported with PMMA blocks on the back of the target plates, stress-time histories have been recorded. After testing, microstructural observations of the softly recovered spalled OFHC Cu specimens were carried out and the evolution of damage was examined. To account for the physical mechanisms of failure, the concept of thermal activation in material separation during fracture processes has been adopted as the basic mechanism for the development of this material failure model. With this basic assumption, the proposed model is compatible with the Mechanical Threshold Stress (MTS) model, and it was therefore incorporated into the MTS material model in DYNA3D. In order to analyse the proposed criterion, a series of FE simulations have been performed for OFHC Cu. The numerical results clearly demonstrate the ability of the model to predict the spall process and the experimentally observed tensile damage and failure. It is possible to simulate high strain rate deformation processes and dynamic failure in tension over a wide range of temperatures. The proposed cumulative criterion, introduced into the DYNA3D code, is able to reproduce the "pull-back" stresses of the free surface caused by the creation of internal spalling, and enables one to analyse numerically the spalling over a wide range of impact velocities.
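
    For orientation only: cumulative spall criteria are often written in the Tuler-Butcher form below, in which damage accumulates whenever the tensile stress exceeds a threshold. The paper's criterion is based on thermal activation and differs in detail, so this is a generic reference form rather than the authors' equation.

```latex
\int_{0}^{t_f} \left[\sigma(t) - \sigma_0\right]^{\lambda} dt \;\geq\; K
```

    Here \sigma_0 is a threshold stress below which no damage accumulates, and \lambda and K are material constants; spall is predicted once the integral reaches K.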

  6. A rapid generalized least squares model for a genome-wide quantitative trait association analysis in families.

    PubMed

    Li, Xiang; Basu, Saonli; Miller, Michael B; Iacono, William G; McGue, Matt

    2011-01-01

    Genome-wide association studies (GWAS) using family data involve association analyses between hundreds of thousands of markers and a trait for a large number of related individuals. The correlations among relatives bring statistical and computational challenges when performing these large-scale association analyses. Recently, several rapid methods accounting for both within- and between-family variation have been proposed. However, these techniques mostly model the phenotypic similarities in terms of genetic relatedness. The familial resemblance in many family-based studies, such as twin studies, is not only due to genetic relatedness but also derives from shared environmental effects and assortative mating. In this paper, we propose two generalized least squares (GLS) models for rapid association analysis of family-based GWAS, which accommodate both genetic and environmental contributions to familial resemblance. In our first model, we estimated the joint genetic and environmental variations. In our second model, we estimated the genetic and environmental components separately. Through simulation studies, we demonstrated that our proposed approaches are more powerful and computationally efficient than a number of existing methods. We show that estimating the residual variance-covariance matrix in the GLS models without SNP effects does not lead to an appreciable bias in the p values as long as the SNP effect is small (i.e. accounting for no more than 1% of trait variance). Copyright © 2011 S. Karger AG, Basel.
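
    The GLS machinery underlying such rapid methods can be sketched as follows: with a residual covariance V capturing familial resemblance (however estimated), the estimator is beta = (X'V^-1 X)^-1 X'V^-1 y. The exchangeable sibling correlation, effect size, and sample layout below are synthetic assumptions for illustration.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(3)
n_fam, rho = 200, 0.4                          # sibling pairs, within-family corr.
V = block_diag(*([np.array([[1.0, rho], [rho, 1.0]])] * n_fam))

n = 2 * n_fam
snp = rng.binomial(2, 0.3, n).astype(float)    # additive genotype coded 0/1/2
X = np.column_stack([np.ones(n), snp])
y = X @ np.array([0.0, 0.15]) + np.linalg.cholesky(V) @ rng.normal(size=n)

# GLS: beta_hat = (X' V^-1 X)^-1 X' V^-1 y
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print("GLS estimate of the SNP effect:", round(beta[1], 3))
```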

  7. Effects of Material Degradation on the Structural Integrity of Composite Materials: Experimental Investigation and Modeling of High Temperature Degradation Mechanisms

    NASA Technical Reports Server (NTRS)

    Cunningham, Ronan A.; McManus, Hugh L.

    1996-01-01

    It has previously been demonstrated that simple coupled reaction-diffusion models can approximate the aging behavior of PMR-15 resin subjected to different oxidative environments. Based on empirically observed phenomena, a model coupling chemical reactions, both thermal and oxidative, with diffusion of oxygen into the material bulk should allow simulation of the aging process. Through preliminary modeling techniques such as this, it has become apparent that accurate analytical models cannot be created until the phenomena that cause the aging of these materials are quantified. An experimental program is currently underway to quantify all of the reaction/diffusion-related mechanisms involved. The following contains a summary of the experimental data that have been collected through thermogravimetric analyses of neat PMR-15 resin, along with analytical predictions from models based on the empirical data. Thermogravimetric analyses were carried out in a number of different environments: nitrogen, air, and oxygen. The runs in nitrogen provide data for the purely thermal degradation mechanisms, while those in air provide data for the coupled oxidative-thermal process. The intent is to effectively subtract the nitrogen-atmosphere data (assumed to represent only thermal reactions) from the air- and oxygen-atmosphere data to back-figure the purely oxidative reactions. Once the purely oxidative (concentration-dependent) reactions have been quantified, it should then be possible to quantify the diffusion of oxygen into the material bulk.

  8. Angle-adjustable density field formulation for the modeling of crystalline microstructure

    NASA Astrophysics Data System (ADS)

    Wang, Zi-Le; Liu, Zhirong; Huang, Zhi-Feng

    2018-05-01

    A continuum density field formulation with particle-scale resolution is constructed to simultaneously incorporate the orientation dependence of interparticle interactions and the rotational invariance of the system, a fundamental but challenging issue in modeling the structure and dynamics of a broad range of material systems across variable scales. This generalized phase field crystal-type approach is based upon the complete expansion of particle direct correlation functions and the concept of isotropic tensors. Through applications to the modeling of various two- and three-dimensional crystalline structures, our study demonstrates the capability of bond-angle control in this continuum field theory and its effects on the emergence of ordered phases, and provides a systematic way of performing tunable angle analyses for crystalline microstructures.

  9. Solar Thermal Upper Stage Liquid Hydrogen Pressure Control Testing and Analytical Modeling

    NASA Technical Reports Server (NTRS)

    Olsen, A. D.; Cady, E. C.; Jenkins, D. S.; Chandler, F. O.; Grayson, G. D.; Lopez, A.; Hastings, L. J.; Flachbart, R. H.; Pedersen, K. W.

    2012-01-01

    The demonstration of a unique liquid hydrogen (LH2) storage and feed system concept for solar thermal upper stage was cooperatively accomplished by a Boeing/NASA Marshall Space Flight Center team. The strategy was to balance thermodynamic venting with the engine thrusting timeline during a representative 30-day mission, thereby, assuring no vent losses. Using a 2 cubic m (71 cubic ft) LH2 tank, proof-of-concept testing consisted of an engineering checkout followed by a 30-day mission simulation. The data were used to anchor a combination of standard analyses and computational fluid dynamics (CFD) modeling. Dependence on orbital testing has been incrementally reduced as CFD codes, combined with standard modeling, continue to be challenged with test data such as this.

  10. Analysis of impact melt and vapor production in CTH for planetary applications

    DOE PAGES

    Quintana, S. N.; Crawford, D. A.; Schultz, P. H.

    2015-05-19

    This study explores impact melt and vapor generation for a variety of impact speeds and materials using the shock physics code CTH. The study first compares the results of two common methods of impact melt and vapor generation to demonstrate that both the peak pressure method and final temperature method are appropriate for high-speed impact models (speeds greater than 10 km/s). However, for low-speed impact models (speeds less than 10 km/s), only the final temperature method is consistent with laboratory analyses to yield melting and vaporization. Finally, a constitutive model for material strength is important for low-speed impacts because strength can cause an increase in melting and vaporization.

  11. Reliability evaluation of microgrid considering incentive-based demand response

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, an IBDR dispatch model considering the customer's comprehensive assessment and a customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of a microgrid.
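
    A stripped-down version of the Monte Carlo reliability calculation with an IBDR effect might look like the sketch below, where incentive-based curtailment trims unserved energy during capacity shortfalls. The outage probability, capacities, and curtailable-load limit are invented parameters, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
hours = 8760
load = 8.0 + 2.0 * rng.random(hours)             # hourly demand (MW)
system_up = rng.random(hours) > 0.02             # 2% forced-outage probability
capacity = np.where(system_up, 12.0, 6.0)        # available supply (MW)

shortfall = np.maximum(load - capacity, 0.0)     # unserved demand without IBDR
ibdr_relief = np.minimum(shortfall, 1.5)         # up to 1.5 MW of curtailable load
print(f"EENS without IBDR: {shortfall.sum():.0f} MWh, "
      f"with IBDR: {(shortfall - ibdr_relief).sum():.0f} MWh")
```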

  12. Consistent Chemical Mechanism from Collaborative Data Processing

    DOE PAGES

    Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...

    2016-04-01

    The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for the evaluation of shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.

  13. Analysis of impact melt and vapor production in CTH for planetary applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, S. N.; Crawford, D. A.; Schultz, P. H.

    This study explores impact melt and vapor generation for a variety of impact speeds and materials using the shock physics code CTH. The study first compares the results of two common methods of impact melt and vapor generation to demonstrate that both the peak pressure method and final temperature method are appropriate for high-speed impact models (speeds greater than 10 km/s). However, for low-speed impact models (speeds less than 10 km/s), only the final temperature method is consistent with laboratory analyses to yield melting and vaporization. Finally, a constitutive model for material strength is important for low-speed impacts because strength can cause an increase in melting and vaporization.

  14. Risk Factors for Sexual Aggression in Young Men: An Expansion of the Confluence Model

    PubMed Central

    Abbey, Antonia; Jacques-Tiura, Angela J.; LeBreton, James M.

    2011-01-01

    There are many explanations for high rates of sexual aggression, with no one theory dominating the field. This study extends past research by evaluating an expanded version of the confluence model with a community sample. One-hour audio computer-assisted self-interviews were completed by 470 young single men. Using structural equation analyses, delinquency, hostile masculinity, impersonal sex, and misperception of women's sexual cues were positively and directly associated with the number of sexually aggressive acts committed. There were also indirect effects of childhood victimization, personality traits associated with subclinical levels of psychopathy, and alcohol consumption. These findings demonstrate the usefulness of the confluence model, as well as the importance of broadening this theory to include additional constructs. PMID:21678429

  15. Didactic discussion of stochastic resonance effects and weak signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adair, R.K.

    1996-12-01

    A simple, paradigmatic model is used to illustrate some general properties of effects subsumed under the label of stochastic resonance. In particular, analyses of the transparent model show that (1) a small amount of noise added to a much larger signal can greatly increase the response to the signal, but (2) a weak signal added to much larger noise will not generate a substantial added response. The conclusions drawn from the model illustrate the general result that stochastic resonance effects do not provide an avenue for signals that are much smaller than the noise to affect biology. A further analysis demonstrates the effects of small signals in the shifting of biologically important chemical equilibria under conditions where stochastic resonance effects are significant.

  16. Model selection for multi-component frailty models.

    PubMed

    Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert

    2007-11-20

    Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.
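
    Both proposed criteria share the generic AIC form; writing \ell for whichever likelihood is adopted (conditional or extended restricted) and k for the effective number of parameters,

```latex
\mathrm{AIC} = -2\,\ell(\hat{\theta}) + 2k
```

    and the candidate frailty structure with the smallest value is preferred. Which likelihood is plugged into \ell is precisely what distinguishes the two criteria.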

  17. SIRANERISK: Modelling dispersion of steady and unsteady pollutant releases in the urban canopy

    NASA Astrophysics Data System (ADS)

    Soulhac, L.; Lamaison, G.; Cierco, F.-X.; Ben Salem, N.; Salizzoni, P.; Mejean, P.; Armand, P.; Patryl, L.

    2016-09-01

    SIRANERISK is an operational model for simulating the dispersion of unsteady atmospheric releases of pollutant within and above an urban area. SIRANERISK is built on the same principles as the SIRANE model, and couples a street network model for the pollutant transfers within the urban canopy with a Gaussian puff model for the transfers above it. The performance of the model is analysed here by detailed comparisons with wind-tunnel experiments. These experiments concern the dispersion of steady and unsteady pollutant releases within and above obstacle arrays with varying geometrical configurations, representing different topologies of idealised urban districts. The overall good agreement between numerical and experimental data demonstrates the reliability of SIRANERISK as an operational tool for risk assessment and for the management of crises due to the accidental release of harmful airborne pollutants within a built environment.

  18. Variable-Internal-Stores models of microbial growth and metabolism with dynamic allocation of cellular resources.

    PubMed

    Nev, Olga A; van den Berg, Hugo A

    2017-01-01

    Variable-Internal-Stores models of microbial metabolism and growth have proven to be invaluable in accounting for changes in cellular composition as microbial cells adapt to varying conditions of nutrient availability. Here, such a model is extended with explicit allocation of molecular building blocks among various types of catalytic machinery. Such an extension allows a reconstruction of the regulatory rules employed by the cell as it adapts its physiology to changing environmental conditions. Moreover, the extension proposed here creates a link between classic models of microbial growth and analyses based on detailed transcriptomics and proteomics data sets. We ascertain the compatibility between the extended Variable-Internal-Stores model and the classic models, demonstrate its behaviour by means of simulations, and provide a detailed treatment of the uniqueness and the stability of its equilibrium point as a function of the availabilities of the various nutrients.

  19. Further refinement of the Escherichia coli brain abscess model in rat.

    PubMed

    Nazzaro, J M; Pagel, M A; Neuwelt, E A

    1992-09-01

    The rat brain abscess model provides a substrate for modeling the delivery of therapeutic agents to intracerebral mass lesions. We now report refinement of the Escherichia coli brain abscess model in rat. A K1 surface antigen-negative E. coli isolated from human blood culture was stereotaxically inoculated into deep brain sites. Histopathologic analyses and quantitative cultures demonstrated the consistent production of lesions. No animal in this consecutive series developed meningitis, ventriculitis or sepsis. By contrast, prior experience with E. coli abscess production resulted in a 25% rate of failed abscess production or death from sepsis. This improvement in the model may be attributable to specific characteristics of the bacteria used, modification of the inoculation method or the intracerebral placement technique. The present work suggests a reliable and consistent brain abscess model, which may be used further to study brain suppuration.

  20. PyCoTools: A Python Toolbox for COPASI.

    PubMed

    Welsh, Ciaran M; Fullard, Nicola; Proctor, Carole J; Martinez-Guimera, Alvaro; Isfort, Robert J; Bascom, Charles C; Tasseff, Ryan; Przyborski, Stefan A; Shanley, Daryl P

    2018-05-22

    COPASI is an open source software package for constructing, simulating and analysing dynamic models of biochemical networks. COPASI is primarily intended to be used with a graphical user interface, but it is often desirable to be able to access COPASI features programmatically through a high-level interface. PyCoTools is a Python package aimed at providing a high-level interface to COPASI tasks, with an emphasis on model calibration. PyCoTools enables the construction of COPASI models and the execution of a subset of COPASI tasks, including time courses, parameter scans and parameter estimations. Additional 'composite' tasks, which use COPASI tasks as building blocks, are available for increasing parameter estimation throughput, performing identifiability analysis and performing model selection. PyCoTools supports exploratory data analysis on parameter estimation data to assist with troubleshooting model calibrations. We demonstrate PyCoTools by posing a model selection problem designed to showcase PyCoTools within a realistic scenario. The aim of the model selection problem is to test the feasibility of three alternative hypotheses in explaining experimental data derived from neonatal dermal fibroblasts in response to TGF-β over time. PyCoTools is used to critically analyse the parameter estimations and propose strategies for model improvement. PyCoTools can be downloaded from the Python Package Index (PyPI) using the command 'pip install pycotools' or directly from GitHub (https://github.com/CiaranWelsh/pycotools). Documentation at http://pycotools.readthedocs.io. Supplementary data are available at Bioinformatics.

  1. Robust artificial neural network for reliability and sensitivity analyses of complex non-linear systems.

    PubMed

    Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo

    2017-12-01

    Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data points representative of the input/output relationship of the underlying model of interest. However, different-performing ANNs might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best performing ANN. On the other hand, using cross-validation to select the best performing ANN based on the highest R2 value can lead to bias in the prediction, because R2 cannot determine whether the prediction made by an ANN is biased. Additionally, R2 does not indicate whether a model is adequate, as it is possible to have a low R2 for a good model and a high R2 for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling a Bayesian framework with model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL), treated in this study as a black box employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
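
    The general idea, though not the authors' exact machinery, can be sketched by training identically configured networks from different random initializations and reporting the ensemble mean with a simple spread-based interval; scikit-learn's MLPRegressor and the two-sigma band below are stand-ins for the paper's Bayesian model-averaging approach.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-2.0, 2.0, (400, 1))
y = np.sin(X).ravel() + rng.normal(0.0, 0.1, 400)

preds = []
for seed in range(10):                     # identical nets, fresh initialization
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                       random_state=seed).fit(X, y)
    preds.append(net.predict([[1.0]])[0])  # predict at a single query point

preds = np.array(preds)
mean, sd = preds.mean(), preds.std(ddof=1)
print(f"ensemble prediction {mean:.3f}, "
      f"rough 95% interval [{mean - 2*sd:.3f}, {mean + 2*sd:.3f}]")
```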

  2. Conditionally reprogrammed normal and primary tumor prostate epithelial cells: a novel patient-derived cell model for studies of human prostate cancer

    PubMed Central

    Timofeeva, Olga A.; Palechor-Ceron, Nancy; Li, Guanglei; Yuan, Hang; Krawczyk, Ewa; Zhong, Xiaogang; Liu, Geng; Upadhyay, Geeta; Dakic, Aleksandra; Yu, Songtao; Fang, Shuang; Choudhury, Sujata; Zhang, Xueping; Ju, Andrew; Lee, Myeong-Seon; Dan, Han C.; Ji, Youngmi; Hou, Yong; Zheng, Yun-Ling; Albanese, Chris; Rhim, Johng; Schlegel, Richard; Dritschilo, Anatoly; Liu, Xuefeng

    2017-01-01

    Our previous study demonstrated that conditional reprogramming (CR) allows the establishment of patient-derived normal and tumor epithelial cell cultures from a variety of tissue types including breast, lung, colon and prostate. Using CR, we have established matched normal and tumor cultures, GUMC-29 and GUMC-30 respectively, from a patient's prostatectomy specimen. These CR cells proliferate indefinitely in vitro and retain stable karyotypes. Most importantly, only tumor-derived CR cells (GUMC-30) produced tumors in xenografted SCID mice, demonstrating maintenance of the critical tumor phenotype. Characterization of cells with DNA fingerprinting demonstrated identical patterns in normal and tumor CR cells as well as in xenografted tumors. By flow cytometry, both normal and tumor CR cells expressed basal, luminal, and stem cell markers, with the majority of the normal and tumor CR cells expressing prostate basal cell markers, CD44 and Trop2, as well as luminal marker, CD13, suggesting a transit-amplifying phenotype. Consistent with this phenotype, real time RT-PCR analyses demonstrated that CR cells predominantly expressed high levels of basal cell markers (KRT5, KRT14 and p63), and low levels of luminal markers. When the CR tumor cells were injected into SCID mice, the expression of luminal markers (AR, NKX3.1) increased significantly, while basal cell markers dramatically decreased. These data suggest that CR cells maintain high levels of proliferation and low levels of differentiation in the presence of feeder cells and ROCK inhibitor, but undergo differentiation once injected into SCID mice. Genomic analyses, including SNP and INDEL, identified genes mutated in tumor cells, including components of apoptosis, cell attachment, and hypoxia pathways. The use of matched patient-derived cells provides a unique in vitro model for studies of early prostate cancer. PMID:28009986

  3. Primary implant stability in a bone model simulating clinical situations for the posterior maxilla: an in vitro study

    PubMed Central

    2016-01-01

    Purpose The aim of this study was to determine the influence of anatomical conditions on primary stability in models simulating the posterior maxilla. Methods Polyurethane blocks were designed to simulate monocortical (M) and bicortical (B) conditions. Each condition had four subgroups measuring 3 mm (M3, B3), 5 mm (M5, B5), 8 mm (M8, B8), and 12 mm (M12, B12) in residual bone height (RBH). After implant placement, the implant stability quotient (ISQ), Periotest value (PTV), insertion torque (IT), and reverse torque (RT) were measured. Two-factor ANOVA (two cortical conditions × four RBHs) and additional analyses for simple main effects were performed. Results A significant interaction between cortical condition and RBH was demonstrated for all methods measuring stability with two-factor ANOVA. In the analyses for simple main effects, ISQ and PTV were statistically higher in the bicortical groups than in the corresponding monocortical groups. In the monocortical group, ISQ and PTV showed a statistically significant rise with increasing RBH. Measurements of IT and RT showed a similar tendency, measuring highest in the M3 group, followed by the M8, the M5, and the M12 groups. In the bicortical group, all variables showed a similar tendency, with different degrees of rise and decline. The B8 group showed the highest values, followed by the B12, the B5, and the B3 groups. The highest correlation coefficient was demonstrated between ISQ and PTV. Conclusions Primary stability was enhanced by the presence of bicortex and increased RBH, which may be better demonstrated by ISQ and PTV than by IT and RT. PMID:27588215

  4. Aeroelastic Airworthiness Assessment of the Adaptive Compliant Trailing Edge Flaps

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia Y.; Spivey, Natalie D.; Lung, Shun-fat; Ervin, Gregory; Flick, Peter

    2015-01-01

    The Adaptive Compliant Trailing Edge (ACTE) demonstrator is a joint task under the National Aeronautics and Space Administration Environmentally Responsible Aviation Project in partnership with the Air Force Research Laboratory and FlexSys, Inc. (Ann Arbor, Michigan). The project goal is to develop advanced technologies that enable environmentally friendly aircraft, such as adaptive compliant technologies. The ACTE demonstrator flight-test program encompassed replacing the Fowler flaps on the SubsoniC Aircraft Testbed, a modified Gulfstream III (Gulfstream Aerospace, Savannah, Georgia) aircraft, with control surfaces developed by FlexSys: a pair of uniquely designed unconventional flaps to be used as lifting surfaces during flight-testing to validate their structural effectiveness. The unconventional flaps required a multidisciplinary airworthiness assessment to prove they could withstand the prescribed flight envelope. Several challenges were posed by the large deflections experienced by the structure, which required non-linear analysis methods. The aeroelastic assessment necessitated both conventional and more extensive testing and analysis methods. A series of ground vibration tests (GVTs) were conducted to provide modal characteristics to validate and update the finite element models (FEMs) used in the flutter analyses for a subset of the various flight configurations. Numerous FEMs were developed using data from FlexSys and the ground tests. The flap FEMs were then attached to the aircraft model to generate a combined FEM that could be analyzed for aeroelastic instabilities. The aeroelastic analysis results showed that the combined system of aircraft and flaps was predicted to have the required flutter margin to successfully demonstrate the adaptive compliant technology. This paper documents the details of the aeroelastic airworthiness assessment, including the ground testing and analyses, and the subsequent flight-testing performed on the unconventional ACTE flaps.

  5. Naringin in Ganshuang Granule suppresses activation of hepatic stellate cells for anti-fibrosis effect by inhibition of mammalian target of rapamycin.

    PubMed

    Shi, Hongbo; Shi, Honglin; Ren, Feng; Chen, Dexi; Chen, Yu; Duan, Zhongping

    2017-03-01

    A previous study demonstrated that Ganshuang granule (GSG) plays an anti-fibrotic role partially through deactivation of hepatic stellate cells (HSCs). In HSC activation, mammalian target of rapamycin (mTOR)-autophagy plays an important role. We attempted to investigate the role of mTOR-autophagy in the anti-fibrotic effect of GSG. A cirrhotic mouse model was prepared to demonstrate the anti-fibrotic effect of GSG. High performance liquid chromatography (HPLC) analyses were used to identify the active component of GSG. Primary mouse HSCs were isolated, and naringin was added to activated HSCs to observe its anti-fibrotic effect. 3-methyladenine (3-MA) and insulin-like growth factor-1 (IGF-1) were added, respectively, to fully activated HSCs to explore the roles of autophagy and mTOR. GSG played an anti-fibrotic role through deactivation of HSCs in the cirrhotic mouse model. The concentration of naringin was highest in GSG by HPLC analyses, and naringin markedly suppressed HSC activation in vitro, which suggested that naringin is the main active component of GSG. The deactivation of HSCs caused by naringin was due not to autophagic activation but to mTOR inhibition, which was supported by the following evidence: first, naringin induced autophagic activation, but when autophagy was blocked by 3-MA, deactivation of HSCs was not attenuated or reversed; second, naringin inhibited the mTOR pathway, and when mTOR was activated by IGF-1, deactivation of HSCs was reversed. In conclusion, we have demonstrated that naringin in GSG suppresses activation of HSCs for an anti-fibrotic effect by inhibition of mTOR, indicating a potential therapeutic application for liver cirrhosis. © 2016 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.

  6. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern-mixture models and shared-parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  7. How coping styles, cognitive distortions, and attachment predict problem gambling among adolescents and young adults.

    PubMed

    Calado, Filipa; Alexandre, Joana; Griffiths, Mark D

    2017-12-01

    Background and aims Recent research suggests that youth problem gambling is associated with several factors, but little is known about how these factors might influence or interact with each other in predicting this behavior. Consequently, this is the first study to examine the mediation effect of coping styles on the relationship between attachment to parental figures and problem gambling. Methods A total of 988 adolescents and emerging adults were recruited to participate. The first set of analyses tested the adequacy of a model comprising biological, cognitive, and family variables in predicting youth problem gambling. The second set of analyses explored the relationship between family and individual variables in problem gambling behavior. Results The first set of analyses demonstrated that the individual factors of gender, cognitive distortions, and coping styles had a significant predictive effect on youth problematic gambling, whereas the family factors of attachment and family structure did not reveal a significant influence on this behavior. The second set of analyses demonstrated that the attachment dimension of angry distress exerted a more indirect influence on problematic gambling, through an emotion-focused coping style. Discussion This study revealed that some family variables can have a more indirect effect on youth gambling behavior and provided some insights into how some factors interact in predicting problem gambling. Conclusion These findings suggest that youth gambling is a multifaceted phenomenon and that the indirect effects of family variables are important in estimating the complex social forces that might influence adolescent decisions to gamble.

  8. Comparison of CEAS and Williams-type models for spring wheat yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1982-01-01

    The CEAS and Williams-type yield models are both based on multiple regression analysis of historical time series data at CRD level. The CEAS model develops a separate relation for each CRD; the Williams-type model pools CRD data to regional level (groups of similar CRDs). Basic variables considered in the analyses are USDA yield, monthly mean temperature, monthly precipitation, and variables derived from these. The Williams-type model also used soil texture and topographic information. Technological trend is represented in both by piecewise linear functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test of each model (1970-1979) demonstrate that the models are very similar in performance in all respects. Both models are about equally objective, adequate, timely, simple, and inexpensive. Both consider scientific knowledge on a broad scale but not in detail. Neither provides a good current measure of modeled yield reliability. The CEAS model is considered very slightly preferable for AgRISTARS applications.
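
    The piecewise linear technology trend used by both models can be illustrated with a hinge regressor of the form max(year - knot, 0); the knot year, slopes, and noise level below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1950, 1980)
knot = 1965                                     # illustrative breakpoint year

trend = 0.02 * (years - 1950) + 0.05 * np.maximum(years - knot, 0)
yields = 12.0 + trend + rng.normal(0.0, 0.15, len(years))

# Regress yield on year and a hinge term max(year - knot, 0)
X = np.column_stack([np.ones(len(years)),
                     (years - 1950).astype(float),
                     np.maximum(years - knot, 0).astype(float)])
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
print("trend slope before the knot:", round(coef[1], 3),
      "after the knot:", round(coef[1] + coef[2], 3))
```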

  9. Three-dimensional variational assimilation of MODIS aerosol optical depth: Implementation and application to a dust storm over East Asia

    NASA Astrophysics Data System (ADS)

    Liu, Zhiquan; Liu, Quanhua; Lin, Hui-Chuan; Schwartz, Craig S.; Lee, Yen-Huei; Wang, Tijian

    2011-12-01

    Assimilation of the Moderate Resolution Imaging Spectroradiometer (MODIS) total aerosol optical depth (AOD) retrieval products (at 550 nm wavelength) from both Terra and Aqua satellites has been developed within the National Centers for Environmental Prediction (NCEP) Gridpoint Statistical Interpolation (GSI) three-dimensional variational (3DVAR) data assimilation system. This newly developed algorithm allows, in a one-step procedure, the analysis of the 3-D mass concentration of 14 aerosol variables from the Goddard Chemistry Aerosol Radiation and Transport (GOCART) module. The Community Radiative Transfer Model (CRTM) was extended to calculate AOD using GOCART aerosol variables as input. Both the AOD forward model and the corresponding Jacobian model were developed within the CRTM and used in the 3DVAR minimization algorithm to compute the AOD cost function and its gradient with respect to 3-D aerosol mass concentration. The impact of MODIS AOD data assimilation was demonstrated by application to a dust storm from 17 to 24 March 2010 over East Asia. The aerosol analyses initialized Weather Research and Forecasting/Chemistry (WRF/Chem) model forecasts. Results indicate that assimilating MODIS AOD substantially improves aerosol analyses and subsequent forecasts when compared to MODIS AOD, independent AOD observations from the Aerosol Robotic Network (AERONET) and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument, and surface PM10 (particulate matter with diameters less than 10 μm) observations. The newly developed AOD data assimilation system can serve as a tool to improve simulations of dust storms and general air quality analyses and forecasts.
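
    The scheme minimizes the standard 3DVAR cost function; in this setting x would be the 3-D aerosol control vector, x_b the background, H the CRTM-based AOD operator, y the MODIS observations, and B and R the background and observation error covariances:

```latex
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\left[H(\mathbf{x})-\mathbf{y}\right]^{\mathrm{T}}\mathbf{R}^{-1}\left[H(\mathbf{x})-\mathbf{y}\right]
```

    Minimizing J requires its gradient with respect to x, which is why the Jacobian of H (the CRTM AOD Jacobian model described above) is needed.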

  10. Pancreatic Tissue Transplanted in TheraCyte Encapsulation Devices Is Protected and Prevents Hyperglycemia in a Mouse Model of Immune-Mediated Diabetes.

    PubMed

    Boettler, Tobias; Schneider, Darius; Cheng, Yang; Kadoya, Kuniko; Brandon, Eugene P; Martinson, Laura; von Herrath, Matthias

    2016-01-01

    Type 1 diabetes (T1D) is characterized by destruction of glucose-responsive insulin-producing pancreatic β-cells and exhibits immune infiltration of pancreatic islets, where CD8 lymphocytes are most prominent. Curative transplantation of pancreatic islets is seriously hampered by the persistence of autoreactive immune cells that require high doses of immunosuppressive drugs. An elegant approach to confer graft protection while obviating the need for immunosuppression is the use of encapsulation devices that allow for the transfer of oxygen and nutrients, yet prevent immune cells from making direct contact with the islet grafts. Here we demonstrate that macroencapsulation devices (TheraCyte) loaded with neonatal pancreatic tissue and transplanted into RIP-LCMV.GP mice prevented disease onset in a model of virus-induced diabetes mellitus. Histological analyses revealed that insulin-producing cells survived within the device in animal models of diabetes. Our results demonstrate that these encapsulation devices can protect from an immune-mediated attack and can contain a sufficient amount of insulin-producing cells to prevent overt hyperglycemia.

  11. Can biomechanical variables predict improvement in crouch gait?

    PubMed Central

    Hicks, Jennifer L.; Delp, Scott L.; Schwartz, Michael H.

    2011-01-01

    Many patients respond positively to treatments for crouch gait, yet surgical outcomes are inconsistent and unpredictable. In this study, we developed a multivariable regression model to determine if biomechanical variables and other subject characteristics measured during a physical exam and gait analysis can predict which subjects with crouch gait will demonstrate improved knee kinematics on a follow-up gait analysis. We formulated the model and tested its performance by retrospectively analyzing 353 limbs of subjects who walked with crouch gait. The regression model was able to predict which subjects would demonstrate ‘improved’ and ‘unimproved’ knee kinematics with over 70% accuracy, and was able to explain approximately 49% of the variance in subjects’ change in knee flexion between gait analyses. We found that improvement in stance phase knee flexion was positively associated with three variables that were drawn from knowledge about the biomechanical contributors to crouch gait: i) adequate hamstrings lengths and velocities, possibly achieved via hamstrings lengthening surgery, ii) normal tibial torsion, possibly achieved via tibial derotation osteotomy, and iii) sufficient muscle strength. PMID:21616666

  12. Exploring the Mechanisms of Ecological Land Change Based on the Spatial Autoregressive Model: A Case Study of the Poyang Lake Eco-Economic Zone, China

    PubMed Central

    Xie, Hualin; Liu, Zhifei; Wang, Peng; Liu, Guiying; Lu, Fucai

    2013-01-01

    Ecological land is one of the key resources and conditions for the survival of humans because it can provide ecosystem services and is particularly important to public health and safety. It is extremely valuable for effective ecological management to explore the evolution mechanisms of ecological land. Based on spatial statistical analyses, we explored the spatial disparities and primary potential drivers of ecological land change in the Poyang Lake Eco-economic Zone of China. The results demonstrated that the global Moran’s I value is 0.1646 during the 1990 to 2005 time period and indicated significant positive spatial correlation (p < 0.05). The results also imply that the clustering trend of ecological land changes weakened in the study area. Some potential driving forces were identified by applying the spatial autoregressive model in this study. The results demonstrated that the higher economic development level and industrialization rate were the main drivers for the faster change of ecological land in the study area. This study also tested the superiority of the spatial autoregressive model to study the mechanisms of ecological land change by comparing it with the traditional linear regressive model. PMID:24384778
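
    The global Moran's I statistic quoted above can be computed as in this minimal sketch; the toy chain of spatial units and their values are illustrative.

```python
import numpy as np

def morans_i(z, W):
    """Global Moran's I for values z with spatial weights matrix W."""
    z = z - z.mean()
    return len(z) * (z @ W @ z) / (W.sum() * (z @ z))

# Toy example: five units along a line, adjacent neighbors, row-standardized
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
W /= W.sum(axis=1, keepdims=True)

z = np.array([4.0, 3.0, 2.0, 1.0, 0.0])         # smooth spatial trend
print("Moran's I:", round(morans_i(z, W), 3))   # positive: values cluster
```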

  13. A theory of planned behaviour-based analysis of TIMSS 2011 to determine factors influencing inquiry teaching practices in high-performing countries

    NASA Astrophysics Data System (ADS)

    Pongsophon, Pongprapan; Herman, Benjamin C.

    2017-07-01

    Given the abundance of literature describing the strong relationship between inquiry-based teaching and student achievement, more should be known about the factors impacting science teachers' classroom inquiry implementation. This study utilises the theory of planned behaviour to propose and validate a causal model of inquiry-based teaching through analysing data relating to high-performing countries retrieved from the 2011 Trends in International Mathematics and Science Study assessments. Data analysis was completed through structural equation modelling using a polychoric correlation matrix for data input and diagonally weighted least squares estimation. Adequate fit of the full model to the empirical data was realised. The model demonstrates that the extent the teachers participated in academic collaborations was positively related to their occupational satisfaction, confidence in teaching inquiry, and classroom inquiry practices. Furthermore, the teachers' confidence with implementing inquiry was positively related to their classroom inquiry implementation and occupational satisfaction. However, perceived student-generated constraints demonstrated a negative relationship with the teachers' confidence with implementing inquiry and occupational satisfaction. Implications from this study include supporting teachers through promoting collaborative opportunities that facilitate inquiry-based practices and occupational satisfaction.

  14. How well do simulated last glacial maximum tropical temperatures constrain equilibrium climate sensitivity?

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Valdes, Paul J.

    2015-07-01

    Previous work demonstrated a significant correlation between tropical surface air temperature and equilibrium climate sensitivity (ECS) in PMIP (Paleoclimate Modelling Intercomparison Project) phase 2 model simulations of the last glacial maximum (LGM). This implies that reconstructed LGM cooling in this region could provide information about the climate system ECS value. We analyze results from new simulations of the LGM performed as part of Coupled Model Intercomparison Project (CMIP5) and PMIP phase 3. These results show no consistent relationship between the LGM tropical cooling and ECS. A radiative forcing and feedback analysis shows that a number of factors are responsible for this decoupling, some of which are related to vegetation and aerosol feedbacks. While several of the processes identified are LGM specific and do not impact on elevated CO2 simulations, this analysis demonstrates one area where the newer CMIP5 models behave in a qualitatively different manner compared with the older ensemble. The results imply that so-called Earth System components such as vegetation and aerosols can have a significant impact on the climate response in LGM simulations, and this should be taken into account in future analyses.

  15. Empirically derived personality subtyping for predicting clinical symptoms and treatment response in bulimia nervosa.

    PubMed

    Haynos, Ann F; Pearson, Carolyn M; Utzinger, Linsey M; Wonderlich, Stephen A; Crosby, Ross D; Mitchell, James E; Crow, Scott J; Peterson, Carol B

    2017-05-01

    Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Using variables from the Dimensional Assessment of Personality Pathology-Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = 0.03) and purging (p = 0.01) frequency at EOT and binge eating frequency at follow-up (p = 0.045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = 0.04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2017; 50:506-514).
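
    A minimal sketch of the subtyping step follows: standardized personality-pathology scores are partitioned into three clusters with K-means and an outcome is compared across clusters. The six features and the outcome are simulated placeholders, not the DAPP-BQ data.

    ```python
    # K-means subtyping of personality scores, then a per-cluster outcome summary.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    scores = rng.normal(size=(80, 6))      # n = 80 patients, 6 placeholder subscales

    X = StandardScaler().fit_transform(scores)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    binge_freq = rng.poisson(lam=8, size=80)  # hypothetical end-of-treatment outcome
    for k in range(3):
        print(f"cluster {k}: n = {(labels == k).sum()}, "
              f"mean binge frequency = {binge_freq[labels == k].mean():.1f}")
    ```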

  16. Chemical profiling of ancient hearths reveals recurrent salmon use in Ice Age Beringia

    PubMed Central

    Choy, Kyungcheol; Potter, Ben A.; McKinney, Holly J.; Reuther, Joshua D.; Wang, Shiway W.; Wooller, Matthew J.

    2016-01-01

    Current approaches to reconstruct subsistence and dietary trends in ancient hunter-gatherer societies include stable isotope analyses, but these have focused on human remains, cooking pottery, and food residues, which are relatively rare in the archaeological record. In contrast, short-term hearths are more ubiquitous worldwide, and these features can provide valuable evidence for ancient subsistence practices, particularly when faunal remains are not preserved. To test the suitability of hearths for this purpose, we conducted multiple chemical analyses: stable carbon and nitrogen isotope analyses of total organic matter (expressed as δ13C and δ15N values) and compound-specific carbon isotope analyses of individual fatty acids (δ13C16:0 and δ13C18:0) from 17 well-preserved hearths present in three occupations dating between ∼13,200–11,500 calibrated years B.P. at the Upward Sun River (USR) site in central Alaska. We combined δ15N and δ13CFA data in a Bayesian mixing model (stable isotope analysis in R) with concentration dependency to each hearth. Our model values were tested against faunal indices, indicating a strong positive relationship between marine proportional contributions to each hearth and salmon abundance. Results of the models show substantial anadromous salmon use in multiple USR components, indicating recurrent use of the site for salmon processing during the terminal Pleistocene. Our results demonstrate that salmonid and freshwater resources were more important for late Pleistocene hunter-gatherers than previously thought and highlight the potential of chemical profiling of hearth organic residues for providing greater geographic and temporal insights into resource use by prepottery societies. PMID:27573838

  17. Chemical profiling of ancient hearths reveals recurrent salmon use in Ice Age Beringia.

    PubMed

    Choy, Kyungcheol; Potter, Ben A; McKinney, Holly J; Reuther, Joshua D; Wang, Shiway W; Wooller, Matthew J

    2016-08-30

    Current approaches to reconstruct subsistence and dietary trends in ancient hunter-gatherer societies include stable isotope analyses, but these have focused on human remains, cooking pottery, and food residues, which are relatively rare in the archaeological record. In contrast, short-term hearths are more ubiquitous worldwide, and these features can provide valuable evidence for ancient subsistence practices, particularly when faunal remains are not preserved. To test the suitability of hearths for this purpose, we conducted multiple chemical analyses: stable carbon and nitrogen isotope analyses of total organic matter (expressed as δ13C and δ15N values) and compound-specific carbon isotope analyses of individual fatty acids (δ13C16:0 and δ13C18:0) from 17 well-preserved hearths present in three occupations dating between ∼13,200-11,500 calibrated years B.P. at the Upward Sun River (USR) site in central Alaska. We combined δ15N and δ13CFA data in a Bayesian mixing model (stable isotope analysis in R) with concentration dependency to each hearth. Our model values were tested against faunal indices, indicating a strong positive relationship between marine proportional contributions to each hearth and salmon abundance. Results of the models show substantial anadromous salmon use in multiple USR components, indicating recurrent use of the site for salmon processing during the terminal Pleistocene. Our results demonstrate that salmonid and freshwater resources were more important for late Pleistocene hunter-gatherers than previously thought and highlight the potential of chemical profiling of hearth organic residues for providing greater geographic and temporal insights into resource use by prepottery societies.

  18. Background radiation in inelastic X-ray scattering and X-ray emission spectroscopy. A study for Johann-type spectrometers

    NASA Astrophysics Data System (ADS)

    Paredes Mellone, O. A.; Bianco, L. M.; Ceppi, S. A.; Goncalves Honnicke, M.; Stutz, G. E.

    2018-06-01

    A study of the background radiation in inelastic X-ray scattering (IXS) and X-ray emission spectroscopy (XES) based on an analytical model is presented. The calculation model considers spurious radiation originating from elastic and inelastic scattering processes along the beam paths of a Johann-type spectrometer. The dependence of the background radiation intensity on the medium of the beam paths (air and helium), the analysed energy and the radius of the Rowland circle was studied. The present study shows that both for IXS and XES experiments the background radiation is dominated by spurious radiation owing to scattering processes along the sample-analyser beam path. For IXS experiments, the spectral distribution of the main component of the background radiation shows a weak linear dependence on the energy in most cases. In the case of XES, a strong non-linear behaviour of the background radiation intensity was predicted for energy analysis very close to the backdiffraction condition, with a rapid increase in intensity as the analyser Bragg angle approaches π/2. The contribution of the analyser-detector beam path is significantly weaker and resembles the spectral distribution of the measured spectra. The present results show that for usual experimental conditions no appreciable structures are introduced by the background radiation into the measured spectra, both in IXS and XES experiments. The usefulness of properly calculating the background profile is demonstrated in a background subtraction procedure for a real experimental situation. The calculation model was able to simulate with high accuracy the energy dependence of the background radiation intensity measured in a particular XES experiment with air beam paths.

  19. Optimal control of anthracnose using mixed strategies.

    PubMed

    Fotsa Mbogne, David Jaures; Thron, Christopher

    2015-11-01

    In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Counterfactual simulations applied to SHRP2 crashes: The effect of driver behavior models on safety benefit estimations of intelligent safety systems.

    PubMed

    Bärgman, Jonas; Boda, Christian-Nils; Dozza, Marco

    2017-05-01

    As the development and deployment of in-vehicle intelligent safety systems (ISS) for crash avoidance and mitigation have rapidly increased in the last decades, the need to evaluate their prospective safety benefits before introduction has never been higher. Counterfactual simulations using relevant mathematical models (for vehicle dynamics, sensors, the environment, ISS algorithms, and models of driver behavior) have been identified as having high potential. However, although most of these models are relatively mature, models of driver behavior in the critical seconds before a crash are still relatively immature. There are also large conceptual differences between different driver models. The objective of this paper is, firstly, to demonstrate the importance of the choice of driver model when counterfactual simulations are used to evaluate two ISS: Forward collision warning (FCW), and autonomous emergency braking (AEB). Secondly, the paper demonstrates how counterfactual simulations can be used to perform sensitivity analyses on parameter settings, both for driver behavior and ISS algorithms. Finally, the paper evaluates the effect of the choice of glance distribution in the driver behavior model on the safety benefit estimation. The paper uses pre-crash kinematics and driver behavior from 34 rear-end crashes from the SHRP2 naturalistic driving study for the demonstrations. The results for FCW show a large difference in the percent of avoided crashes between conceptually different models of driver behavior, while differences were small for conceptually similar models. As expected, the choice of model of driver behavior did not affect AEB benefit much. Based on our results, researchers and others who aim to evaluate ISS with the driver in the loop through counterfactual simulations should be sure to make deliberate and well-grounded choices of driver models: the choice of model matters. Copyright © 2017 Elsevier Ltd. All rights reserved.
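
    The counterfactual logic can be illustrated with a simple kinematic re-simulation: an FCW fires at a time-to-collision (TTC) threshold, a driver model brakes after a reaction delay, and the simulation checks whether the gap closes. The driver model and every parameter below are illustrative assumptions, not the SHRP2 models; sweeping `reaction_s` (or drawing it from a glance distribution) gives the kind of sensitivity analysis described above.

    ```python
    # Counterfactual rear-end re-simulation with an FCW and a delayed-braking driver.
    def crash_avoided(v0, gap0, v_lead, ttc_warn=2.5, reaction_s=1.2,
                      decel=6.0, dt=0.01):
        v, gap, t, t_warned = v0, gap0, 0.0, None
        while gap > 0.0:
            rel_v = v - v_lead
            if rel_v <= 0.0:
                return True                    # no longer closing: conflict resolved
            if t_warned is None and gap / rel_v <= ttc_warn:
                t_warned = t                   # FCW fires at the TTC threshold
            if t_warned is not None and t - t_warned >= reaction_s:
                v = max(v - decel * dt, 0.0)   # driver brakes after a reaction delay
            gap -= (v - v_lead) * dt
            t += dt
        return False                           # gap closed to zero: crash

    print(crash_avoided(v0=25.0, gap0=40.0, v_lead=10.0))
    ```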

  1. Implementation of a Blowing Boundary Condition in the LAURA Code

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Gnoffo, Peter A.

    2008-01-01

    Preliminary steps toward modeling a coupled ablation problem using a finite-volume Navier-Stokes code (LAURA) are presented in this paper. Implementation of a surface boundary condition with mass transfer (blowing) is described followed by verification and validation through comparisons with analytic results and experimental data. Application of the code to a carbon-nosetip ablation problem is demonstrated and the results are compared with previously published data. It is concluded that the code and coupled procedure are suitable to support further ablation analyses and studies.

  2. Space shuttle hypergolic bipropellant RCS engine design study, Bell model 8701

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A research program was conducted to define the level of the current technology base for reaction control system rocket engines suitable for space shuttle applications. The project consisted of engine analyses, design, fabrication, and tests. The specific objectives were: (1) extrapolation of current engine design experience to the design of an RCS engine with the required safety, reliability, performance, and operational capability; (2) demonstration of multiple-reuse capability; and (3) identification of current design and technology deficiencies and critical areas for future effort.

  3. Sex as a Biological Variable: Who, What, When, Why, and How.

    PubMed

    Bale, Tracy L; Epperson, C Neill

    2017-01-01

    The inclusion of sex as a biological variable in research is absolutely essential for improving our understanding of disease mechanisms contributing to risk and resilience. Studies focusing on examining sex differences have demonstrated across many levels of analyses and stages of brain development and maturation that males and females can differ significantly. This review will discuss examples of animal models and clinical studies to provide guidance and reference for the inclusion of sex as an important biological variable relevant to a Neuropsychopharmacology audience.

  4. Cell Motility Dynamics: A Novel Segmentation Algorithm to Quantify Multi-Cellular Bright Field Microscopy Images

    PubMed Central

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600
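
    A minimal sketch of the patch-classification idea follows: small gray-level patches are labeled 'cellular' versus 'background' with an SVM on simple intensity and texture features. A real pipeline such as MultiCellSeg cascades classifiers and refines the mask with graph-cut segmentation; the patches below are simulated.

    ```python
    # SVM classification of image patches from basic intensity/texture features.
    import numpy as np
    from sklearn.svm import SVC

    def patch_features(patch):
        gy, gx = np.gradient(patch.astype(float))
        return [patch.mean(), patch.std(), (gx**2 + gy**2).mean()]

    rng = np.random.default_rng(2)
    background = [rng.normal(0.5, 0.02, (16, 16)) for _ in range(50)]  # smooth
    cellular = [rng.normal(0.5, 0.15, (16, 16)) for _ in range(50)]    # textured

    X = np.array([patch_features(p) for p in background + cellular])
    y = np.array([0] * 50 + [1] * 50)

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))   # training accuracy on the toy patches
    ```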

  5. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    PubMed

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications.

  6. Genomic signal processing: from matrix algebra to genetic networks.

    PubMed

    Alter, Orly

    2007-01-01

    DNA microarrays make it possible, for the first time, to record the complete genomic signals that guide the progression of cellular processes. Future discovery in biology and medicine will come from the mathematical modeling of these data, which hold the key to fundamental understanding of life on the molecular level, as well as answers to questions regarding diagnosis, treatment, and drug development. This chapter reviews the first data-driven models that were created from these genome-scale data, through adaptations and generalizations of mathematical frameworks from matrix algebra that have proven successful in describing the physical world, in such diverse areas as mechanics and perception: the singular value decomposition model, the generalized singular value decomposition model comparative model, and the pseudoinverse projection integrative model. These models provide mathematical descriptions of the genetic networks that generate and sense the measured data, where the mathematical variables and operations represent biological reality. The variables, patterns uncovered in the data, correlate with activities of cellular elements such as regulators or transcription factors that drive the measured signals and cellular states where these elements are active. The operations, such as data reconstruction, rotation, and classification in subspaces of selected patterns, simulate experimental observation of only the cellular programs that these patterns represent. These models are illustrated in the analyses of RNA expression data from yeast and human during their cell cycle programs and DNA-binding data from yeast cell cycle transcription factors and replication initiation proteins. Two alternative pictures of RNA expression oscillations during the cell cycle that emerge from these analyses, which parallel well-known designs of physical oscillators, convey the capacity of the models to elucidate the design principles of cellular systems, as well as guide the design of synthetic ones. In these analyses, the power of the models to predict previously unknown biological principles is demonstrated with a prediction of a novel mechanism of regulation that correlates DNA replication initiation with cell cycle-regulated RNA transcription in yeast. These models may become the foundation of a future in which biological systems are modeled as physical systems are today.
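
    The core of the SVD model can be stated in a few lines: decompose a genes-by-arrays matrix, inspect how much variance the leading patterns capture, and reconstruct the data from selected patterns only, which simulates observing just the cellular programs those patterns represent. The toy matrix below mimics cell-cycle oscillations; it is not the yeast data.

    ```python
    # SVD of a genes-by-arrays matrix; reconstruction from selected "eigengenes".
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 2 * np.pi, 12)              # 12 arrays across a cell cycle
    phase = rng.uniform(0, 2 * np.pi, size=200)    # 200 hypothetical genes
    E = np.cos(t[None, :] + phase[:, None]) + 0.3 * rng.normal(size=(200, 12))

    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    print("variance captured by first two patterns:",
          round(np.sum(s[:2] ** 2) / np.sum(s ** 2), 3))

    E_cycle = (U[:, :2] * s[:2]) @ Vt[:2, :]       # data reconstructed from the two
                                                   # oscillatory patterns only
    ```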

  7. Coevolution of female and male genital components to avoid genital size mismatches in sexually dimorphic spiders.

    PubMed

    Lupše, Nik; Cheng, Ren-Chung; Kuntner, Matjaž

    2016-08-17

    In most animal groups, it is unclear how body size variation relates to genital size differences between the sexes. While most morphological features tend to scale with total somatic size, this does not necessarily hold for genitalia because divergent evolution in somatic size between the sexes would cause genital size mismatches. Theory predicts that the interplay of female-biased sexual size dimorphism (SSD) and sexual genital size dimorphism (SGD) should adhere to the 'positive genital divergence', the 'constant genital divergence', or the 'negative genital divergence' model, but these models remain largely untested. We test their validity in the spider family Nephilidae known for the highest degrees of SSD among terrestrial animals. Through comparative analyses of sex-specific somatic and genital sizes, we first demonstrate that 99 of the 351 pairs of traits are phylogenetically correlated. Through factor analyses we then group these traits for MCMCglmm analyses that test broader correlation patterns, and these reveal significant correlations in 10 out of the 36 pairwise comparisons. Both types of analyses agree that female somatic and internal genital sizes evolve independently. While sizes of non-intromittent male genital parts coevolve with male body size, the size of the intromittent male genital parts is independent of the male somatic size. Instead, male intromittent genital size coevolves with female (external and, in part, internal) genital size. All analyses also agree that SGD and SSD evolve independently. Internal dimensions of female genitalia evolve independently of female body size in nephilid spiders, and similarly, male intromittent genital size evolves independently of the male body size. The size of the male intromittent organ (the embolus) and the sizes of female internal and external genital components thus seem to respond to selection against genital size mismatches. In accord with these interpretations, we reject the validity of the existing theoretical models of genital and somatic size dimorphism in spiders.

  8. Logical-rules and the classification of integral dimensions: individual differences in the processing of arbitrary dimensions

    PubMed Central

    Blunden, Anthea G.; Wang, Tony; Griffiths, David W.; Little, Daniel R.

    2015-01-01

    A variety of converging operations demonstrate key differences between separable dimensions, which can be analyzed independently, and integral dimensions, which are processed in a non-analytic fashion. A recent investigation of response time distributions, applying a set of logical rule-based models, demonstrated that integral dimensions are pooled into a single coactive processing channel, in contrast to separable dimensions, which are processed in multiple, independent processing channels. This paper examines the claim that arbitrary dimensions created by factorially morphing four faces are processed in an integral manner. In two experiments, 16 participants completed a categorization task in which either upright or inverted morph stimuli were classified in a speeded fashion. Analyses focused on contrasting different assumptions about the psychological representation of the stimuli, perceptual and decisional separability, and the processing architecture. We report consistent individual differences: some observers demonstrated coactive processing, whereas others processed the dimensions in a parallel self-terminating manner. PMID:25620941

  9. Use of Classification Agreement Analyses to Evaluate RTI Implementation

    ERIC Educational Resources Information Center

    VanDerHeyden, Amanda

    2010-01-01

    RTI as a framework for decision making has implications for the diagnosis of specific learning disabilities. Any diagnostic tool must meet certain standards to demonstrate that its use leads to predictable decisions with minimal risk. Classification agreement analyses are described as optimal for demonstrating the technical adequacy of RTI…

  10. A first-principles model for estimating the prevalence of annoyance with aircraft noise exposure.

    PubMed

    Fidell, Sanford; Mestre, Vincent; Schomer, Paul; Berry, Bernard; Gjestland, Truls; Vallet, Michel; Reid, Timothy

    2011-08-01

    Numerous relationships between noise exposure and transportation noise-induced annoyance have been inferred by curve-fitting methods. The present paper develops a different approach. It derives a systematic relationship by applying an a priori, first-principles model to the findings of forty-three studies of the annoyance of aviation noise. The rate of change of annoyance with day-night average sound level (DNL) due to aircraft noise exposure was found to closely resemble the rate of change of loudness with sound level. The agreement of model predictions with the findings of recent curve-fitting exercises (cf. Miedema and Vos, 1998) is noteworthy, considering that other analyses have relied on different analytic methods and disparate data sets. Even though annoyance prevalence rates within individual communities consistently grow in proportion to duration-adjusted loudness, variability in annoyance prevalence rates across communities remains great. The present analyses demonstrate that (1) community-specific differences in annoyance prevalence rates can be plausibly attributed to the joint effect of acoustic and non-DNL related factors and (2) a simple model can account for the aggregate influences of non-DNL related factors on annoyance prevalence rates in different communities in terms of a single parameter expressed in DNL units: a "community tolerance level."

  11. Coherence resonance and stochastic resonance in directionally coupled rings

    NASA Astrophysics Data System (ADS)

    Werner, Johannes Peter; Benner, Hartmut; Florio, Brendan James; Stemler, Thomas

    2011-11-01

    In coupled systems, symmetry plays an important role for the collective dynamics. We investigate the dynamical response to noise, with and without weak periodic modulation, for two classes of ring systems. Each ring system consists of unidirectionally coupled bistable elements, but in one class the number of elements is even while in the other class the number is odd. Consequently, at a certain coupling strength the rings without forcing show either ordering (similar to anti-ferromagnetic chains) or auto-oscillations. Analysing the bifurcations and fixed points of the two ring classes enables us to explain the measured dynamical response to noise and weak modulation. Moreover, by analysing a simplified model, we demonstrate that the response is universal for systems having a directional component in their stochastic dynamics in phase space around the origin.
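
    A minimal simulation of such a ring follows: N bistable double-well elements with unidirectional inhibitory coupling (an assumed form chosen for illustration, so that even rings order and odd rings are frustrated), driven by noise and a weak periodic signal, and integrated with the Euler-Maruyama scheme.

    ```python
    # Euler-Maruyama simulation of a unidirectionally coupled ring of bistable elements.
    import numpy as np

    def simulate_ring(N, coupling=1.0, noise=0.3, amp=0.05, omega=0.1,
                      dt=0.01, steps=100_000, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.normal(scale=0.1, size=N)
        out = np.empty(steps)
        for k in range(steps):
            drive = amp * np.sin(omega * k * dt)       # weak periodic modulation
            # element i sees its predecessor i-1 with inhibitory sign
            dx = x - x**3 - coupling * np.roll(x, 1) + drive
            x = x + dx * dt + noise * np.sqrt(dt) * rng.normal(size=N)
            out[k] = x[0]
        return out

    trace_even = simulate_ring(N=4)   # ordering expected
    trace_odd = simulate_ring(N=3)    # frustration / auto-oscillation expected
    ```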

  12. A study of the comparative effects of various means of motion cueing during a simulated compensatory tracking task

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.

    1980-01-01

    NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.

  13. Creation of a Rapid High-Fidelity Aerodynamics Module for a Multidisciplinary Design Environment

    NASA Technical Reports Server (NTRS)

    Srinivasan, Muktha; Whittecar, William; Edwards, Stephen; Mavris, Dimitri N.

    2012-01-01

    In the traditional aerospace vehicle design process, each successive design phase is accompanied by an increment in the modeling fidelity of the disciplinary analyses being performed. This trend follows a corresponding shrinking of the design space as more and more design decisions are locked in. The correlated increase in knowledge about the design and decrease in design freedom occurs partly because increases in modeling fidelity are usually accompanied by significant increases in the computational expense of performing the analyses. When running high fidelity analyses, it is not usually feasible to explore a large number of variations, and so design space exploration is reserved for conceptual design, and higher fidelity analyses are run only once a specific point design has been selected to carry forward. The designs produced by this traditional process have been recognized as being limited by the uncertainty that is present early on due to the use of lower fidelity analyses. For example, uncertainty in aerodynamics predictions produces uncertainty in trajectory optimization, which can impact overall vehicle sizing. This effect can become more significant when trajectories are being shaped by active constraints. For example, if an optimal trajectory is running up against a normal load factor constraint, inaccuracies in the aerodynamic coefficient predictions can cause a feasible trajectory to be considered infeasible, or vice versa. For this reason, a trade must always be performed between the desired fidelity and the resources available. Apart from this trade between fidelity and computational expense, it is very desirable to use higher fidelity analyses earlier in the design process. A large body of work has been performed to this end, led by efforts in the area of surrogate modeling. In surrogate modeling, an up-front investment is made by running a high fidelity code over a Design of Experiments (DOE); once completed, the DOE data is used to create a surrogate model, which captures the relationships between input variables and responses into regression equations. Depending on the dimensionality of the problem and the fidelity of the code for which a surrogate model is being created, the initial DOE can itself be computationally prohibitive to run. Cokriging, a modeling approach from the field of geostatistics, provides a desirable compromise between computational expense and fidelity. To do this, cokriging leverages a large body of data generated by a low fidelity analysis, combines it with a smaller set of data from a higher fidelity analysis, and creates a kriging surrogate model with prediction fidelity approaching that of the higher fidelity analysis. When integrated into a multidisciplinary environment, a disciplinary analysis module employing cokriging can raise the analysis fidelity without drastically impacting the expense of design iterations. This is demonstrated through the creation of an aerodynamics analysis module in NASA's OpenMDAO framework. Aerodynamic analyses including Missile DATCOM, APAS, and USM3D are leveraged to create high fidelity aerodynamics decks for parametric vehicle geometries, which are created in NASA's Vehicle Sketch Pad (VSP). Several trade studies are performed to examine the achieved level of model fidelity, and the overall impact on vehicle design is quantified.
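
    The cokriging idea can be approximated with a two-fidelity 'difference' surrogate in the spirit of Kennedy and O'Hagan: krige the abundant low-fidelity response, then krige the low-to-high discrepancy observed at the few expensive samples. The sketch below uses generic Gaussian-process regression and toy functions; it is not the module's actual implementation.

    ```python
    # Two-fidelity surrogate: GP on cheap data plus GP on the sparse discrepancy.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def lofi(x):   # plentiful low-fidelity estimate (e.g., a DATCOM-like code)
        return np.sin(2 * x) + 0.5 * x

    def hifi(x):   # scarce high-fidelity estimate (e.g., a CFD-like code)
        return np.sin(2 * x) + 0.5 * x + 0.3 * np.cos(5 * x)

    x_lo = np.linspace(0, 3, 40)[:, None]   # many cheap runs
    x_hi = np.linspace(0, 3, 6)[:, None]    # few expensive runs

    gp_lo = GaussianProcessRegressor(kernel=RBF(0.5)).fit(x_lo, lofi(x_lo.ravel()))
    delta = hifi(x_hi.ravel()) - gp_lo.predict(x_hi)
    gp_d = GaussianProcessRegressor(kernel=RBF(1.0)).fit(x_hi, delta)

    x_new = np.array([[1.7]])
    fused = gp_lo.predict(x_new) + gp_d.predict(x_new)  # multi-fidelity prediction
    print(float(fused), float(hifi(x_new.ravel())))
    ```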

  14. Developing the Communicative Participation Item Bank: Rasch Analysis Results From a Spasmodic Dysphonia Sample

    PubMed Central

    Baylor, Carolyn R.; Yorkston, Kathryn M.; Eadie, Tanya L.; Miller, Robert M.; Amtmann, Dagmar

    2011-01-01

    Purpose: The purpose of this study was to conduct the initial psychometric analyses of the Communicative Participation Item Bank—a new self-report instrument designed to measure the extent to which communication disorders interfere with communicative participation. This item bank is intended for community-dwelling adults across a range of communication disorders. Method: A set of 141 candidate items was administered to 208 adults with spasmodic dysphonia. Participants rated the extent to which their condition interfered with participation in various speaking communication situations. Questionnaires were administered online or in a paper version per participant preference. Participants also completed the Voice Handicap Index (B. H. Jacobson et al., 1997) and a demographic questionnaire. Rasch analyses were conducted using Winsteps software (J. M. Linacre, 1991). Results: The results show that items functioned better when the 5-category response format was recoded to a 4-category format. After removing 8 items that did not fit the Rasch model, the remaining 133 items demonstrated strong evidence of sufficient unidimensionality, with the model accounting for 89.3% of variance. Item location values ranged from −2.73 to 2.20 logits. Conclusions: Preliminary Rasch analyses of the Communicative Participation Item Bank show strong psychometric properties. Further testing in populations with other communication disorders is needed. PMID:19717652

  15. Comparative analysis of methods for determining bite force in the spiny dogfish Squalus acanthias.

    PubMed

    Huber, Daniel Robert; Motta, Philip Jay

    2004-01-01

    Many studies have identified relationships between the forces generated by the cranial musculature during feeding and cranial design. Particularly important to understanding the diversity of cranial form amongst vertebrates is knowledge of the generated magnitudes of bite force because of its use as a measure of ecological performance. In order to determine an accurate morphological proxy for bite force in elasmobranchs, theoretical force generation by the quadratomandibularis muscle of the spiny dogfish Squalus acanthias was modeled using a variety of morphological techniques, and lever-ratio analyses were used to determine resultant bite forces. These measures were compared to in vivo bite force measurements obtained with a pressure transducer during tetanic stimulation experiments of the quadratomandibularis. Although no differences were found between the theoretical and in vivo bite forces measured, modeling analyses indicate that the quadratomandibularis muscle should be divided into its constituent divisions and digital images of the cross-sections of these divisions should be used to estimate cross-sectional area when calculating theoretical force production. From all analyses the maximum bite force measured was 19.57 N. This relatively low magnitude of bite force is discussed with respect to the ecomorphology of the feeding mechanism of S. acanthias to demonstrate the interdependence of morphology, ecology, and behavior in organismal design. Copyright 2004 Wiley-Liss, Inc.
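
    The theoretical calculation reduces to muscle cross-sectional area times an assumed specific tension, scaled through the jaw lever ratio. The numbers below are invented for illustration (the 28.9 N/cm^2 specific tension is a commonly assumed value for vertebrate muscle), not the study's measurements.

    ```python
    # Back-of-the-envelope theoretical bite force from CSA and lever arms.
    csa_cm2 = 2.5            # summed cross-sectional area of muscle divisions
    specific_tension = 28.9  # N per cm^2, a commonly assumed value
    in_lever = 1.2           # cm, muscle moment arm about the jaw joint
    out_lever = 2.4          # cm, jaw joint to bite point

    muscle_force = csa_cm2 * specific_tension
    bite_force = muscle_force * (in_lever / out_lever)
    print(f"theoretical bite force = {bite_force:.1f} N")  # cf. the 19.57 N maximum
    ```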

  16. Quantitative Secretome Analysis of Activated Jurkat Cells Using Click Chemistry-Based Enrichment of Secreted Glycoproteins.

    PubMed

    Witzke, Kathrin E; Rosowski, Kristin; Müller, Christian; Ahrens, Maike; Eisenacher, Martin; Megger, Dominik A; Knobloch, Jürgen; Koch, Andrea; Bracht, Thilo; Sitek, Barbara

    2017-01-06

    Quantitative secretome analyses are a high-performance tool for the discovery of physiological and pathophysiological changes in cellular processes. However, serum supplements in cell culture media limit secretome analyses, but serum depletion often leads to cell starvation and consequently biased results. To overcome these limiting factors, we investigated a model of T cell activation (Jurkat cells) and performed an approach for the selective enrichment of secreted proteins from conditioned medium utilizing metabolic marking of newly synthesized glycoproteins. Marked glycoproteins were labeled via bioorthogonal click chemistry and isolated by affinity purification. We assessed two labeling compounds conjugated with either biotin or desthiobiotin and the respective secretome fractions. 356 proteins were quantified using the biotin probe and 463 using desthiobiotin. 59 proteins were found differentially abundant (adjusted p-value ≤0.05, absolute fold change ≥1.5) between inactive and activated T cells using the biotin method and 86 using the desthiobiotin approach, with 31 mutual proteins cross-verified by independent experiments. Moreover, we analyzed the cellular proteome of the same model to demonstrate the benefit of secretome analyses and provide comprehensive data sets of both. 336 proteins (61.3%) were quantified exclusively in the secretome. Data are available via ProteomeXchange with identifier PXD004280.
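
    A minimal version of the differential-abundance screen (two-group tests with Benjamini-Hochberg adjustment and the stated cutoffs of adjusted p <= 0.05 and |fold change| >= 1.5) might look like the sketch below, run here on simulated log2 intensities.

    ```python
    # Per-protein t-tests, BH-adjusted p-values, and a fold-change filter.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(4)
    inactive = rng.normal(20, 1, size=(300, 3))     # 300 proteins x 3 replicates
    activated = inactive + rng.normal(0, 0.3, size=(300, 3))
    activated[:40] += 1.0                           # 40 truly changed proteins

    t, p = stats.ttest_ind(activated, inactive, axis=1)
    reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
    log2_fc = activated.mean(axis=1) - inactive.mean(axis=1)

    hits = reject & (np.abs(log2_fc) >= np.log2(1.5))
    print(f"{hits.sum()} proteins pass adjusted p <= 0.05 and |FC| >= 1.5")
    ```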

  17. Analysis of longitudinal data from animals where some data are missing in SPSS

    PubMed Central

    Duricki, DA; Soleman, S; Moon, LDF

    2017-01-01

    Testing of therapies for disease or injury often involves analysis of longitudinal data from animals. Modern analytical methods have advantages over conventional methods (particularly where some data are missing) yet are not used widely by pre-clinical researchers. We provide here an easy-to-use protocol for analysing longitudinal data from animals and present a click-by-click guide for performing suitable analyses using the statistical package SPSS. We guide readers through analysis of a real-life data set obtained when testing a therapy for brain injury (stroke) in elderly rats. We show that repeated measures analysis of covariance failed to detect a treatment effect when a few data points were missing (due to animal drop-out), whereas analysis using an alternative method detected a beneficial effect of treatment; specifically, we demonstrate the superiority of linear models (with various covariance structures) analysed using Restricted Maximum Likelihood estimation (to include all available data). This protocol takes two hours to follow. PMID:27196723
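
    The protocol's central point translates directly outside SPSS: a linear mixed model estimated by restricted maximum likelihood keeps animals with missing sessions in the analysis, whereas repeated-measures approaches drop them. The sketch below uses statsmodels' MixedLM on toy data; the column names are illustrative.

    ```python
    # REML linear mixed model that retains an animal with a missing time point.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "animal":  [1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4],  # animal 2 misses one session
        "week":    [1, 2, 3, 1, 3, 1, 2, 3, 1, 2, 3],
        "treated": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
        "score":   [10, 11, 12, 9, 11, 10, 14, 17, 11, 15, 18],
    })

    model = smf.mixedlm("score ~ week * treated", data=df, groups=df["animal"])
    fit = model.fit(reml=True)    # REML uses all available observations
    print(fit.summary())
    ```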

  18. A new tool called DISSECT for analysing large genomic data sets using a Big Data approach

    PubMed Central

    Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert

    2015-01-01

    Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010

  19. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.
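
    At the heart of such benchmarks is the energy-release-rate computation the VCCT performs; for a 2D uniform mesh the mode-I component follows from the crack-tip nodal force and the relative opening displacement one element behind the tip. The sketch below states that relation with generic symbols and invented numbers; it is not any particular code's API.

    ```python
    # Mode-I energy release rate per the VCCT for a uniform 2D mesh:
    # G_I = F * dv / (2 * da * b), with element length da and width b.
    def vcct_mode_I(F_tip, dv_behind, da, width):
        return F_tip * dv_behind / (2.0 * da * width)

    G_I = vcct_mode_I(F_tip=120.0, dv_behind=2.0e-4, da=0.5e-3, width=25.0e-3)
    print(f"G_I = {G_I:.0f} J/m^2")   # compared against G_Ic to decide propagation
    ```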

  20. An analytical design approach for self-powered active lateral secondary suspensions for railway vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Li, Hong; Zhang, Jiye; Mei, TX

    2015-10-01

    In this paper, an analytical design approach for the development of self-powered active suspensions is investigated and applied to optimise the control system design for an active lateral secondary suspension for railway vehicles. The conditions for energy balance are analysed and the relationship between the ride quality improvement and energy consumption is discussed in detail. Modal skyhook control is applied to analyse the energy consumption of this suspension by separating its dynamics into the lateral and yaw modes, and, based on a simplified model, the average power consumption of the actuators is computed in the frequency domain by using the power spectral density of the lateral alignment of track irregularities. The impact of control gains and the actuators' key parameters on the performance for both vibration suppression and energy recovery/storage is then analysed. Computer simulation is used to verify the obtained energy balance condition and to demonstrate that improved ride comfort is achieved by this self-powered active suspension without any external power supply.

  1. A Rasch Analysis of Assessments of Morning and Evening Fatigue in Oncology Patients Using the Lee Fatigue Scale.

    PubMed

    Lerdal, Anners; Kottorp, Anders; Gay, Caryl; Aouizerat, Bradley E; Lee, Kathryn A; Miaskowski, Christine

    2016-06-01

    To accurately investigate diurnal variations in fatigue, a measure needs to be psychometrically sound and demonstrate stable item functioning in relation to time of day. Rasch analysis is a modern psychometric approach that can be used to evaluate these characteristics. The aim of this study was to evaluate, using Rasch analysis, the psychometric properties of the Lee Fatigue Scale (LFS) in a sample of oncology patients. The sample comprised 587 patients (mean age 57.3 ± 11.9 years, 80% women) undergoing chemotherapy for breast, gastrointestinal, gynecological, or lung cancer. Patients completed the 13-item LFS within 30 minutes of awakening (i.e., morning fatigue) and before going to bed (i.e., evening fatigue). Rasch analysis was used to assess validity and reliability. In initial analyses of differential item functioning, eight of the 13 items functioned differently depending on whether the LFS was completed in the morning or in the evening. Subsequent analyses were conducted separately for the morning and evening fatigue assessments. Nine of the morning fatigue items and 10 of the evening fatigue items demonstrated acceptable goodness-of-fit to the Rasch model. Principal components analyses indicated that both morning and evening assessments demonstrated unidimensionality. Person-separation indices indicated that both morning and evening fatigue scales were able to distinguish four distinct strata of fatigue severity. Excluding four items from the morning fatigue scale and three items from the evening fatigue scale improved the psychometric properties of the LFS for assessing diurnal variations in fatigue severity in oncology patients. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
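
    For readers unfamiliar with the model, the sketch below shows the dichotomous Rasch probability and a crude joint maximum-likelihood fit on simulated responses. The LFS items are rated polytomously in practice, so this binary version only conveys the idea.

    ```python
    # Dichotomous Rasch model: P(endorse) = logistic(theta_person - b_item).
    import numpy as np

    def rasch_p(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

    rng = np.random.default_rng(5)
    theta_true = rng.normal(size=587)          # persons (the study's sample size)
    b_true = np.linspace(-2, 2, 13)            # 13 LFS items
    X = (rng.uniform(size=(587, 13)) < rasch_p(theta_true, b_true)).astype(float)

    theta, b = np.zeros(587), np.zeros(13)
    for _ in range(50):                        # crude alternating Newton updates
        P = rasch_p(theta, b)
        theta = np.clip(theta + (X - P).sum(1) / (P * (1 - P)).sum(1), -6, 6)
        P = rasch_p(theta, b)
        b -= (X - P).sum(0) / (P * (1 - P)).sum(0)
        b -= b.mean()                          # fix the scale origin
    print(np.corrcoef(b, b_true)[0, 1])        # recovered difficulties track truth
    ```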

  2. Investigating strength and frequency effects in recognition memory using type-2 signal detection theory.

    PubMed

    Higham, Philip A; Perfect, Timothy J; Bruno, Davide

    2009-01-01

    Criterion- versus distribution-shift accounts of frequency and strength effects in recognition memory were investigated with Type-2 signal detection receiver operating characteristic (ROC) analysis, which provides a measure of metacognitive monitoring. Experiment 1 demonstrated a frequency-based mirror effect, with a higher hit rate and lower false alarm rate, for low frequency words compared with high frequency words. In Experiment 2, the authors manipulated item strength with repetition, which showed an increased hit rate but no effect on the false alarm rate. Whereas Type-1 indices were ambiguous as to whether these effects were based on a criterion- or distribution-shift model, the two models predict opposite effects on Type-2 distractor monitoring under some assumptions. Hence, Type-2 ROC analysis discriminated between potential models of recognition that could not be discriminated using Type-1 indices alone. In Experiment 3, the authors manipulated Type-1 response bias by varying the number of old versus new response categories to confirm the assumptions made in Experiments 1 and 2. The authors conclude that Type-2 analyses are a useful tool for investigating recognition memory when used in conjunction with more traditional Type-1 analyses.
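
    A Type-2 ROC can be traced by sweeping confidence criteria and pairing the rate of high confidence given a correct recognition decision (a type-2 hit) with high confidence given an error (a type-2 false alarm). The sketch below does this on simulated ratings; the distributions are illustrative.

    ```python
    # Type-2 ROC points from simulated accuracy and confidence ratings.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 400
    correct = rng.uniform(size=n) < 0.75                 # type-1 decision accuracy
    conf = np.where(correct,
                    rng.normal(4.0, 1.0, n),             # higher confidence when correct
                    rng.normal(3.0, 1.0, n)).clip(1, 6)  # 1-6 rating scale

    for c in np.arange(1.5, 6.0, 0.5):                   # sweep the confidence criterion
        t2_hit = (conf[correct] >= c).mean()
        t2_fa = (conf[~correct] >= c).mean()
        print(f"criterion {c:.1f}: type-2 hit {t2_hit:.2f}, fa {t2_fa:.2f}")
    ```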

  3. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    PubMed

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    The main goals of this research were to build a predictor for estimating the values of a turnaround-time (TAT) indicator and to use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. A multiple linear regression predictor of TAT and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). The variables contributing to the regression model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
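
    The reported model can be transcribed directly from the coefficients above; the intercept is not reported, so it is set to zero here purely for illustration.

    ```python
    # TAT prediction from the reported regression coefficients (intercept assumed 0).
    def predict_tat(ce_rt, stock_rt, priority, service_time, intercept=0.0):
        return (intercept
                + 0.415 * ce_rt        # clinical engineering response time
                + 0.734 * stock_rt     # stock service response time
                + 0.21 * priority      # priority level
                + 0.06 * service_time) # service time

    print(predict_tat(ce_rt=4.0, stock_rt=12.0, priority=2, service_time=3.0))
    ```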

  4. Hourly indoor radon measurements in a research house.

    PubMed

    Sesana, Lucia; Begnini, Stefania

    2004-01-01

    This paper reports and discusses the behaviour of radon concentration with time in an uninhabited dwelling. The relationship between variations in radon concentrations and indoor-outdoor temperatures and wind intensity is also discussed. Radon concentration was measured hourly in a house located at a height of 800 m in the Lombard Prealps, at the top of the Valassina valley. The wind velocity and indoor-outdoor temperatures were measured by means of a meteorological station located on the terrace of the house. The data were analysed using the LBL model for indoor-outdoor air exchange and the models for the indoor accumulation of radon due to exhalation from building materials and pressure-driven infiltrations located underground. The roles of wind and indoor-outdoor temperatures were analysed. The agreement of measurements with modelling clearly demonstrates the importance of the different sources of indoor radon. As the investigation was conducted in an uninhabited house, the measurements were not affected by the behaviour of people, e.g. the opening and closing of windows. Measurements of the outdoor atmospheric concentrations of 222Rn provide an index of atmospheric stability, the formation of thermal inversions and convective turbulence.
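
    The analysis rests on an indoor mass balance in which ventilation, driven by wind and the indoor-outdoor temperature difference (as in the LBL infiltration model), dilutes radon supplied by exhalation and infiltration. The sketch below integrates such a balance with invented coefficients and forcing; it is not the paper's fitted model.

    ```python
    # Hourly indoor radon mass balance with an LBL-style air-exchange rate.
    import numpy as np

    def simulate_radon(hours, source, c_out, wind, dT,
                       f_stack=0.1, f_wind=0.05, c0=20.0):
        c, c_prev = np.empty(hours), c0
        for t in range(hours):
            # LBL-style combination of stack and wind driving terms (illustrative)
            lam_v = np.sqrt(f_stack**2 * abs(dT[t]) + f_wind**2 * wind[t]**2)
            dc = source + lam_v * (c_out[t] - c_prev)     # Bq/m^3 per hour
            c_prev = max(c_prev + dc, 0.0)                # 1-hour explicit step
            c[t] = c_prev
        return c

    hrs = 48
    rng = np.random.default_rng(7)
    c = simulate_radon(hrs, source=5.0,
                       c_out=np.full(hrs, 10.0),
                       wind=np.abs(rng.normal(2, 1, hrs)),
                       dT=10 * np.sin(np.linspace(0, 4 * np.pi, hrs)))
    ```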

  5. `spup' - An R Package for Analysis of Spatial Uncertainty Propagation and Application to Trace Gas Emission Simulations

    NASA Astrophysics Data System (ADS)

    Sawicka, K.; Breuer, L.; Houska, T.; Santabarbara Ruiz, I.; Heuvelink, G. B. M.

    2016-12-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo techniques, as well as several uncertainty visualization functions. Here we demonstrate that the 'spup' package is an effective and easy-to-use tool that can be applied even in a very complex case study, and that it can be used in multi-disciplinary research and model-based decision support. As an example, we use the ecological LandscapeDNDC model to analyse the propagation of uncertainties associated with the spatial variability of model driving forces such as rainfall, nitrogen deposition and fertilizer inputs. The uncertainty propagation is analysed for the prediction of N2O and CO2 emissions for a German low-mountain, agriculturally developed catchment. The study tests the effect of spatial correlations on spatially aggregated model outputs, and could serve as guidance for developing best management practices and model improvement strategies.
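
    Because 'spup' is an R package, the sketch below restates the workflow it supports in Python: draw Monte Carlo realizations of a spatially autocorrelated input, push each through a model, and summarize the spread of an aggregated output. The exponential covariance and the toy 'emission model' are illustrative stand-ins for the real inputs and for LandscapeDNDC.

    ```python
    # Monte Carlo propagation of a spatially correlated input uncertainty.
    import numpy as np

    rng = np.random.default_rng(8)
    x = np.arange(50)                                  # 1-D transect of cells
    dist = np.abs(x[:, None] - x[None, :])
    cov = np.exp(-dist / 10.0)                         # exponential spatial correlation
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(50))

    def emission_model(rainfall):                      # toy stand-in for the model
        return np.sum(0.3 * rainfall ** 1.2)

    totals = []
    for _ in range(1000):
        rainfall = 800 + 100 * (L @ rng.normal(size=50))   # correlated realization
        totals.append(emission_model(np.clip(rainfall, 0, None)))

    print(np.percentile(totals, [2.5, 50, 97.5]))      # spread of aggregated output
    ```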

  6. Ensemble Simulations with Coupled Atmospheric Dynamic and Dispersion Models: Illustrating Uncertainties in Dosage Simulations.

    NASA Astrophysics Data System (ADS)

    Warner, Thomas T.; Sheu, Rong-Shyang; Bowers, James F.; Sykes, R. Ian; Dodd, Gregory C.; Henn, Douglas S.

    2002-05-01

    Ensemble simulations made using a coupled atmospheric dynamic model and a probabilistic Lagrangian puff dispersion model were employed in a forensic analysis of the transport and dispersion of a toxic gas that may have been released near Al Muthanna, Iraq, during the Gulf War. The ensemble study had two objectives, the first of which was to determine the sensitivity of the calculated dosage fields to the choices that must be made about the configuration of the atmospheric dynamic model. In this test, various choices were used for model physics representations and for the large-scale analyses that were used to construct the model initial and boundary conditions. The second study objective was to examine the dispersion model's ability to use ensemble inputs to predict dosage probability distributions. Here, the dispersion model was used with the ensemble mean fields from the individual atmospheric dynamic model runs, including the variability in the individual wind fields, to generate dosage probabilities. These are compared with the explicit dosage probabilities derived from the individual runs of the coupled modeling system. The results demonstrate that the specific choices made about the dynamic-model configuration and the large-scale analyses can have a large impact on the simulated dosages. For example, the area near the source that is exposed to a selected dosage threshold varies by up to a factor of 4 among members of the ensemble. The agreement between the explicit and ensemble dosage probabilities is relatively good for both low and high dosage levels. Although only one ensemble was considered in this study, the encouraging results suggest that a probabilistic dispersion model may be of value in quantifying the effects of uncertainties in a dynamic-model ensemble on dispersion model predictions of atmospheric transport and dispersion.
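
    Turning an ensemble of dosage fields into probabilities is straightforward: per grid cell, count the fraction of members exceeding a threshold. The sketch below does this for toy fields and also shows how the exposed area varies across members, cf. the factor-of-4 spread reported above.

    ```python
    # Exceedance probabilities and per-member exposed areas from an ensemble.
    import numpy as np

    rng = np.random.default_rng(9)
    members, ny, nx = 12, 40, 40
    dosage = rng.lognormal(mean=0.0, sigma=1.0, size=(members, ny, nx))  # toy fields

    threshold = 2.0
    p_exceed = (dosage > threshold).mean(axis=0)   # per-cell exceedance probability

    areas = (dosage > threshold).sum(axis=(1, 2))  # exposed cell count per member
    print(areas.min(), areas.max())
    ```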

  7. Identifying traits for genotypic adaptation using crop models.

    PubMed

    Ramirez-Villegas, Julian; Watson, James; Challinor, Andrew J

    2015-06-01

    Genotypic adaptation involves the incorporation of novel traits in crop varieties so as to enhance food productivity and stability and is expected to be one of the most important adaptation strategies to future climate change. Simulation modelling can provide the basis for evaluating the biophysical potential of crop traits for genotypic adaptation. This review focuses on the use of models for assessing the potential benefits of genotypic adaptation as a response strategy to projected climate change impacts. Some key crop responses to the environment, as well as the role of models and model ensembles for assessing impacts and adaptation, are first reviewed. Next, the review describes how crop-climate models can help focus the development of future-adapted crop germplasm in breeding programmes. While recently published modelling studies have demonstrated the potential of genotypic adaptation strategies and ideotype design, it is argued that, for model-based studies of genotypic adaptation to be used in crop breeding, it is critical that modelled traits are better grounded in genetic and physiological knowledge. To this aim, two main goals need to be pursued in future studies: (i) a better understanding of plant processes that limit productivity under future climate change; and (ii) a coupling between genetic and crop growth models, perhaps at the expense of the number of traits analysed. Importantly, the latter may imply additional complexity (and likely uncertainty) in crop modelling studies. Hence, appropriately constraining processes and parameters in models and a shift from simply quantifying uncertainty to actually quantifying robustness towards modelling choices are two key aspects that need to be included in future crop model-based analyses of genotypic adaptation. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  8. Integrating systems biology models and biomedical ontologies

    PubMed Central

    2011-01-01

    Background Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. Results We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. Conclusions We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms. PMID:21835028
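    The conversion described above starts from the MIRIAM-style RDF annotations embedded in SBML files. A minimal stand-in for that first step, using only the Python standard library (the file name is hypothetical, and this is not the SBML Harvester code itself):

        import xml.etree.ElementTree as ET

        # Namespaces used by SBML and MIRIAM-style RDF annotations.
        NS = {
            "sbml": "http://www.sbml.org/sbml/level2/version4",
            "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
            "bqbiol": "http://biomodels.net/biology-qualifiers/",
        }

        def species_ontology_links(sbml_path):
            """Map each species id to the ontology URIs it is annotated with
            (bqbiol:is qualifiers) -- the raw material a model-to-OWL
            converter would consume."""
            tree = ET.parse(sbml_path)
            links = {}
            for sp in tree.iter("{%s}species" % NS["sbml"]):
                uris = [
                    li.get("{%s}resource" % NS["rdf"])
                    for is_el in sp.iter("{%s}is" % NS["bqbiol"])
                    for li in is_el.iter("{%s}li" % NS["rdf"])
                ]
                links[sp.get("id")] = uris
            return links

        # Usage (hypothetical file exported from the BioModels Database):
        # print(species_ontology_links("BIOMD0000000001.xml"))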

  9. An investigation of the mentalization-based model of borderline pathology in adolescents.

    PubMed

    Quek, Jeremy; Bennett, Clair; Melvin, Glenn A; Saeedi, Naysun; Gordon, Michael S; Newman, Louise K

    2018-07-01

    According to mentalization-based theory, transgenerational transmission of mentalization from caregiver to offspring is implicated in the pathogenesis of borderline personality disorder (BPD). Recent research has demonstrated an association between hypermentalizing (excessive, inaccurate mental state reasoning) and BPD, indicating the particular relevance of this form of mentalizing dysfunction to the transgenerational mentalization-based model. As yet, no study has empirically assessed a transgenerational mentalization-based model of BPD. The current study sought firstly to test the mentalization-based model, and additionally, to determine the form of mentalizing dysfunction in caregivers (e.g., hypo- or hypermentalizing) most relevant to a hypermentalizing model of BPD. Participants were a mixed sample of adolescents with BPD and a sample of non-clinical adolescents, and their respective primary caregivers (n = 102; 51 dyads). Using an ecologically valid measure of mentalization, mediational analyses were conducted to examine the relationships between caregiver mentalizing, adolescent mentalizing, and adolescent borderline features. Findings demonstrated that adolescent mentalization mediated the effect of caregiver mentalization on adolescent borderline personality pathology. Furthermore, results indicated that hypomentalizing in caregivers was related to adolescent borderline personality pathology via an effect on adolescent hypermentalizing. Results provide empirical support for the mentalization-based model of BPD, and suggest the indirect influence of caregiver mentalization on adolescent borderline psychopathology. Results further indicate the relevance of caregiver hypomentalizing to a hypermentalizing model of BPD. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Learning to apply models of materials while explaining their properties

    NASA Astrophysics Data System (ADS)

    Karpin, Tiia; Juuti, Kalle; Lavonen, Jari

    2014-09-01

    Background: Applying structural models is important to chemistry education at the upper secondary level, but it is considered one of the most difficult topics to learn. Purpose: This study analyses to what extent, in the designed lessons, students learned to apply structural models in explaining the properties and behaviours of various materials. Sample: The experimental group comprised 27 Finnish upper secondary school students, and the control group included 18 students from the same school. Design and methods: In a quasi-experimental setting, students were guided through predict-observe-explain activities in four practical work situations. The intention was that the structural models would encourage students to learn how to identify and apply appropriate models when predicting and explaining situations. The lessons, organised over a one-week period, began with a teacher's demonstration and continued with student experiments in which they described the properties and behaviours of six household products representing three different materials. Results: Most students in the experimental group learned to apply the models correctly, as demonstrated by post-test scores that were significantly higher than pre-test scores. The control group showed no significant difference between pre- and post-test scores. Conclusions: The findings indicate that the intervention, in which students engaged in predict-observe-explain activities while several materials and models were confronted at the same time, had a positive effect on learning outcomes.

  11. ROCKETSHIP: a flexible and modular software tool for the planning, processing and analysis of dynamic MRI studies.

    PubMed

    Barnes, Samuel R; Ng, Thomas S C; Santa-Maria, Naomi; Montagne, Axel; Zlokovic, Berislav V; Jacobs, Russell E

    2015-06-16

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a promising technique to characterize pathology and evaluate treatment response. However, analysis of DCE-MRI data is complex and benefits from concurrent analysis of multiple kinetic models and parameters. Few software tools are currently available that specifically focus on DCE-MRI analysis with multiple kinetic models. Here, we developed ROCKETSHIP, an open-source, flexible and modular software tool for DCE-MRI analysis. ROCKETSHIP incorporates analyses with multiple kinetic models, including data-driven nested model analysis. ROCKETSHIP was implemented using the MATLAB programming language. Robustness of the software in providing reliable fits with multiple kinetic models is demonstrated using simulated data. Simulations also demonstrate the utility of the data-driven nested model analysis. Applicability of ROCKETSHIP to both preclinical and clinical studies is shown using DCE-MRI studies of the human brain and a murine tumor model. A DCE-MRI software suite was thus implemented and tested using simulations, and its applicability to both preclinical and clinical datasets was shown. ROCKETSHIP was designed to be easily accessible for the beginner, but flexible enough for changes or additions to be made by the advanced user as well. The availability of a flexible analysis tool will aid future studies using DCE-MRI. A public release of ROCKETSHIP is available at https://github.com/petmri/ROCKETSHIP .
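    ROCKETSHIP itself is MATLAB; as an illustration of the kind of kinetic-model fitting such tools perform, here is a hedged Python sketch fitting the standard Tofts model to a synthetic tissue curve (the arterial input function and all parameter values are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0, 5, 120)                      # time (min)
        cp = 5.0 * t * np.exp(-t / 0.8)                 # hypothetical AIF (mM)

        def tofts(t, ktrans, ve):
            """Standard Tofts model: Ct = Ktrans * [Cp convolved with
            exp(-Ktrans/ve * t)], evaluated by discrete convolution."""
            dt = t[1] - t[0]
            irf = np.exp(-(ktrans / ve) * t)
            return ktrans * np.convolve(cp, irf)[: len(t)] * dt

        # Synthetic "tissue" curve with noise, then a model fit.
        rng = np.random.default_rng(1)
        ct = tofts(t, 0.25, 0.3) + rng.normal(0, 0.01, t.size)
        (ktrans_hat, ve_hat), _ = curve_fit(tofts, t, ct, p0=(0.1, 0.2),
                                            bounds=([1e-4, 1e-3], [2.0, 1.0]))
        print(f"Ktrans={ktrans_hat:.3f}/min, ve={ve_hat:.3f}")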

  12. Bacterial Phenotype Variants in Group B Streptococcal Toxic Shock Syndrome

    PubMed Central

    Johansson, Linda; Dahesh, Samira; Van Sorge, Nina M.; Darenberg, Jessica; Norgren, Mari; Sjölin, Jan; Nizet, Victor; Norrby-Teglund, Anna

    2009-01-01

    We conducted genetic and functional analyses of isolates from a patient with group B streptococcal (GBS) necrotizing fasciitis and toxic shock syndrome. Tissue cultures simultaneously showed colonies with high hemolysis (HH) and low hemolysis (LH). Conversely, the HH and LH variants exhibited low capsule (LC) and high capsule (HC) expression, respectively. Molecular analysis demonstrated that the 2 GBS variants were of the same clonal origin. Genetic analysis found a 3-bp deletion in the covR gene of the HH/LC variant. Functionally, this isolate was associated with an increased growth rate in vitro and with higher interleukin-8 induction. However, in whole blood, opsonophagocytic and intracellular killing assays, the LH/HC phenotype demonstrated higher resistance to host phagocytic killing. In a murine model, LH/HC resulted in higher levels of bacteremia and increased host mortality rate. These findings demonstrate differences in GBS isolates of the same clonal origin but varying phenotypes. PMID:19193266

  13. Bacterial phenotype variants in group B streptococcal toxic shock syndrome.

    PubMed

    Sendi, Parham; Johansson, Linda; Dahesh, Samira; Van-Sorge, Nina M; Darenberg, Jessica; Norgren, Mari; Sjölin, Jan; Nizet, Victor; Norrby-Teglund, Anna

    2009-02-01

    We conducted genetic and functional analyses of isolates from a patient with group B streptococcal (GBS) necrotizing fasciitis and toxic shock syndrome. Tissue cultures simultaneously showed colonies with high hemolysis (HH) and low hemolysis (LH). Conversely, the HH and LH variants exhibited low capsule (LC) and high capsule (HC) expression, respectively. Molecular analysis demonstrated that the 2 GBS variants were of the same clonal origin. Genetic analysis found a 3-bp deletion in the covR gene of the HH/LC variant. Functionally, this isolate was associated with an increased growth rate in vitro and with higher interleukin-8 induction. However, in whole blood, opsonophagocytic and intracellular killing assays, the LH/HC phenotype demonstrated higher resistance to host phagocytic killing. In a murine model, LH/HC resulted in higher levels of bacteremia and increased host mortality rate. These findings demonstrate differences in GBS isolates of the same clonal origin but varying phenotypes.

  14. Adaptive evolution of complex innovations through stepwise metabolic niche expansion.

    PubMed

    Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A; Lercher, Martin J; Pál, Csaba; Papp, Balázs

    2016-05-20

    A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes.
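    A minimal sketch of the first analysis type, assuming the cobrapy package and its bundled E. coli core ("textbook") model; the exchange-reaction identifiers follow that model's conventions, and this is not the authors' code:

        from cobra.io import load_model

        model = load_model("textbook")          # E. coli core metabolic model

        medium = model.medium
        print("growth on glucose:", model.optimize().objective_value)

        # Change nutrient conditions: close glucose uptake, open fumarate uptake,
        # and re-optimize -- the kind of environment switch discussed above.
        medium["EX_glc__D_e"] = 0.0
        medium["EX_fum_e"] = 10.0
        model.medium = medium
        print("growth on fumarate:", model.optimize().objective_value)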

  15. Adaptive evolution of complex innovations through stepwise metabolic niche expansion

    PubMed Central

    Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A.; Lercher, Martin J.; Pál, Csaba; Papp, Balázs

    2016-01-01

    A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes. PMID:27197754

  16. An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics

    PubMed Central

    Eskinazi, Ilan

    2016-01-01

    Goal Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods SCMT interacts with the third party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
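    SCMT trains its networks in Matlab against FEBio samples; the following hedged Python sketch (scikit-learn, with an invented stand-in for the expensive contact model) illustrates the surrogate idea:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Hypothetical stand-in for an elastic contact model: contact force as
        # a nonlinear function of joint flexion angle and penetration depth.
        rng = np.random.default_rng(2)
        X = rng.uniform([-1.0, 0.0], [1.0, 5e-3], size=(2000, 2))  # angle (rad), depth (m)
        y = 1e6 * X[:, 1] ** 1.5 * (1.2 + 0.3 * np.cos(X[:, 0]))   # "expensive" model output

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                                 random_state=0).fit(X_tr, y_tr)
        print("R^2 on held-out samples:", surrogate.score(X_te, y_te))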

  17. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, and exact fit, and the lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field, so that our findings can be easily generalized to real settings. Applications of the methodology are demonstrated by empirical analyses of data from a well-known alcohol study. PMID:21563207
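    A hedged Python sketch of the modeling problem (statsmodels' ZeroInflatedPoisson on synthetic data; the paper's SCAD penalization is not reproduced), showing the bias that ignoring zero-inflation induces:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(3)
        n = 2000
        x = sm.add_constant(rng.normal(size=(n, 2)))
        mu = np.exp(x @ np.array([0.5, 1.0, 0.0]))      # second covariate is inert
        never_user = rng.random(n) < 0.4                # structural zeros
        y = np.where(never_user, 0, rng.poisson(mu))

        zip_fit = ZeroInflatedPoisson(y, x, exog_infl=np.ones((n, 1))).fit(
            method="bfgs", maxiter=500, disp=False)
        pois_fit = sm.Poisson(y, x).fit(disp=False)
        print(zip_fit.params)   # inflation intercept, then count slopes ~ (0.5, 1.0, 0.0)
        print(pois_fit.params)  # biased when zero inflation is ignored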

  18. Hierarchical demographic approaches for assessing invasion dynamics of non-indigenous species: An example using northern snakehead (Channa argus)

    USGS Publications Warehouse

    Jiao, Y.; Lapointe, N.W.R.; Angermeier, P.L.; Murphy, B.R.

    2009-01-01

    Models of species' demographic features are commonly used to understand population dynamics and inform management tactics. Hierarchical demographic models are ideal for the assessment of non-indigenous species because our knowledge of non-indigenous populations is usually limited, data on demographic traits often come from a species' native range, these traits vary among populations, and traits are likely to vary considerably over time as species adapt to new environments. Hierarchical models readily incorporate this spatiotemporal variation in species' demographic traits by representing demographic parameters as multi-level hierarchies. As is done for traditional non-hierarchical matrix models, sensitivity and elasticity analyses are used to evaluate the contributions of different life stages and parameters to estimates of population growth rate. We applied a hierarchical model to northern snakehead (Channa argus), a fish currently invading the eastern United States. We used a Monte Carlo approach to simulate uncertainties in the sensitivity and elasticity analyses and to project future population persistence under selected management tactics. We gathered key biological information on northern snakehead natural mortality, maturity and recruitment in its native Asian environment. We compared the model performance with and without hierarchy of parameters. Our results suggest that ignoring the hierarchy of parameters in demographic models may result in poor estimates of population size and growth and may lead to erroneous management advice. In our case, the hierarchy used multi-level distributions to simulate the heterogeneity of demographic parameters across different locations or situations. The probability that the northern snakehead population will increase and harm the native fauna is considerable. Our elasticity and prognostic analyses showed that intensive control efforts immediately prior to spawning and/or juvenile-dispersal periods would be more effective (and probably require less effort) than year-round control efforts. Our study demonstrates the importance of considering the hierarchy of parameters in estimating population growth rate and evaluating different management strategies for non-indigenous invasive species. © 2009 Elsevier B.V.
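    For readers unfamiliar with the underlying machinery, a minimal non-hierarchical sketch (numpy, with invented vital rates) of the matrix-model growth rate and elasticity calculations referred to above:

        import numpy as np

        # Hypothetical stage-structured projection matrix (juvenile, subadult, adult).
        A = np.array([[0.0, 0.5, 6.0],     # fecundities
                      [0.3, 0.0, 0.0],     # juvenile survival
                      [0.0, 0.4, 0.8]])    # subadult and adult survival

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        lam = eigvals.real[k]                          # population growth rate
        w = np.abs(eigvecs[:, k].real)                 # stable stage structure

        eigvals_t, eigvecs_t = np.linalg.eig(A.T)      # left eigenvectors of A
        kt = np.argmax(eigvals_t.real)
        v = np.abs(eigvecs_t[:, kt].real)              # reproductive values

        # Elasticities: proportional sensitivity of lambda to each matrix entry.
        sens = np.outer(v, w) / (v @ w)
        elas = (A / lam) * sens
        print("lambda:", lam)
        print("elasticity matrix:\n", elas.round(3))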

  19. Seasonal variability of atmospheric tides in the mesosphere and lower thermosphere: meteor radar data and simulations

    NASA Astrophysics Data System (ADS)

    Pokhotelov, Dimitry; Becker, Erich; Stober, Gunter; Chau, Jorge L.

    2018-06-01

    Thermal tides play an important role in the global atmospheric dynamics and provide a key mechanism for the forcing of thermosphere-ionosphere dynamics from below. A method for extracting tidal contributions, based on adaptive filtering, is applied to analyse multi-year observations of mesospheric winds from ground-based meteor radars located in northern Germany and Norway. The observed seasonal variability of tides is compared to simulations with the Kühlungsborn Mechanistic Circulation Model (KMCM). It is demonstrated that the model provides a reasonable representation of the tidal amplitudes, though substantial differences from observations are also noticed. The limitations of applying a conventionally coarse-resolution model in combination with parametrisation of gravity waves are discussed. The work is aimed towards the development of an ionospheric model driven by the dynamics of the KMCM.
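    Tidal extraction of this kind ultimately reduces to harmonic regression; a minimal sketch on synthetic hourly winds (not the adaptive-filter implementation used in the paper):

        import numpy as np

        # Hypothetical hourly zonal-wind series containing diurnal (24 h) and
        # semidiurnal (12 h) tides plus noise.
        rng = np.random.default_rng(4)
        t = np.arange(0, 10 * 24.0)                     # hours
        u = (15 * np.cos(2 * np.pi * t / 24 - 1.0)
             + 25 * np.cos(2 * np.pi * t / 12 - 0.3)
             + rng.normal(0, 5, t.size))

        # Least-squares harmonic regression: one cosine/sine pair per tidal
        # period (adding 8 h would capture the terdiurnal tide).
        periods = [24.0, 12.0]
        X = np.column_stack([f(2 * np.pi * t / p) for p in periods
                             for f in (np.cos, np.sin)] + [np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, u, rcond=None)
        for i, p in enumerate(periods):
            a, b = coef[2 * i], coef[2 * i + 1]
            print(f"{p:4.0f} h tide: amplitude {np.hypot(a, b):5.1f} m/s")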

  20. A canonical neural mechanism for behavioral variability

    PubMed Central

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-01-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5–6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these ‘universal' statistics. PMID:28530225

  1. The NASA/Industry Design Analysis Methods for Vibrations (DAMVIBS) Program - A government overview. [of rotorcraft technology development using finite element method

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.

    1992-01-01

    An overview is presented of government contributions to the program called Design Analysis Methods for Vibrations (DAMVIBS), which attempted to develop finite-element-based analyses of rotorcraft vibrations. NASA initiated the program with a finite-element modeling program for the CH-47D tandem-rotor helicopter. The DAMVIBS program emphasized four areas: airframe finite-element modeling, difficult-components studies, coupled rotor-airframe vibrations, and airframe structural optimization. Key accomplishments of the program include industrywide standards for modeling metal and composite airframes, improved industrial designs for vibrations, and the identification of critical structural contributors to airframe vibratory responses. The program also demonstrated the value of incorporating secondary modeling details in improving correlation, and its findings provide the basis for an improved finite-element-based dynamics design-analysis capability.

  2. (F)UV Spectral Analysis of Hot, Hydrogen-Rich Central Stars of Planetary Nebulae

    NASA Astrophysics Data System (ADS)

    Ziegler, M.; Rauch, T.; Werner, K.; Kruk, J. W.

    2010-11-01

    Metal abundances of CSPNe are not well known although they provide important constraints on AGB nucleosynthesis. We aim to determine metal abundances of two hot, hydrogen-rich CSPNe (namely of A35 and NGC3587, the latter also known as M97 or the Owl Nebula) and to derive Teff and log g precisely from high-resolution, high-S/N (far-) ultraviolet observations obtained with FUSE and HST/STIS. For this purpose, we utilize NLTE model atmospheres calculated with TMAP, the Tübingen Model Atmosphere Package. Due to strong line absorption of the ISM, simultaneous modeling of interstellar features has become a standard tool in our analyses. We present preliminary results, demonstrating the importance of combining stellar and interstellar models, in order to clearly identify and measure the strengths of strategic photospheric lines.

  3. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    DOE PAGES

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...

    2014-02-24

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
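    A toy emulator in the same spirit (invented "GCM" and scenarios; the paper's statistical model is more sophisticated): temperature is regressed on the instantaneous and exponentially "remembered" log-CO2 forcing, then evaluated on an unseen scenario:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(5)

        def synthetic_run(growth):
            """One precomputed 'GCM run': a CO2 trajectory and the warming it
            produces, with a lagged memory of past forcing plus noise."""
            years = np.arange(100)
            co2 = 380 * np.exp(growth * years)
            forcing = np.log(co2 / 280.0)
            lagged = np.zeros_like(forcing)
            for i in range(1, len(forcing)):
                lagged[i] = 0.97 * lagged[i - 1] + 0.03 * forcing[i]
            return forcing, lagged, 3.0 * lagged + rng.normal(0, 0.05, years.size)

        # Fit the emulator on two training scenarios, test on a third.
        train = [synthetic_run(g) for g in (0.004, 0.008)]
        X = np.concatenate([np.column_stack([f, l]) for f, l, _ in train])
        y = np.concatenate([T for _, _, T in train])
        emulator = LinearRegression().fit(X, y)

        f3, l3, T3 = synthetic_run(0.006)
        pred = emulator.predict(np.column_stack([f3, l3]))
        print("RMSE on unseen scenario:", np.sqrt(np.mean((pred - T3) ** 2)))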

  4. Aeroelastic stability analyses of two counter rotating propfan designs for a cruise missile model

    NASA Technical Reports Server (NTRS)

    Mahajan, Aparajit J.; Lucero, John M.; Mehmed, Oral; Stefko, George L.

    1992-01-01

    A modal aeroelastic analysis combining structural and aerodynamic models is applied to counterrotating propfans to evaluate their structural integrity for wind-tunnel testing. The aeroelastic analysis code is an extension of the 2D analysis code called the Aeroelastic Stability and Response of Propulsion Systems. Rotational speed and freestream Mach number are the parameters for calculating the stability of the two blade designs with a modal method combining a finite-element structural model with 2D steady and unsteady cascade aerodynamic models. The model demonstrates convergence to the least stable aeroelastic mode, describes the effects of a nonuniform inflow, and permits the modification of geometry and rotation. The analysis shows that the propfan designs are suitable for the wind-tunnel test and confirms that the propfans should be flutter-free under the range of conditions of the testing.

  5. Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.

    PubMed

    Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan

    2018-05-21

    This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
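    The core idea, shown for brevity on a linear Kalman update rather than the paper's UKF (the chi-square gate and all numbers are illustrative, and the random-weighting refinement is omitted):

        import numpy as np

        def fading_kalman_update(x, P, z, H, R, chi2_gate=3.84):
            """One measurement update with an adaptive scaling (fading) factor:
            if the innovation fails a Mahalanobis-distance test, inflate the
            predicted covariance P so the filter trusts the measurement more."""
            y = z - H @ x                            # innovation
            S = H @ P @ H.T + R
            d2 = float(y.T @ np.linalg.inv(S) @ y)   # squared Mahalanobis distance
            if d2 > chi2_gate:                       # model error detected
                lam = d2 / chi2_gate                 # scaling factor > 1
                P = lam * P
                S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_new = x + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new

        # Usage on a one-state toy system:
        x, P = np.array([0.0]), np.eye(1)
        H, R = np.eye(1), np.array([[0.1]])
        print(fading_kalman_update(x, P, np.array([2.5]), H, R))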

  6. The confluence model: birth order as a within-family or between-family dynamic?

    PubMed

    Zajonc, R B; Sulloway, Frank J

    2007-09-01

    The confluence model explains birth-order differences in intellectual performance by quantifying the changing dynamics within the family. Wichman, Rodgers, and MacCallum (2006) claimed that these differences are a between-family phenomenon--and hence are not directly related to birth order itself. The study design and analyses presented by Wichman et al. nevertheless suffer from crucial shortcomings, including their use of unfocused tests, which cause statistically significant trends to be overlooked. In addition, Wichman et al. treated birth-order effects as a linear phenomenon thereby ignoring the confluence model's prediction that these two samples may manifest opposing results based on age. This article cites between- and within-family data that demonstrate systematic birth-order effects as predicted by the confluence model. The corpus of evidence invoked here offers strong support for the assumption of the confluence model that birth-order differences in intellectual performance are primarily a within-family phenomenon.

  7. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. The Hue-Saturation-Intensity (HSI) model is a color model commonly used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, where the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships among pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
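    A classical (non-quantum) sketch of two ingredients named above, RGB-to-HSI conversion and logistic-map diffusion of the intensity component (parameter values illustrative):

        import numpy as np

        def rgb_to_hsi(rgb):
            """Convert an RGB image (floats in [0,1]) to HSI components."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            i = (r + g + b) / 3.0
            s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-12)
            num = 0.5 * ((r - g) + (r - b))
            den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
            theta = np.arccos(np.clip(num / den, -1, 1))
            h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)
            return h, s, i

        def logistic_keystream(x0, mu, n):
            """Chaotic logistic-map sequence used to diffuse pixel relationships."""
            x, out = x0, np.empty(n)
            for k in range(n):
                x = mu * x * (1.0 - x)
                out[k] = x
            return out

        img = np.random.default_rng(6).random((8, 8, 3))
        h, s, i = rgb_to_hsi(img)
        key = logistic_keystream(0.3567, 3.99, i.size).reshape(i.shape)
        i_diffused = np.mod(i + key, 1.0)   # diffuse intensity, keep hue intact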

  8. Memory-induced nonlinear dynamics of excitation in cardiac diseases.

    PubMed

    Landaw, Julian; Qu, Zhilin

    2018-04-01

    Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.
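    A minimal sketch of such an iterated map with memory (the functional form and parameters are illustrative, not those of any specific myocyte model):

        import numpy as np

        # Iterated map for action potential duration (APD) a_n with a memory
        # variable M_n that accumulates with pacing history.
        A, B, C, D = 88.0, 122.0, 40.0, 28.0   # restitution parameters (ms)
        tau, alpha = 180.0, 0.2                # memory time constant and strength

        def iterate(bcl, n=300, a=200.0, m=0.3):
            """Iterate APD and memory at a fixed pacing cycle length `bcl` (ms)."""
            apds = []
            for _ in range(n):
                d = max(bcl - a, 1.0)                              # diastolic interval
                m = (1 - (1 - m) * np.exp(-a / tau)) * np.exp(-d / tau)
                a = (1 - alpha * m) * (A + B / (1 + np.exp(-(d - C) / D)))
                apds.append(a)
            return apds[-20:]                                      # steady-state orbit

        for bcl in (400, 300, 250, 220):
            orbit = np.round(iterate(bcl), 1)
            print(bcl, "ms:", sorted(set(orbit)))  # 1 value = 1:1; 2 values = alternans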

  9. Integrated modeling approach for optimal management of water, energy and food security nexus

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Vesselinov, Velimir V.

    2017-03-01

    Water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production/delivery may also produce environmental impacts; as a result, green-house-gas emission control will impact WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the tradeoffs among various sectors, generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing productions costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components including energy supply, electricity generation, water supply-demand, food production as well as mitigation of environmental impacts. WEFO is demonstrated to solve a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis and their effects on total system cost are quantified. The obtained results demonstrate how these types of analyses can be helpful for decision-makers and stakeholders to make cost-effective decisions for optimal WEF management.
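    WEFO itself is not reproduced here; the following toy linear program (scipy, all coefficients invented) illustrates the kind of coupled WEF allocation such tools solve:

        from scipy.optimize import linprog

        # Decision variables: groundwater x0 and surface water x1 (m^3),
        # gas power x2 (MWh), irrigated grain x3 (t); minimize total cost
        # subject to coupling constraints (power needs water; grain needs
        # water and power).
        c = [0.05, 0.02, 60.0, 150.0]             # unit costs
        A_ub = [
            [-1, -1, 1.5, 450.0],                 # water balance: supplies cover uses
            [0, 0, -1, 0.4],                      # power balance
            [1, 0, 0, 0],                         # groundwater pumping cap
        ]
        b_ub = [0.0, 0.0, 500.0]
        A_eq = [[0, 0, 0, 1]]                     # food demand met exactly
        b_eq = [100.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * 4)
        print(res.x, res.fun)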

  10. Memory-induced nonlinear dynamics of excitation in cardiac diseases

    NASA Astrophysics Data System (ADS)

    Landaw, Julian; Qu, Zhilin

    2018-04-01

    Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.

  11. Adsorption of selected pharmaceuticals and an endocrine disrupting compound by granular activated carbon. 2. Model prediction.

    PubMed

    Yu, Zirui; Peldszus, Sigrid; Huck, Peter M

    2009-03-01

    The adsorption of two representative pharmaceutically active compounds (PhACs), naproxen and carbamazepine, and one endocrine disrupting compound (EDC), nonylphenol, was studied in pilot-scale granular activated carbon (GAC) adsorbers using post-sedimentation (PS) water from a full-scale drinking water treatment plant. Acidic naproxen broke through fastest while nonylphenol was removed best, which was consistent with the degree to which fouling affected compound removals. Model predictions and experimental data were generally in good agreement for all three compounds, which demonstrated the effectiveness and robustness of the pore and surface diffusion model (PSDM), used in combination with the time-variable parameter approach, for predicting removals at environmentally relevant concentrations (i.e., ng/L range). Sensitivity analyses suggested that accurate determination of film diffusion coefficients is critical for predicting breakthrough for naproxen and carbamazepine, in particular when high removals are targeted. Model simulations demonstrated that GAC carbon usage rates (CURs) for naproxen were substantially influenced by the empty bed contact time (EBCT) at the investigated conditions. Model-based comparisons between GAC CURs and minimum CURs for powdered activated carbon (PAC) applications suggested that PAC would be most appropriate for achieving 90% removal of naproxen, whereas GAC would be more suitable for nonylphenol.

  12. Applying phasor approach analysis of multiphoton FLIM measurements to probe the metabolic activity of three-dimensional in vitro cell culture models

    PubMed Central

    Lakner, Pirmin H.; Monaghan, Michael G.; Möller, Yvonne; Olayioye, Monilola A.; Schenke-Layland, Katja

    2017-01-01

    Fluorescence lifetime imaging microscopy (FLIM) can measure and discriminate endogenous fluorophores present in biological samples. This study seeks to identify FLIM as a suitable method to non-invasively detect a shift in cellular metabolic activity towards glycolysis or oxidative phosphorylation in 3D Caco-2 models of colorectal carcinoma. These models were treated with potassium cyanide or hydrogen peroxide as controls, and epidermal growth factor (EGF) as a physiologically-relevant influencer of cell metabolic behaviour. Autofluorescence, attributed to nicotinamide adenine dinucleotide (NADH), was induced by two-photon laser excitation and its lifetime decay was analysed using a standard multi-exponential decay approach and also a novel custom-written code for phasor-based analysis. While both methods enabled detection of a statistically significant shift of metabolic activity towards glycolysis using potassium cyanide, and oxidative phosphorylation using hydrogen peroxide, employing the phasor approach required fewer initial assumptions to quantify the lifetimes of contributing fluorophores. 3D Caco-2 models treated with EGF had increased glucose consumption, production of lactate, and presence of ATP. FLIM analyses of these cultures revealed a significant shift in the contribution of protein-bound NADH towards free NADH, indicating increased glycolysis-mediated metabolic activity. These data demonstrate that FLIM is suitable to interpret metabolic changes in 3D in vitro models. PMID:28211922
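    The phasor transform itself is compact; a minimal sketch (numpy, synthetic single-exponential decays at typical free and bound NADH lifetimes):

        import numpy as np

        def decay_to_phasor(counts, t, omega):
            """Phasor coordinates (g, s) of a fluorescence decay histogram:
            the cosine and sine Fourier components at the laser repetition
            (angular) frequency, normalized by total intensity."""
            total = counts.sum()
            g = (counts * np.cos(omega * t)).sum() / total
            s = (counts * np.sin(omega * t)).sum() / total
            return g, s

        # For a single-exponential decay with lifetime tau, the phasor lies on
        # the universal semicircle: g = 1/(1+(w*tau)^2), s = w*tau/(1+(w*tau)^2).
        t = np.linspace(0, 50e-9, 2000)            # 50 ns window
        omega = 2 * np.pi * 80e6                   # 80 MHz excitation
        for tau in (0.4e-9, 2.5e-9):               # ~free vs. bound NADH lifetimes
            g, s = decay_to_phasor(np.exp(-t / tau), t, omega)
            print(f"tau={tau*1e9:.1f} ns -> g={g:.3f}, s={s:.3f}")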

  13. Applying phasor approach analysis of multiphoton FLIM measurements to probe the metabolic activity of three-dimensional in vitro cell culture models.

    PubMed

    Lakner, Pirmin H; Monaghan, Michael G; Möller, Yvonne; Olayioye, Monilola A; Schenke-Layland, Katja

    2017-02-13

    Fluorescence lifetime imaging microscopy (FLIM) can measure and discriminate endogenous fluorophores present in biological samples. This study seeks to identify FLIM as a suitable method to non-invasively detect a shift in cellular metabolic activity towards glycolysis or oxidative phosphorylation in 3D Caco-2 models of colorectal carcinoma. These models were treated with potassium cyanide or hydrogen peroxide as controls, and epidermal growth factor (EGF) as a physiologically-relevant influencer of cell metabolic behaviour. Autofluorescence, attributed to nicotinamide adenine dinucleotide (NADH), was induced by two-photon laser excitation and its lifetime decay was analysed using a standard multi-exponential decay approach and also a novel custom-written code for phasor-based analysis. While both methods enabled detection of a statistically significant shift of metabolic activity towards glycolysis using potassium cyanide, and oxidative phosphorylation using hydrogen peroxide, employing the phasor approach required fewer initial assumptions to quantify the lifetimes of contributing fluorophores. 3D Caco-2 models treated with EGF had increased glucose consumption, production of lactate, and presence of ATP. FLIM analyses of these cultures revealed a significant shift in the contribution of protein-bound NADH towards free NADH, indicating increased glycolysis-mediated metabolic activity. These data demonstrate that FLIM is suitable to interpret metabolic changes in 3D in vitro models.

  14. Numerical analyses of ventilated cavitation over a 2-D NACA0015 hydrofoil using two turbulence modeling methods

    NASA Astrophysics Data System (ADS)

    Yang, Dan-dan; Yu, An; Ji, Bin; Zhou, Jia-jian; Luo, Xian-wu

    2018-04-01

    The present paper studies the ventilated cavitation over a NACA0015 hydrofoil by numerical methods. The corresponding cavity evolutions are obtained at three ventilation rates by using the level set method. To depict the complicated turbulent flow structure, the filter-based density corrected model (FBDCM) and the modified partially-averaged Navier-Stokes (MPANS) model are applied in the present numerical analyses. The cavitation shedding dynamics predicted by both turbulence models agree fairly well with the experimental data. It is also noted that the shedding frequency and the super cavity length predicted by the MPANS method are closer to the experimental data than those predicted by the FBDCM model. The simulation results show that in the ventilated cavitation, the vapor cavity and the air cavity have the same shedding frequency. As the ventilation rate increases, the vapor cavity is depressed rapidly. The cavitation-vortex interaction in the ventilated cavitation is studied based on the vorticity transport equation (VTE) and the Lagrangian coherent structure (LCS). These results demonstrate that the vortex dilatation and baroclinic torque terms are highly dependent on the evolution of the cavitation. In addition, from the LCSs and the tracer particles in the flow field, one can follow the transition from the attached cavity to the cloud cavity.

  15. A wind energy benchmark for ABL modelling of a diurnal cycle with a nocturnal low-level jet: GABLS3 revisited

    DOE PAGES

    Rodrigo, J. Sanz; Churchfield, M.; Kosović, B.

    2016-10-03

    The third GEWEX Atmospheric Boundary Layer Studies (GABLS3) model intercomparison study, around the Cabauw met tower in the Netherlands, is revisited as a benchmark for wind energy atmospheric boundary layer (ABL) models. The case was originally developed by the boundary layer meteorology community, interested in analysing the performance of single-column and large-eddy simulation atmospheric models dealing with a diurnal cycle leading to the development of a nocturnal low-level jet. The case addresses fundamental questions related to the definition of the large-scale forcing, the interaction of the ABL with the surface and the evaluation of model results with observations. The characterization of mesoscale forcing for asynchronous microscale modelling of the ABL is discussed based on momentum budget analysis of WRF simulations. Then a single-column model is used to demonstrate the added value of incorporating different forcing mechanisms in microscale models. The simulations are evaluated in terms of wind energy quantities of interest.

  16. Reduced-Order Aerothermoelastic Analysis of Hypersonic Vehicle Structures

    NASA Astrophysics Data System (ADS)

    Falkiewicz, Nathan J.

    Design and simulation of hypersonic vehicles require consideration of a variety of disciplines due to the highly coupled nature of the flight regime. In order to capture all of the potential effects on vehicle dynamics, one must consider the aerodynamics, aerodynamic heating, heat transfer, and structural dynamics as well as the interactions between these disciplines. The problem is further complicated by the large computational expense involved in capturing all of these effects and their interactions in a full-order sense. While high-fidelity modeling techniques exist for each of these disciplines, the use of such techniques is computationally infeasible in a vehicle design and control system simulation setting for such a highly coupled problem. Early in the design stage, many iterations of analyses may need to be carried out as the vehicle design matures, thus requiring quick analysis turnaround time. Additionally, the number of states used in the analyses must be small enough to allow for efficient control simulation and design. As a result, alternatives to full-order models must be considered. This dissertation presents a fully coupled, reduced-order aerothermoelastic framework for the modeling and analysis of hypersonic vehicle structures. The reduced-order transient thermal solution is a modal solution based on the proper orthogonal decomposition. The reduced-order structural dynamic model is based on projection of the equations of motion onto a Ritz modal subspace that is identified a priori. The reduced-order models are assembled into a time-domain aerothermoelastic simulation framework which uses a partitioned time-marching scheme to account for the disparate time scales of the associated physics. The aerothermoelastic modeling framework is outlined and the formulations associated with the unsteady aerodynamics, aerodynamic heating, transient thermal, and structural dynamics are outlined. Results demonstrate the accuracy of the reduced-order transient thermal and structural dynamic models under variation in boundary conditions and flight conditions. The framework is applied to representative hypersonic vehicle control surface structures and a variety of studies are conducted to assess the impact of aerothermoelastic effects on hypersonic vehicle dynamics. The results presented in this dissertation demonstrate the ability of the proposed framework to perform efficient aerothermoelastic analysis.
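    The thermal reduced-order model rests on the proper orthogonal decomposition; a minimal sketch (numpy, with an invented snapshot matrix) of extracting and using a POD basis:

        import numpy as np

        # Hypothetical snapshot matrix: a 5000-node temperature field sampled
        # at 200 instants of a transient thermal solution.
        x = np.linspace(0, 1, 5000)
        snapshots = np.column_stack([
            np.sin(np.pi * x) * np.exp(-0.01 * k)
            + 0.1 * np.sin(3 * np.pi * x) * np.exp(-0.05 * k)
            for k in range(200)
        ])

        # POD: left singular vectors are the modes; the singular-value energy
        # content decides how many modes the reduced-order model keeps.
        U, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(sigma**2) / np.sum(sigma**2)
        r = int(np.searchsorted(energy, 0.9999)) + 1
        print("modes retained:", r)                 # two dominant modes by construction

        # Reduced coordinates: project a snapshot onto the retained basis.
        basis = U[:, :r]
        q = basis.T @ snapshots[:, 0]               # r numbers instead of 5000
        print("reconstruction error:", np.linalg.norm(basis @ q - snapshots[:, 0]))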

  17. Commentary: Demonstrating Cost-Effectiveness in Pediatric Psychology

    PubMed Central

    2014-01-01

    Objective Changes in the health care system and payment plans will likely require pediatric psychologists to illustrate the impact of their services. Cost-effectiveness analyses are one method of demonstrating the potential economic benefits of our services but are rarely used by pediatric psychologists. Method A hypothetical cost-effectiveness analysis was conducted, comparing the costs and outcomes between a behavioral adherence intervention and no intervention for youth with acute lymphoblastic leukemia. Results Results illustrate how pediatric psychologists can use cost-effectiveness analyses to demonstrate the economic impact of their work. Conclusions Efforts to conduct economic analyses could allow pediatric psychologists to advocate for their services. Implications and future directions are discussed. PMID:24752732
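    The arithmetic at the heart of such an analysis is simple; a minimal sketch with hypothetical figures:

        # Incremental cost-effectiveness ratio (ICER) of a behavioral adherence
        # intervention versus no intervention (all figures invented).
        cost_intervention, cost_usual = 1200.0, 300.0      # $ per patient
        effect_intervention, effect_usual = 0.92, 0.78     # e.g., adherence rate

        icer = (cost_intervention - cost_usual) / (effect_intervention - effect_usual)
        print(f"ICER: ${icer:,.0f} per unit of adherence gained")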

  18. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    PubMed Central

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

    Background and Objective To clarify mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet the expectations of the researchers. Most detected genetic associations have not reached a genome-wide level of statistical significance and suffered from a lack of replication in studies of independent populations. The reasons for slow progress in this research area include the low efficiency of statistical methods used in data analyses, the genetic heterogeneity of aging- and longevity-related traits, the possibility of pleiotropic (e.g., age-dependent) effects of genetic variants on such traits, underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study, and the weakness of a conceptual biological framework that does not fully account for the above-mentioned factors. One more limitation of the conducted studies is that they did not fully realize the potential of longitudinal data, which allow for evaluating how genetic influences on life span are mediated by physiological variables and other biomarkers during the life course. The objective of this paper is to address these issues. Data and Methods We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort corresponding to different quality control (QC) procedures and used one subset of selected genetic variants for further analyses. We used a simulation study to show that our approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants. We used a stochastic process model of human mortality and aging to investigate genetic influences on hidden biomarkers of aging and on the dynamic interaction between aging and longevity. We investigated properties of genes related to the selected variants and their roles in signaling and metabolic pathways. Results We showed that the use of different QC procedures results in different sets of genetic variants associated with life span. We selected 24 genetic variants negatively associated with life span. We showed that joint analyses of genetic data at the time of bio-specimen collection and follow-up data substantially improved the significance of associations of the selected 24 SNPs with life span. We also showed that aging-related changes in physiological variables and in hidden biomarkers of aging differ between the groups of carriers and non-carriers of the selected variants. Conclusions The results of these analyses demonstrated the benefits of using biodemographic models and methods in genetic association studies of these traits. Our findings showed that the absence of a large number of genetic variants with deleterious effects may make a substantial contribution to exceptional longevity. These effects are dynamically mediated by a number of physiological variables and hidden biomarkers of aging. The results of this research demonstrated the benefits of using integrative statistical models of mortality risks in genetic studies of human aging and longevity. PMID:27773987

  19. The Application of Observational Practice and Educational Networking in Simulation-Based and Distributed Medical Education Contexts.

    PubMed

    Welsher, Arthur; Rojas, David; Khan, Zain; VanderBeek, Laura; Kapralos, Bill; Grierson, Lawrence E M

    2018-02-01

    Research has revealed that individuals can improve technical skill performance by viewing demonstrations modeled by either expert or novice performers. These findings support the development of video-based observational practice communities that augment simulation-based skill education and connect geographically distributed learners. This study explores the experimental replicability of the observational learning effect when demonstrations are sampled from a community of distributed learners and serves as a context for understanding learner experiences within this type of training protocol. Participants from 3 distributed medical campuses engaged in a simulation-based learning study of the elliptical excision in which they completed a video-recorded performance before being assigned to 1 of 3 groups for a 2-week observational practice intervention. One group observed expert demonstrations, another observed novice demonstrations, and the third observed a combination of both. Participants returned for posttesting immediately and 1 month after the intervention. Participants also engaged in interviews regarding their perceptions of the usability and relevance of video-based observational practice to clinical education. Checklist (P < 0.0001) and global rating (P < 0.0001) measures indicate that participants, regardless of group assignment, improved after the intervention and after a 1-month retention period. Analyses revealed no significant differences between groups. Qualitative analyses indicate that participants perceived the observational practice platform to be usable, relevant, and potentially improved with enhanced feedback delivery. Video-based observational practice involving expert and/or novice demonstrations enhances simulation-based skill learning in a group of geographically distributed trainees. These findings support the use of Internet-mediated observational learning communities in distributed and simulation-based medical education contexts.

  20. Comparison of full 3-D, thin-film 3-D, and thin-film plate analyses of a postbuckled embedded delamination

    NASA Technical Reports Server (NTRS)

    Whitcomb, John D.

    1989-01-01

    Strain-energy release rates are often used to predict when delamination growth will occur in laminates under compression. Because of the inherently high computational cost of performing such analyses, less rigorous analyses such as thin-film plate analysis were used. The assumptions imposed by plate theory restrict the analysis to the calculation of the total strain-energy release rate, G(sub t). The objective is to determine the accuracy of thin-film plate analysis by comparing the distribution of G(sub t) calculated using fully three-dimensional (3D), thin-film 3D, and thin-film plate analyses. Thin-film 3D analysis is the same as thin-film plate analysis, except that 3D analysis is used to model the sublaminate. The 3D stress analyses were performed using the finite element program NONLIN3D. The plate analysis results were obtained from published data, which used STAGS. Strain-energy release rates were calculated using variations of the virtual crack closure technique. The results demonstrate that thin-film plate analysis can predict the distribution of G(sub t) quite well, at least for the configurations considered. Also, these results verify the accuracy of the strain-energy release rate procedure for plate analysis.
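    For reference, a minimal sketch of the mode I virtual crack closure calculation (all nodal quantities invented):

        def vcct_mode_I(f_y, delta_v, delta_a, width):
            """Virtual crack closure technique, mode I: the energy to close the
            crack over one element equals half the crack-tip nodal force times
            the relative opening displacement behind the tip, per closed area."""
            return f_y * delta_v / (2.0 * delta_a * width)

        # Hypothetical finite element quantities at a delamination front node:
        G_I = vcct_mode_I(f_y=1.2,         # N, nodal force at the crack tip
                          delta_v=2.0e-5,  # m, opening displacement one node back
                          delta_a=1.0e-4,  # m, element length at the front
                          width=1.0e-3)    # m, width associated with the node
        print(f"G_I = {G_I:.1f} J/m^2")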
